WorldWideScience

Sample records for unit ucc computer

  1. Cellodextrin utilization by Bifidobacterium breve UCC2003.

    NARCIS (Netherlands)

    Pokusaeva, K.; Motherway, M.O.; Zomer, A.L.; Macsharry, J.; Fitzgerald, G.F.; Sinderen, D. van

    2011-01-01

    Cellodextrins, the incomplete hydrolysis products from insoluble cellulose, are accessible as a carbon source to certain members of the human gut microbiota, such as Bifidobacterium breve UCC2003. Transcription of the cldEFGC gene cluster of B. breve UCC2003 was shown to be induced upon growth on

  2. Cellodextrin utilization by Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Pokusaeva, Karina; O'Connell-Motherway, Mary; Zomer, Aldert; Macsharry, John; Fitzgerald, Gerald F; van Sinderen, Douwe

    2011-03-01

    Cellodextrins, the incomplete hydrolysis products from insoluble cellulose, are accessible as a carbon source to certain members of the human gut microbiota, such as Bifidobacterium breve UCC2003. Transcription of the cldEFGC gene cluster of B. breve UCC2003 was shown to be induced upon growth on cellodextrins, implicating this cluster in the metabolism of these sugars. Phenotypic analysis of a B. breve UCC2003::cldE insertion mutant confirmed that the cld gene cluster is exclusively required for cellodextrin utilization by this commensal. Moreover, our results suggest that transcription of the cld cluster is controlled by a LacI-type regulator encoded by cldR, located immediately upstream of cldE. Gel mobility shift assays using purified CldR(His) (produced by the incorporation of a His(12)-encoding sequence into the 3' end of the cldC gene) indicate that the cldEFGC promoter is subject to negative control by CldR(His), which binds to two inverted repeats. Analysis by high-performance anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD) of medium samples obtained during growth of B. breve UCC2003 on a mixture of cellodextrins revealed its ability to utilize cellobiose, cellotriose, cellotetraose, and cellopentaose, with cellotriose apparently representing the preferred substrate. The cldC gene of the cld operon of B. breve UCC2003 is, to the best of our knowledge, the first described bifidobacterial β-glucosidase exhibiting hydrolytic activity toward various cellodextrins.

  3. Contract Formation and Performance under the UCC and CISG: A Comparative Case Study

    Science.gov (United States)

    Saunders, Kurt M.; Rymsza, Leonard

    2015-01-01

    Contracts for the sale of goods in the United States are governed by the Uniform Commercial Code (UCC) in every state but one. When one of the parties to the contract is based in another country, however, the conflict of laws principles that will determine which country's law governs the transaction can be confounding. In addition, the commercial…

  4. How Do We Really Compute with Units?

    Science.gov (United States)

    Fiedler, B. H.

    2010-01-01

    The methods that we teach students for computing with units of measurement are often not consistent with the practice of professionals. For professionals, the vast majority of computations with quantities of measure are performed within programs on electronic computers, for which an accounting for the units occurs only once, in the design of the…
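
    The abstract above is truncated, but its central observation, that professional code accounts for units once, at the program boundary, can be made concrete. Below is a minimal, hypothetical Python sketch of that practice: the conversion factor is applied on input, and every quantity inside the computation is a plain number in SI units. The function name and the braking-distance example are my own illustration, not from the article.

        # Hypothetical illustration: units are accounted for once, on input.
        # Internally, every quantity is a plain float in SI units.
        MPH_TO_MS = 0.44704  # miles/hour -> metres/second, applied at the boundary

        def stopping_distance_m(speed_mph: float, decel_ms2: float = 7.0) -> float:
            """Braking distance d = v^2 / (2a), computed entirely in SI units."""
            v = speed_mph * MPH_TO_MS          # convert on entry; no units carried after this
            return v * v / (2.0 * decel_ms2)   # result is metres by design of the program

        print(round(stopping_distance_m(60.0), 1))  # ~51.4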

  5. TEL4Health research at University College Cork (UCC)

    NARCIS (Netherlands)

    Drachsler, Hendrik

    2013-01-01

    Drachsler, H. (2013, 12 May). TEL4Health research at University College Cork (UCC). Invited talk given at Application of Science to Simulation, Education and Research on Training for Health Professionals Centre (ASSERT for Health Care), Cork, Ireland.

  6. A computer controlled tele-cobalt unit

    International Nuclear Information System (INIS)

    Brace, J.A.

    1982-01-01

    A computer controlled cobalt treatment unit was commissioned for treating patients in January 1980. Initially the controlling computer was a minicomputer, but now the therapy unit is controlled by a microcomputer. The treatment files, which specify the movements and configurations necessary to deliver the prescribed dose, are produced on the minicomputer and then transferred to the microcomputer using minitape cartridges. The treatment unit itself is based on a standard cobalt unit with a few additional features, e.g. the drive motors can be controlled either by the computer or manually. Since the treatment unit is used for both manual and automatic treatments, the operational procedure under computer control is made to follow closely the manual procedure for a single field treatment. The necessary safety features, which protect against human, hardware and software errors, as well as the advantages and disadvantages of computer controlled radiotherapy, are discussed

  7. Cellodextrin Utilization by Bifidobacterium breve UCC2003

    Science.gov (United States)

    Pokusaeva, Karina; O'Connell-Motherway, Mary; Zomer, Aldert; MacSharry, John; Fitzgerald, Gerald F.; van Sinderen, Douwe

    2011-01-01

    Cellodextrins, the incomplete hydrolysis products from insoluble cellulose, are accessible as a carbon source to certain members of the human gut microbiota, such as Bifidobacterium breve UCC2003. Transcription of the cldEFGC gene cluster of B. breve UCC2003 was shown to be induced upon growth on cellodextrins, implicating this cluster in the metabolism of these sugars. Phenotypic analysis of a B. breve UCC2003::cldE insertion mutant confirmed that the cld gene cluster is exclusively required for cellodextrin utilization by this commensal. Moreover, our results suggest that transcription of the cld cluster is controlled by a LacI-type regulator encoded by cldR, located immediately upstream of cldE. Gel mobility shift assays using purified CldRHis (produced by the incorporation of a His12-encoding sequence into the 3′ end of the cldC gene) indicate that the cldEFGC promoter is subject to negative control by CldRHis, which binds to two inverted repeats. Analysis by high-performance anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD) of medium samples obtained during growth of B. breve UCC2003 on a mixture of cellodextrins revealed its ability to utilize cellobiose, cellotriose, cellotetraose, and cellopentaose, with cellotriose apparently representing the preferred substrate. The cldC gene of the cld operon of B. breve UCC2003 is, to the best of our knowledge, the first described bifidobacterial β-glucosidase exhibiting hydrolytic activity toward various cellodextrins. PMID:21216899

  8. Citizens unite for computational immunology!

    Science.gov (United States)

    Belden, Orrin S; Baker, Sarah Catherine; Baker, Brian M

    2015-07-01

    Recruiting volunteers who can provide computational time, programming expertise, or puzzle-solving talent has emerged as a powerful tool for biomedical research. Recent projects demonstrate the potential for such 'crowdsourcing' efforts in immunology. Tools for developing applications, new funding opportunities, and an eager public make crowdsourcing a serious option for creative solutions for computationally-challenging problems. Expanded uses of crowdsourcing in immunology will allow for more efficient large-scale data collection and analysis. It will also involve, inspire, educate, and engage the public in a variety of meaningful ways. The benefits are real - it is time to jump in! Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Web-Based Media Contents Editor for UCC Websites

    Science.gov (United States)

    Kim, Seoksoo

    The purpose of this research is to design a web-based media contents editor for establishing UCC (User Created Contents)-based websites. The web-based editor features user-oriented interfaces and increased convenience, significantly different from previous off-line editors. It allows users to edit media contents online and can be effectively used for online promotion activities of enterprises and organizations. In addition to development of the editor, the research aims to support the entry of enterprises and public agencies into the online market by combining the technology with various UCC items.

  10. Resource Allocation and Pricing Principles for a University Computer Centre. Working Paper Series Number 6819.

    Science.gov (United States)

    Possen, Uri M.; And Others

    As an introduction, this paper presents a statement of the objectives of the university computing center (UCC) from the viewpoint of the university, the government, the typical user, and the UCC itself. The operating and financial structure of a UCC are described. Three main types of budgeting schemes are discussed: time allocation, pseudo-dollar,…

  11. Control of peripheral units by satellite computer

    International Nuclear Information System (INIS)

    Tran, K.T.

    1974-01-01

    A computer system was developed for the control of nuclear physics experiments and the use of their results by means of graphics and conversational facilities. The system, built around two computers, an IBM 370/135 and a Telemecanique Electrique T1600, controls the conventional IBM peripherals and also special units made in the laboratory, such as the data-acquisition display and graphics units. The visual display is implemented by a scanning-type television equipped with a light-pen. These units are universal in themselves, but their specifications were established to meet the requirements of nuclear physics experiments. The input-output channels of the two computers were connected together by an interface designed and implemented in the laboratory. This interface allows the exchange of control signals and data (the data are converted from bytes into words and vice versa). The T1600 controls the peripherals mentioned above according to the commands of the IBM 370. The T1600 thus plays the part of a satellite computer, which allows conversation with the main computer and also ensures the control of its special peripheral units [fr]

  12. Unit Cost Compendium Calculations

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...

  13. Metabolism of sialic acid by Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Egan, Muireann; O'Connell Motherway, Mary; Ventura, Marco; van Sinderen, Douwe

    2014-07-01

    Bifidobacteria constitute a specific group of commensal bacteria that inhabit the gastrointestinal tracts of humans and other mammals. Bifidobacterium breve UCC2003 has previously been shown to utilize several plant-derived carbohydrates that include cellodextrins, starch, and galactan. In the present study, we investigated the ability of this strain to utilize the mucin- and human milk oligosaccharide (HMO)-derived carbohydrate sialic acid. Using a combination of transcriptomic and functional genomic approaches, we identified a gene cluster dedicated to the uptake and metabolism of sialic acid. Furthermore, we demonstrate that B. breve UCC2003 can cross feed on sialic acid derived from the metabolism of 3'-sialyllactose, an abundant HMO, by another infant gut bifidobacterial strain, Bifidobacterium bifidum PRL2010. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  14. Metabolism of Sialic Acid by Bifidobacterium breve UCC2003

    Science.gov (United States)

    Egan, Muireann; O'Connell Motherway, Mary; Ventura, Marco

    2014-01-01

    Bifidobacteria constitute a specific group of commensal bacteria that inhabit the gastrointestinal tracts of humans and other mammals. Bifidobacterium breve UCC2003 has previously been shown to utilize several plant-derived carbohydrates that include cellodextrins, starch, and galactan. In the present study, we investigated the ability of this strain to utilize the mucin- and human milk oligosaccharide (HMO)-derived carbohydrate sialic acid. Using a combination of transcriptomic and functional genomic approaches, we identified a gene cluster dedicated to the uptake and metabolism of sialic acid. Furthermore, we demonstrate that B. breve UCC2003 can cross feed on sialic acid derived from the metabolism of 3′-sialyllactose, an abundant HMO, by another infant gut bifidobacterial strain, Bifidobacterium bifidum PRL2010. PMID:24814790

  15. Neutronic Analysis of the RSG-GAS Compact Core without CIP Silicide 3.55 g U/cc and 4.8 g U/cc

    International Nuclear Information System (INIS)

    Jati S; Lily S; Tukiran S

    2004-01-01

    Fuel conversion from U3O8-Al to U3Si2-Al at a density of 2.96 g U/cc in the RSG-GAS core was carried out successfully, step by step, from the 36th core to the 44th core, so that from the 45th core until now (the 48th core) the core has been fully loaded with 2.96 g U/cc silicide fuel. A utilization program for silicide fuel of higher density (3.55 g U/cc and 4.8 g U/cc), together with optimization of RSG-GAS core operation, is under research. Core optimization aimed at a longer operation cycle has been analyzed in terms of a compact core, that is, an RSG-GAS core with a reduced number of IP or CIP irradiation positions. In this research, neutronic calculations covering the RSG-GAS core and the RSG-GAS core without CIP, fuelled with U3Si2-Al at 2.96 g U/cc, 3.55 g U/cc and 4.8 g U/cc, were performed. Both core calculations were done at 15 MW power using the SRAC-ASMBURN code. The calculation results show that fuel conversion from a density of 2.96 g U/cc to 3.55 g U/cc and 4.8 g U/cc will increase the cycle length for both the RSG-GAS core and the RSG-GAS compact core without CIP. However, the excess reactivity exceeds the nominal first-design value of 9.2%. The change in power peaking factor is not significant and remains below 1.4. A core fuelled with U3Si2-Al at 4.8 g U/cc has a maximum discharge burn-up exceeding the licensing value (70%). The RSG-GAS compact core without CIP fuelled with U3Si2-Al at 2.96 g U/cc has a longer operation cycle than the RSG-GAS core and fulfils the neutronic parameter limits of the first design. (author)

  16. Structure of ribose 5-phosphate isomerase from the probiotic bacterium Lactobacillus salivarius UCC118.

    Science.gov (United States)

    Lobley, Carina M C; Aller, Pierre; Douangamath, Alice; Reddivari, Yamini; Bumann, Mario; Bird, Louise E; Nettleship, Joanne E; Brandao-Neto, Jose; Owens, Raymond J; O'Toole, Paul W; Walsh, Martin A

    2012-12-01

    The structure of ribose 5-phosphate isomerase from the probiotic bacterium Lactobacillus salivarius UCC118 has been determined at 1.72 Å resolution. The structure was solved by molecular replacement, which identified the functional homodimer in the asymmetric unit. Despite only showing 57% sequence identity to its closest homologue, the structure adopted the typical α and β D-ribose 5-phosphate isomerase fold. Comparison to other related structures revealed high homology in the active site, allowing a model of the substrate-bound protein to be proposed. The determination of the structure was expedited by the use of in situ crystallization-plate screening on beamline I04-1 at Diamond Light Source to identify well diffracting protein crystals prior to routine cryocrystallography.

  17. Structure of ribose 5-phosphate isomerase from the probiotic bacterium Lactobacillus salivarius UCC118

    International Nuclear Information System (INIS)

    Lobley, Carina M. C.; Aller, Pierre; Douangamath, Alice; Reddivari, Yamini; Bumann, Mario; Bird, Louise E.; Nettleship, Joanne E.; Brandao-Neto, Jose; Owens, Raymond J.; O’Toole, Paul W.; Walsh, Martin A.

    2012-01-01

    The crystal structure of ribose 5-phosphate isomerase has been determined to 1.72 Å resolution and is presented with a brief comparison to other known ribose 5-phosphate isomerase A structures. The structure of ribose 5-phosphate isomerase from the probiotic bacterium Lactobacillus salivarius UCC118 has been determined at 1.72 Å resolution. The structure was solved by molecular replacement, which identified the functional homodimer in the asymmetric unit. Despite only showing 57% sequence identity to its closest homologue, the structure adopted the typical α and β D-ribose 5-phosphate isomerase fold. Comparison to other related structures revealed high homology in the active site, allowing a model of the substrate-bound protein to be proposed. The determination of the structure was expedited by the use of in situ crystallization-plate screening on beamline I04-1 at Diamond Light Source to identify well diffracting protein crystals prior to routine cryocrystallography

  18. Glycosulfatase-Encoding Gene Cluster in Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Egan, Muireann; Jiang, Hao; O'Connell Motherway, Mary; Oscarson, Stefan; van Sinderen, Douwe

    2016-11-15

    Bifidobacteria constitute a specific group of commensal bacteria typically found in the gastrointestinal tract (GIT) of humans and other mammals. Bifidobacterium breve strains are numerically prevalent among the gut microbiota of many healthy breastfed infants. In the present study, we investigated glycosulfatase activity in a bacterial isolate from a nursling stool sample, B. breve UCC2003. Two putative sulfatases were identified on the genome of B. breve UCC2003. The sulfated monosaccharide N-acetylglucosamine-6-sulfate (GlcNAc-6-S) was shown to support the growth of B. breve UCC2003, while N-acetylglucosamine-3-sulfate, N-acetylgalactosamine-3-sulfate, and N-acetylgalactosamine-6-sulfate did not support appreciable growth. By using a combination of transcriptomic and functional genomic approaches, a gene cluster designated ats2 was shown to be specifically required for GlcNAc-6-S metabolism. Transcription of the ats2 cluster is regulated by a repressor open reading frame kinase (ROK) family transcriptional repressor. This study represents the first description of glycosulfatase activity within the Bifidobacterium genus. Bifidobacteria are saccharolytic organisms naturally found in the digestive tract of mammals and insects. Bifidobacterium breve strains utilize a variety of plant- and host-derived carbohydrates that allow them to be present as prominent members of the infant gut microbiota as well as being present in the gastrointestinal tract of adults. In this study, we introduce a previously unexplored area of carbohydrate metabolism in bifidobacteria, namely, the metabolism of sulfated carbohydrates. B. breve UCC2003 was shown to metabolize N-acetylglucosamine-6-sulfate (GlcNAc-6-S) through one of two sulfatase-encoding gene clusters identified on its genome. GlcNAc-6-S can be found in terminal or branched positions of mucin oligosaccharides, the glycoprotein component of the mucous layer that covers the digestive tract. The results of this study provide

  19. 9 CFR 205.211 - Applicability of court decisions under the UCC.

    Science.gov (United States)

    2010-01-01

    ... OF FARM PRODUCTS, Interpretive Opinions, § 205.211 Applicability of court decisions under the UCC. (a) Court decisions under the Uniform Commercial Code (UCC), about the scope of the "farm products...

  20. 7 CFR 1962.14 - Account and security information in UCC cases.

    Science.gov (United States)

    2010-01-01

    ... Security, § 1962.14 Account and security information in UCC cases. Within 2 weeks after receipt of a written... the information, it may be liable for any loss caused the borrower and, in some States, other parties...

  1. Computing with impure numbers - Automatic consistency checking and units conversion using computer algebra

    Science.gov (United States)

    Stoutemyer, D. R.

    1977-01-01

    The computer algebra language MACSYMA enables the programmer to include symbolic physical units in computer calculations, and features automatic detection of dimensionally-inhomogeneous formulas and conversion of inconsistent units in a dimensionally homogeneous formula. Some examples illustrate these features.
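
    MACSYMA itself is not shown in this record; as an illustrative stand-in, the two features the abstract names, detection of dimensionally inhomogeneous formulas and conversion of inconsistent units, can be sketched with the Python pint library (my substitution, not the paper's tool):

        # Sketch of MACSYMA-style unit handling, using the pint library instead.
        import pint

        ureg = pint.UnitRegistry()
        speed = 30 * ureg.meter / ureg.second

        # Automatic conversion within a dimensionally homogeneous formula:
        print(speed.to(ureg.kilometer / ureg.hour))   # 108.0 kilometer / hour

        # Automatic detection of a dimensionally inhomogeneous formula:
        try:
            speed + 5 * ureg.kilogram
        except pint.DimensionalityError as err:
            print("rejected:", err)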

  2. New Generation General Purpose Computer (GPC) compact IBM unit

    Science.gov (United States)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  3. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) examines the efficiency of computer power supply units, which falls off sharply at the partial loads typical of average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  4. Installation of new Generation General Purpose Computer (GPC) compact unit

    Science.gov (United States)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  5. Computer systems for the control of teletherapy units

    International Nuclear Information System (INIS)

    Brace, J.A.

    1985-01-01

    This paper describes a computer-controlled tracking cobalt unit installed at the Royal Free Hospital. It is based on a standard TEM MS90 unit and operates at 90-cm source-axis distance with a geometric field size of 45 x 45 cm at that distance. It has been modified so that it can be used either manually or under computer control. There are nine parameters that can be controlled positionally and two that can be controlled in rate mode; these are presented in a table

  6. Complexity estimates based on integral transforms induced by computational units

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Vol. 33, September (2012), pp. 160-167. ISSN 0893-6080. R&D Projects: GA ČR GAP202/11/1368. Institutional research plan: CEZ:AV0Z10300504. Institutional support: RVO:67985807. Keywords: neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units. Subject RIV: IN - Informatics, Computer Science. Impact factor: 1.927, year: 2012

  7. Discovering novel bile protection systems in Bifidobacterium breve UCC2003 through functional genomics.

    NARCIS (Netherlands)

    Ruiz, L.; Zomer, A.L.; O'Connell-Motherway, M.; Sinderen, D. van; Margolles, A.

    2012-01-01

    Tolerance of gut commensals to bile salt exposure is an important feature for their survival in and colonization of the intestinal environment. A transcriptomic approach was employed to study the response of Bifidobacterium breve UCC2003 to bile, allowing the identification of a number of

  8. Graphics processing unit based computation for NDE applications

    Science.gov (United States)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to reduce the cost of numerical simulation. Breakthroughs in Graphical Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes as applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are for two dimensions. Performance improvement of the GPU implementation against serial CPU implementation is then discussed.
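
    The abstract does not reproduce the kernels; as a serial point of reference for one of the two problems named, a NumPy sketch of an explicit finite-difference (FTCS) step for 2-D heat diffusion follows. Grid size, diffusivity and boundary handling are illustrative assumptions; the GPU version would evaluate the same stencil in parallel, one thread per grid point.

        # Explicit (FTCS) finite-difference scheme for 2-D heat diffusion (CPU sketch).
        import numpy as np

        nx = ny = 256                 # grid points (assumed)
        alpha = 1.0e-4                # thermal diffusivity, m^2/s (assumed)
        dx = 1.0 / nx
        dt = 0.2 * dx * dx / alpha    # within the stability bound dt <= dx^2 / (4*alpha)

        u = np.zeros((ny, nx))
        u[ny // 2, nx // 2] = 1.0     # point heat source at the centre

        for _ in range(1000):
            lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                   - 4.0 * u[1:-1, 1:-1]) / (dx * dx)
            u[1:-1, 1:-1] += alpha * dt * lap   # fixed (zero) boundary values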

  9. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
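
    For orientation, the CPU side of the worked example the authors use, the all-pairs distance between instances in a dataset, can be written compactly in NumPy. This sketch is mine, not the article's code; the GPU version would assign one thread per (i, j) pair.

        # Vectorized all-pairs Euclidean distances on the CPU (sketch).
        # Uses the identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2*x.y
        import numpy as np

        def all_pairs_distances(X: np.ndarray) -> np.ndarray:
            """X has shape (n_instances, n_features); returns an (n, n) matrix."""
            sq = np.sum(X * X, axis=1)                        # squared norms, shape (n,)
            d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # squared distances
            np.maximum(d2, 0.0, out=d2)                       # clamp round-off negatives
            return np.sqrt(d2)

        X = np.random.default_rng(0).normal(size=(1000, 32))
        D = all_pairs_distances(X)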

  10. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    Science.gov (United States)

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…

  11. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  12. Metabolism of a plant derived galactose‐containing polysaccharide by Bifidobacterium breve UCC2003

    Science.gov (United States)

    O'Connell Motherway, Mary; Fitzgerald, Gerald F.; van Sinderen, Douwe

    2011-01-01

    In this study, we describe the functional characterization of the Bifidobacterium breve UCC2003 gal locus, which is dedicated to the utilization of galactan, a plant-derived polysaccharide. Using a combination of molecular approaches we conclude that the galA gene of B. breve UCC2003 encodes a β-1,4-endogalactanase producing galacto-oligosaccharides, which are specifically internalized by an ABC transport system, encoded by galBCDE, and which are then hydrolysed to galactose moieties by a dedicated intracellular β-galactosidase, specified by galG. The generated galactose molecules are presumed to be fed into the fructose-6-phosphate phosphoketolase pathway via the Leloir pathway, thereby allowing B. breve UCC2003 to use galactan as its sole carbon and energy source. In addition to these findings we demonstrate that GalR is a LacI-type DNA-binding protein, which not only appears to control transcription of the galCDEGR operon, but also that of the galA gene. PMID:21375716

  13. Metabolism of a plant derived galactose-containing polysaccharide by Bifidobacterium breve UCC2003.

    Science.gov (United States)

    O'Connell Motherway, Mary; Fitzgerald, Gerald F; van Sinderen, Douwe

    2011-05-01

    In this study, we describe the functional characterization of the Bifidobacterium breve UCC2003 gal locus, which is dedicated to the utilization of galactan, a plant-derived polysaccharide. Using a combination of molecular approaches we conclude that the galA gene of B. breve UCC2003 encodes a β-1,4-endogalactanase producing galacto-oligosaccharides, which are specifically internalized by an ABC transport system, encoded by galBCDE, and which are then hydrolysed to galactose moieties by a dedicated intracellular β-galactosidase, specified by galG. The generated galactose molecules are presumed to be fed into the fructose-6-phosphate phosphoketolase pathway via the Leloir pathway, thereby allowing B. breve UCC2003 to use galactan as its sole carbon and energy source. In addition to these findings we demonstrate that GalR is a LacI-type DNA-binding protein, which not only appears to control transcription of the galCDEGR operon, but also that of the galA gene. © 2010 University College Cork. Journal compilation © 2010 Society for Applied Microbiology and Blackwell Publishing Ltd.

  14. Implicit Theories of Creativity in Computer Science in the United States and China

    Science.gov (United States)

    Tang, Chaoying; Baer, John; Kaufman, James C.

    2015-01-01

    To study implicit concepts of creativity in computer science in the United States and mainland China, we first asked 308 Chinese computer scientists for adjectives that would describe a creative computer scientist. Computer scientists and non-computer scientists from China (N = 1069) and the United States (N = 971) then rated how well those…

  15. To the problem of reliability standardization in computer-aided manufacturing at NPP units

    International Nuclear Information System (INIS)

    Yastrebenetskij, M.A.; Shvyryaev, Yu.V.; Spektor, L.I.; Nikonenko, I.V.

    1989-01-01

    The problems of reliability standardization in computer-aided manufacturing at NPP units are analyzed, considering the following approaches: computer-aided manufacturing of NPP units as a part of an automated technological complex, and computer-aided manufacturing of NPP units as a multi-functional system. The selection of the composition of reliability indices for computer-aided manufacturing of NPP units is substantiated for each of the approaches considered

  16. Sandia's computer support units: The first three years

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.N. [Sandia National Labs., Albuquerque, NM (United States). Labs. Computing Dept.

    1997-11-01

    This paper describes the method by which Sandia National Laboratories has deployed information technology to the line organizations and to the desktop as part of the integrated information services organization under the direction of the Chief Information Officer. This deployment has been done by the Computer Support Unit (CSU) Department. The CSU approach is based on the principle of providing local customer service with a corporate perspective. Success required an approach that was both customer compelled at times and market or corporate focused in most cases. Above all, a complete solution was required that included a comprehensive method of technology choices and development, process development, technology implementation, and support. It is the author's hope that this information will be useful in the development of a customer-focused business strategy for information technology deployment and support. Descriptions of current status reflect the status as of May 1997.

  17. Oklahoma's Mobile Computer Graphics Laboratory.

    Science.gov (United States)

    McClain, Gerald R.

    This Computer Graphics Laboratory houses an IBM 1130 computer, U.C.C. plotter, printer, card reader, two key punch machines, and seminar-type classroom furniture. A "General Drafting Graphics System" (GDGS) is used, based on repetitive use of basic coordinate and plot generating commands. The system is used by 12 institutions of higher education…

  18. Characterization of Endogenous Plasmids from Lactobacillus salivarius UCC118

    OpenAIRE

    Fang, Fang; Flynn, Sarah; Li, Yin; Claesson, Marcus J.; van Pijkeren, Jan-Peter; Collins, J. Kevin; van Sinderen, Douwe; O'Toole, Paul W.

    2008-01-01

    The genome of Lactobacillus salivarius UCC118 comprises a 1.83-Mb chromosome, a 242-kb megaplasmid (pMP118), and two smaller plasmids of 20 kb (pSF118-20) and 44 kb (pSF118-44). Annotation and bioinformatic analyses suggest that both of the smaller plasmids replicate by a theta replication mechanism. Furthermore, it appears that they are transmissible, although neither possesses a complete set of conjugation genes. Plasmid pSF118-20 encodes a toxin-antitoxin system composed of pemI and pemK h...

  19. Computer aided design of fast neutron therapy units

    International Nuclear Information System (INIS)

    Gileadi, A.E.; Gomberg, H.J.; Lampe, I.

    1980-01-01

    Conceptual design of a radiation-therapy unit using fusion neutrons is presently being considered by KMS Fusion, Inc. As part of this effort, a powerful and versatile computer code, TBEAM, has been developed which enables the user to determine physical characteristics of the fast neutron beam generated in the facility under consideration, using certain given design parameters of the facility as inputs. TBEAM uses the method of statistical sampling (Monte Carlo) to solve the space, time and energy dependent neutron transport equation relating to the conceptual design described by the user-supplied input parameters. The code traces the individual source neutrons as they propagate throughout the shield-collimator structure of the unit, and it keeps track of each interaction by type, position and energy. In its present version, TBEAM is applicable to homogeneous and laminated shields of spherical geometry, to collimator apertures of conical shape, and to neutrons emitted by point sources or such plate sources as are used in neutron generators of various types. TBEAM-generated results comparing the performance of point or plate sources in otherwise identical shield-collimator configurations are presented in numerical form. (H.K.)

  20. Characterization of Endogenous Plasmids from Lactobacillus salivarius UCC118

    Science.gov (United States)

    Fang, Fang; Flynn, Sarah; Li, Yin; Claesson, Marcus J.; van Pijkeren, Jan-Peter; Collins, J. Kevin; van Sinderen, Douwe; O'Toole, Paul W.

    2008-01-01

    The genome of Lactobacillus salivarius UCC118 comprises a 1.83-Mb chromosome, a 242-kb megaplasmid (pMP118), and two smaller plasmids of 20 kb (pSF118-20) and 44 kb (pSF118-44). Annotation and bioinformatic analyses suggest that both of the smaller plasmids replicate by a theta replication mechanism. Furthermore, it appears that they are transmissible, although neither possesses a complete set of conjugation genes. Plasmid pSF118-20 encodes a toxin-antitoxin system composed of pemI and pemK homologs, and this plasmid could be cured when PemI was produced in trans. The minimal replicon of pSF118-20 was determined by deletion analysis. Shuttle vector derivatives of pSF118-20 were generated that included the replication region (pLS203) and the replication region plus mobilization genes (pLS208). The plasmid pLS203 was stably maintained without selection in Lactobacillus plantarum, Lactobacillus fermentum, and the pSF118-20-cured derivative strain of L. salivarius UCC118 (strain LS201). Cloning in pLS203 of genes encoding luciferase and green fluorescent protein, and expression from a constitutive L. salivarius promoter, demonstrated the utility of this vector for the expression of heterologous genes in Lactobacillus. This study thus expands the knowledge base and vector repertoire of probiotic lactobacilli. PMID:18390685

  1. A GntR-type transcriptional repressor controls sialic acid utilization in Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Egan, Muireann; O'Connell Motherway, Mary; van Sinderen, Douwe

    2015-02-01

    Bifidobacterium breve strains are numerically prevalent among the gut microbiota of healthy, breast-fed infants. The metabolism of sialic acid, a ubiquitous monosaccharide in the infant and adult gut, by B. breve UCC2003 is dependent on a large gene cluster, designated the nan/nag cluster. This study describes the transcriptional regulation of the nan/nag cluster and thus sialic acid metabolism in B. breve UCC2003. Insertion mutagenesis and transcriptome analysis revealed that the nan/nag cluster is regulated by a GntR family transcriptional repressor, designated NanR. Crude cell extract of Escherichia coli EC101 in which the nanR gene had been cloned and overexpressed was shown to bind to two promoter regions within this cluster, each of which contains an imperfect inverted repeat that is believed to act as the NanR operator sequence. Formation of the DNA-NanR complex is prevented in the presence of sialic acid, which we had previously shown to induce transcription of this gene cluster. © FEMS 2014. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. A two-component regulatory system controls autoregulated serpin expression in Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Alvarez-Martin, Pablo; O'Connell Motherway, Mary; Turroni, Francesca; Foroni, Elena; Ventura, Marco; van Sinderen, Douwe

    2012-10-01

    This work reports on the identification and molecular characterization of a two-component regulatory system (2CRS), encoded by serRK, which is believed to control the expression of the ser(2003) locus in Bifidobacterium breve UCC2003. The ser(2003) locus consists of two genes, Bbr_1319 (sagA) and Bbr_1320 (serU), which are predicted to encode a hypothetical membrane-associated protein and a serpin-like protein, respectively. The response regulator SerR was shown to bind to the promoter region of ser(2003), and the probable recognition sequence of SerR was determined by a combinatorial approach of in vitro site-directed mutagenesis coupled to transcriptional fusion and electrophoretic mobility shift assays (EMSAs). The importance of the serRK 2CRS in the response of B. breve to protease-mediated induction was confirmed by generating a B. breve serR insertion mutant, which was shown to exhibit altered ser(2003) transcriptional induction patterns compared to the parent strain, UCC2003. Interestingly, the analysis of a B. breve serU mutant revealed that the SerRK signaling pathway appears to include a SerU-dependent autoregulatory loop.

  3. A Conserved Two-Component Signal Transduction System Controls the Response to Phosphate Starvation in Bifidobacterium breve UCC2003.

    NARCIS (Netherlands)

    Alvarez-Martin, P.; Fernandez, M.; O'Connell-Motherway, M.; O'Connell, K.J.; Sauvageot, N.; Fitzgerald, G.F.; Macsharry, J.; Zomer, A.L.; Sinderen, D. van

    2012-01-01

    This work reports on the identification and molecular characterization of the two-component regulatory system (2CRS) PhoRP, which controls the response to inorganic phosphate (P(i)) starvation in Bifidobacterium breve UCC2003. The response regulator PhoP was shown to bind to the promoter region of

  4. The Computer Backgrounds of Soldiers in Army Units: FY01

    National Research Council Canada - National Science Library

    Singh, Harnam

    2002-01-01

    A multi-year research effort was instituted in FY99 to examine soldiers' experiences with computers, self- perceptions of their computer skill, and their ability to identify frequently used, Windows-based icons...

  5. Hand held control unit for controlling a display screen-oriented computer game, and a display screen-oriented computer game having one or more such control units

    NARCIS (Netherlands)

    2001-01-01

    A hand-held control unit is used to control a display screen-oriented computer game. The unit comprises a housing with a front side, a set of control members lying generally flush with the front side for controlling, through actuation thereof, actions of in-game display items, and an output for

  6. Discovering Novel Bile Protection Systems in Bifidobacterium breve UCC2003 through Functional Genomics

    Science.gov (United States)

    Ruiz, Lorena; Zomer, Aldert; O'Connell-Motherway, Mary; van Sinderen, Douwe

    2012-01-01

    Tolerance of gut commensals to bile salt exposure is an important feature for their survival in and colonization of the intestinal environment. A transcriptomic approach was employed to study the response of Bifidobacterium breve UCC2003 to bile, allowing the identification of a number of bile-induced genes with a range of predicted functions. The potential roles of a selection of these bile-inducible genes in bile protection were analyzed following heterologous expression in Lactococcus lactis. Genes encoding three transport systems belonging to the major facilitator superfamily (MFS), Bbr_0838, Bbr_0832, and Bbr_1756, and three ABC-type transporters, Bbr_0406-0407, Bbr_1804-1805, and Bbr_1826-1827, were thus investigated and shown to provide enhanced resistance and survival to bile exposure. This work significantly improves our understanding as to how bifidobacteria respond to and survive bile exposure. PMID:22156415

  7. Comparison of Tissue Density in Hounsfield Units in Computed Tomography and Cone Beam Computed Tomography.

    Science.gov (United States)

    Varshowsaz, Masoud; Goorang, Sepideh; Ehsani, Sara; Azizi, Zeynab; Rahimian, Sepideh

    2016-03-01

    Bone quality and quantity assessment is one of the most important steps in implant treatment planning. Different methods such as computed tomography (CT) and, more recently, cone beam computed tomography (CBCT), which requires a lower radiation dose and less time and cost, are used for bone density assessment. This in vitro study aimed to compare tissue density values in Hounsfield units (HUs) between CBCT and CT scans of different tissue phantoms, at two different thicknesses, two different image acquisition settings, and three locations within the phantoms. Four tissue phantoms, namely hard tissue, soft tissue, air, and water, were scanned by three different CBCT systems and a CT system at two thicknesses (full and half) and two image acquisition settings (high and low kVp and mA). The images were analyzed at three sites (middle, periphery, and intermediate) using eFilm software. Differences in density values were analyzed by ANOVA and the correlation coefficient test (P<0.05). There was a significant difference between density values in CBCT and CT scans in most situations, and CBCT values were not similar to CT values for any of the phantoms across the different thicknesses, acquisition parameters, or the three sites. The correlation coefficients confirmed the results. CBCT is not reliable for tissue density assessment. The results were not affected by changes in thickness, acquisition parameters, or location.

  8. Noncontrast computed tomographic Hounsfield unit evaluation of cerebral venous thrombosis: a quantitative evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Besachio, David A. [University of Utah, Department of Radiology, Salt Lake City (United States); United States Navy, Bethesda, MD (United States); Quigley, Edward P.; Shah, Lubdha M.; Salzman, Karen L. [University of Utah, Department of Radiology, Salt Lake City (United States)

    2013-08-15

    Our objective is to determine the utility of noncontrast Hounsfield unit values, Hounsfield unit values corrected for the patient's hematocrit, and venoarterial Hounsfield unit difference measurements in the identification of intracranial venous thrombosis on noncontrast head computed tomography. We retrospectively reviewed noncontrast head computed tomography exams performed in both normal patients and those with cerebral venous thrombosis, acquiring Hounsfield unit values in normal and thrombosed cerebral venous structures. Also, we acquired Hounsfield unit values in the internal carotid artery for comparison to thrombosed and nonthrombosed venous structures and compared the venous Hounsfield unit values to the patient's hematocrit. A significant difference is identified between Hounsfield unit values in thrombosed and nonthrombosed venous structures. Applying Hounsfield unit threshold values of greater than 65, a Hounsfield unit to hematocrit ratio of greater than 1.7, and venoarterial difference values greater than 15 alone and in combination, the majority of cases of venous thrombosis are identifiable on noncontrast head computed tomography. Absolute Hounsfield unit values, Hounsfield unit to hematocrit ratios, and venoarterial Hounsfield unit value differences are a useful adjunct in noncontrast head computed tomographic evaluation of cerebral venous thrombosis. (orig.)
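
    The three cutoffs reported in the abstract (HU greater than 65, HU-to-hematocrit ratio greater than 1.7, venoarterial difference greater than 15) translate directly into a screening rule. The helper below is an illustration of those published thresholds only, not a clinical tool; the "any criterion met" combination is my simplification of the authors' "alone and in combination" analysis.

        # Illustration of the published noncontrast-CT cutoffs; not a clinical tool.
        def suggests_venous_thrombosis(venous_hu: float,
                                       hematocrit: float,
                                       arterial_hu: float) -> bool:
            """True if any of the three reported criteria is met.

            venous_hu   -- Hounsfield units in the venous structure
            hematocrit  -- patient hematocrit, e.g. 40.0 (percent)
            arterial_hu -- Hounsfield units in the internal carotid artery
            """
            absolute = venous_hu > 65                     # absolute HU threshold
            ratio = venous_hu / hematocrit > 1.7          # HU-to-hematocrit ratio
            venoarterial = venous_hu - arterial_hu > 15   # venoarterial difference
            return absolute or ratio or venoarterial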

  9. Noncontrast computed tomographic Hounsfield unit evaluation of cerebral venous thrombosis: a quantitative evaluation

    International Nuclear Information System (INIS)

    Besachio, David A.; Quigley, Edward P.; Shah, Lubdha M.; Salzman, Karen L.

    2013-01-01

    Our objective is to determine the utility of noncontrast Hounsfield unit values, Hounsfield unit values corrected for the patient's hematocrit, and venoarterial Hounsfield unit difference measurements in the identification of intracranial venous thrombosis on noncontrast head computed tomography. We retrospectively reviewed noncontrast head computed tomography exams performed in both normal patients and those with cerebral venous thrombosis, acquiring Hounsfield unit values in normal and thrombosed cerebral venous structures. Also, we acquired Hounsfield unit values in the internal carotid artery for comparison to thrombosed and nonthrombosed venous structures and compared the venous Hounsfield unit values to the patient's hematocrit. A significant difference is identified between Hounsfield unit values in thrombosed and nonthrombosed venous structures. Applying Hounsfield unit threshold values of greater than 65, a Hounsfield unit to hematocrit ratio of greater than 1.7, and venoarterial difference values greater than 15 alone and in combination, the majority of cases of venous thrombosis are identifiable on noncontrast head computed tomography. Absolute Hounsfield unit values, Hounsfield unit to hematocrit ratios, and venoarterial Hounsfield unit value differences are a useful adjunct in noncontrast head computed tomographic evaluation of cerebral venous thrombosis. (orig.)

  10. Computer Backgrounds of Soldiers in Army Units: FY00

    National Research Council Canada - National Science Library

    Fober, Gene

    2001-01-01

    .... Soldiers from four Army installations were given a survey that examined their experiences with computers, self-perceptions of their skill, and an objective test of their ability to identify Windows-based icons...

  11. A 1.5 GFLOPS Reciprocal Unit for Computer Graphics

    DEFF Research Database (Denmark)

    Nannarelli, Alberto; Rasmussen, Morten Sleth; Stuart, Matthias Bo

    2006-01-01

    The reciprocal operation 1/d is a frequent operation performed in graphics processors (GPUs). In this work, we present the design of a radix-16 reciprocal unit based on the algorithm combining the traditional digit-by-digit algorithm and the approximation of the reciprocal by one Newton-Raphson i...
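
    Although the abstract is cut off, the refinement step it names is the standard Newton-Raphson recurrence for the reciprocal, x_{k+1} = x_k * (2 - d * x_k), which roughly doubles the number of correct bits per iteration. A Python sketch of the idea (an illustrative software model, not the radix-16 hardware design):

        # Newton-Raphson reciprocal refinement: x <- x * (2 - d * x).
        def reciprocal(d: float, iterations: int = 3) -> float:
            """Approximate 1/d for a normalized significand d in [1, 2)."""
            assert 1.0 <= d < 2.0
            x = 1.0 / (0.25 * round(4 * d))   # crude seed; in hardware a table or
                                              # digit-by-digit stage supplies this
            for _ in range(iterations):
                x = x * (2.0 - d * x)         # relative error is squared each step
            return x

        print(reciprocal(1.5))   # ~0.6666666666666666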

  12. A conserved two-component signal transduction system controls the response to phosphate starvation in Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Alvarez-Martin, Pablo; Fernández, Matilde; O'Connell-Motherway, Mary; O'Connell, Kerry Joan; Sauvageot, Nicolas; Fitzgerald, Gerald F; MacSharry, John; Zomer, Aldert; van Sinderen, Douwe

    2012-08-01

    This work reports on the identification and molecular characterization of the two-component regulatory system (2CRS) PhoRP, which controls the response to inorganic phosphate (P(i)) starvation in Bifidobacterium breve UCC2003. The response regulator PhoP was shown to bind to the promoter region of pstSCAB, specifying a predicted P(i) transporter system, as well as that of phoU, which encodes a putative P(i)-responsive regulatory protein. This interaction is assumed to cause transcriptional modulation under conditions of P(i) limitation. Our data suggest that the phoRP genes are subject to positive autoregulation and, together with pstSCAB and presumably phoU, represent the complete regulon controlled by the phoRP-encoded 2CRS in B. breve UCC2003. Determination of the minimal PhoP binding region combined with bioinformatic analysis revealed the probable recognition sequence of PhoP, designated here as the PHO box, which together with phoRP is conserved among many high-GC-content Gram-positive bacteria. The importance of the phoRP 2CRS in the response of B. breve to P(i) starvation conditions was confirmed by analysis of a B. breve phoP insertion mutant which exhibited decreased growth under phosphate-limiting conditions compared to its parent strain UCC2003.

  13. Global transcriptional landscape and promoter mapping of the gut commensal Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Bottacini, Francesca; Zomer, Aldert; Milani, Christian; Ferrario, Chiara; Lugli, Gabriele Andrea; Egan, Muireann; Ventura, Marco; van Sinderen, Douwe

    2017-12-28

    Bifidobacterium breve represents a common member of the infant gut microbiota and its presence in the gut has been associated with host well being. For this reason it is relevant to investigate and understand the molecular mechanisms underlying the establishment, persistence and activities of this gut commensal in the host environment. The assessment of vegetative promoters in the bifidobacterial prototype Bifidobacterium breve UCC2003 was performed employing a combination of RNA tiling array analysis and cDNA sequencing. Canonical -10 (TATAAT) and -35 (TTGACA) sequences were identified upstream of transcribed genes or operons, where deviations from this consensus correspond to transcription level variations. A Random Forest analysis assigned the -10 region of B. breve promoters as the element most impacting on the level of transcription, followed by the spacer length and the 5'-UTR length of transcripts. Furthermore, our transcriptome study also identified rho-independent termination as the most common and effective termination signal of highly and moderately transcribed operons in B. breve. The present study allowed us to identify genes and operons that are actively transcribed in this organism during logarithmic growth, and link promoter elements with levels of transcription of essential genes in this organism. As homologs of many of our identified genes are present across the whole genus Bifidobacterium, our dataset constitutes a transcriptomic reference to be used for future investigations of gene expression in members of this genus.
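
    To make the promoter elements concrete, a toy scan for the canonical -35 (TTGACA) and -10 (TATAAT) hexamers might look like the sketch below. It is a didactic illustration only: it demands exact consensus matches, whereas real promoters tolerate deviations (which the paper links to transcription-level variation), and the 15-19 nt spacer window is my assumption, not a value from the study.

        # Toy scan for exact -35 (TTGACA) / -10 (TATAAT) consensus pairs.
        import re

        def find_putative_promoters(seq: str):
            """Return (position, spacer_length) for each exact consensus pair."""
            return [(m.start(), len(m.group(1)))
                    for m in re.finditer(r"TTGACA(.{15,19})TATAAT", seq.upper())]

        demo = "ACGT" + "TTGACA" + "A" * 17 + "TATAAT" + "GGCT"
        print(find_putative_promoters(demo))   # [(4, 17)]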

  14. Status of computational fluid dynamics in the United States

    International Nuclear Information System (INIS)

    Kutler, P.; Steger, J.L.; Bailey, F.R.

    1987-01-01

    CFD-related progress in U.S. aerospace industries and research institutions is evaluated with respect to methods employed, their applications, and the computer technologies employed in their implementation. Goals for subsonic CFD are primarily aimed at greater fuel efficiency; those of supersonic CFD involve the achievement of high sustained cruise efficiency. Transatmospheric/hypersonic vehicles are noted to have recently become important concerns for CFD efforts. Attention is given to aspects of discretization, Euler and Navier-Stokes general purpose codes, zonal equation methods, internal and external flows, and the impact of supercomputers and their networks in advancing the state-of-the-art. 91 references

  15. Computer Use and Vision-Related Problems Among University Students In Ajman, United Arab Emirate

    OpenAIRE

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-01-01

    Background: The extensive use of computers as medium of teaching and learning in universities necessitates introspection into the extent of computer related health disorders among student population. Aim: This study was undertaken to assess the pattern of computer usage and related visual problems, among University students in Ajman, United Arab Emirates. Materials and Methods: A total of 500 Students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology we...

  16. Units for on-line control with the ES computer in physical investigations

    International Nuclear Information System (INIS)

    Efimov, L.G.

    1983-01-01

    The peripheral part of a complex of facilities created to run an ES computer on-line with experimental devices is described; it comprises two units. The first unit is employed as part of a universal driver of the CAMAC branch for connection to the microprogram ES computer channel controller and provides multioperational (up to 44 record varieties) software service of the devices. Bilateral data exchange between a device and the computer can be performed by bytes as well as by 16- or 24-digit words, using CAMAC group modes, at a maximum rate of 1.25 Mbyte/s. The second unit is meant for synchronizing the data acquisition process with the device starting system and for enabling a dialogue between the device operator and the computer

  17. Software for a magnetic disk drive unit connected with a computer TPA-1001-i

    International Nuclear Information System (INIS)

    Elizarov, O.I.; Mateeva, A.; Salamatin, I.M.

    1977-01-01

    A disk drive unit with a capacity of 1250K and a minimal addressable unit of memory of 1 sector (128 12-bit words) is connected to a TPA-1001-i computer. The operation regimes of the controller, and the functions and formats of the commands used, are described, as well as the software. Data transfer between the computer and the magnetic disk drive unit is realized by means of programs relocatable in binary form. These are inserted in a standard program library with a modular structure. The manner of control handling and data transfer between programs stored in the library on the magnetic disk drive is described. The resident program (100 (octal) words) inserted in the monitor takes into account the special features of the disk drive unit being used. The algorithms of the correction programs for the disk drive unit, of the program for rewriting the library from paper tape to the disk drive unit, and of the program for writing and reading the monitor are described

  18. The Accident Analysis Due to Reactivity Insertion of RSG GAS 3.55 g U/cc Silicide Core

    International Nuclear Information System (INIS)

    Endiah Puji-Hastuti; Surbakti, Tukiran

    2004-01-01

    The fuel of the RSG-GAS reactor was changed from uranium oxide with 250 g U loading (2.96 g U/cc fuel density) to uranium silicide with the same loading. Silicide fuel can be used at higher density, stays longer in the reactor core and hence gives a longer cycle length. The silicide fuel was introduced into the RSG-GAS core step-wise, using mixed cores: first silicide fuel with 250 g U loading, then silicide fuel with 300 g U loading (3.55 g U/cc fuel density). At every step of fuel loading the safety margin must be analyzed. Here, the reactivity accidents of the RSG-GAS core with 300 g U silicide fuel loading are analyzed. The calculations were done with the EUREKA-2/RR code available at P2TRR, for reactivity insertion at start-up and in the power range. The worst-case accident is the transient due to control rod withdrawal failure at start-up from the lowest initial power (0.1 W), as well as in the power range. For all cases analyzed, the results show no anomaly and no violation of the safety margins of the RSG-GAS core with 300 g U silicide fuel loading. (author)

  19. Transposon mutagenesis in Bifidobacterium breve: construction and characterization of a Tn5 transposon mutant library for Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Ruiz, Lorena; Motherway, Mary O'Connell; Lanigan, Noreen; van Sinderen, Douwe

    2013-01-01

    Bifidobacteria are claimed to contribute positively to human health through a range of beneficial or probiotic activities, including amelioration of gastrointestinal and metabolic disorders, and therefore this particular group of gastrointestinal commensals has enjoyed increasing industrial and scientific attention in recent years. However, the molecular mechanisms underlying these probiotic activities are still largely unknown, mainly because molecular tools for bifidobacteria are rather poorly developed, with many strains lacking genetic accessibility. In this work, we describe the generation of transposon insertion mutants in two bifidobacterial strains, B. breve UCC2003 and B. breve NCFB2258. We also report the creation of the first transposon mutant library in a bifidobacterial strain, employing B. breve UCC2003 and a Tn5-based transposome strategy. The library was found to be composed of clones containing single transposon insertions which appear to be randomly distributed along the genome. The usefulness of the library for phenotypic screening was confirmed through identification and analysis of mutants defective in D-galactose, D-lactose or pullulan utilization abilities.

  20. Transposon mutagenesis in Bifidobacterium breve: construction and characterization of a Tn5 transposon mutant library for Bifidobacterium breve UCC2003.

    Directory of Open Access Journals (Sweden)

    Lorena Ruiz

    Bifidobacteria are claimed to contribute positively to human health through a range of beneficial or probiotic activities, including amelioration of gastrointestinal and metabolic disorders, and therefore this particular group of gastrointestinal commensals has enjoyed increasing industrial and scientific attention in recent years. However, the molecular mechanisms underlying these probiotic activities are still largely unknown, mainly because molecular tools for bifidobacteria are rather poorly developed, with many strains lacking genetic accessibility. In this work, we describe the generation of transposon insertion mutants in two bifidobacterial strains, B. breve UCC2003 and B. breve NCFB2258. We also report the creation of the first transposon mutant library in a bifidobacterial strain, employing B. breve UCC2003 and a Tn5-based transposome strategy. The library was found to be composed of clones containing single transposon insertions which appear to be randomly distributed along the genome. The usefulness of the library for phenotypic screening was confirmed through identification and analysis of mutants defective in D-galactose, D-lactose or pullulan utilization abilities.

  1. Transcriptional and functional characterization of genetic elements involved in galacto-oligosaccharide utilization by Bifidobacterium breve UCC2003.

    Science.gov (United States)

    O'Connell Motherway, Mary; Kinsella, Michael; Fitzgerald, Gerald F; van Sinderen, Douwe

    2013-01-01

    Several prebiotics, such as inulin, fructo-oligosaccharides and galacto-oligosaccharides, are widely used commercially in foods and there is convincing evidence, in particular for galacto-oligosaccharides, that prebiotics can modulate the microbiota and promote bifidobacterial growth in the intestinal tract of infants and adults. In this study we describe the identification and functional characterization of the genetic loci responsible for the transport and metabolism of purified galacto-oligosaccharides (PGOS) by Bifidobacterium breve UCC2003. We further demonstrate that an extracellular endogalactanase specified by several B. breve strains, including B. breve UCC2003, is essential for partial degradation of PGOS components with a high degree of polymerization. These partially hydrolysed PGOS components are presumed to be transported into the bifidobacterial cell via various ABC transport systems and sugar permeases where they are further degraded to galactose and glucose monomers that feed into the bifid shunt. This work significantly advances our molecular understanding of bifidobacterial PGOS metabolism and its associated genetic machinery to utilize this prebiotic. © 2012 The Authors. Published by Society for Applied Microbiology and Blackwell Publishing Ltd. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  2. Autoinducer-2 plays a crucial role in gut colonization and probiotic functionality of Bifidobacterium breve UCC2003.

    Science.gov (United States)

    Christiaen, Steven E A; O'Connell Motherway, Mary; Bottacini, Francesca; Lanigan, Noreen; Casey, Pat G; Huys, Geert; Nelis, Hans J; van Sinderen, Douwe; Coenye, Tom

    2014-01-01

    In the present study we show that luxS of Bifidobacterium breve UCC2003 is involved in the production of the interspecies signaling molecule autoinducer-2 (AI-2), and that this gene is essential for gastrointestinal colonization of a murine host, while it is also involved in providing protection against Salmonella infection in Caenorhabditis elegans. We demonstrate that a B. breve luxS-insertion mutant is significantly more susceptible to iron chelators than the WT strain and that this sensitivity can be partially reverted in the presence of the AI-2 precursor DPD. Furthermore, we show that several genes of an iron starvation-induced gene cluster, which are downregulated in the luxS-insertion mutant and which encodes a presumed iron-uptake system, are transcriptionally upregulated under in vivo conditions. Mutation of two genes of this cluster in B. breve UCC2003 renders the derived mutant strains sensitive to iron chelators while deficient in their ability to confer gut pathogen protection to Salmonella-infected nematodes. Since a functional luxS gene is present in all tested members of the genus Bifidobacterium, we conclude that bifidobacteria operate a LuxS-mediated system for gut colonization and pathogen protection that is correlated with iron acquisition.

  3. Autoinducer-2 plays a crucial role in gut colonization and probiotic functionality of Bifidobacterium breve UCC2003.

    Directory of Open Access Journals (Sweden)

    Steven E A Christiaen

    In the present study we show that luxS of Bifidobacterium breve UCC2003 is involved in the production of the interspecies signaling molecule autoinducer-2 (AI-2), and that this gene is essential for gastrointestinal colonization of a murine host, while it is also involved in providing protection against Salmonella infection in Caenorhabditis elegans. We demonstrate that a B. breve luxS-insertion mutant is significantly more susceptible to iron chelators than the WT strain and that this sensitivity can be partially reverted in the presence of the AI-2 precursor DPD. Furthermore, we show that several genes of an iron starvation-induced gene cluster, which are downregulated in the luxS-insertion mutant and which encodes a presumed iron-uptake system, are transcriptionally upregulated under in vivo conditions. Mutation of two genes of this cluster in B. breve UCC2003 renders the derived mutant strains sensitive to iron chelators while deficient in their ability to confer gut pathogen protection to Salmonella-infected nematodes. Since a functional luxS gene is present in all tested members of the genus Bifidobacterium, we conclude that bifidobacteria operate a LuxS-mediated system for gut colonization and pathogen protection that is correlated with iron acquisition.

  4. Transcriptional and functional characterization of genetic elements involved in galacto-oligosaccharide utilization by Bifidobacterium breve UCC2003

    Science.gov (United States)

    O'Connell Motherway, Mary; Kinsella, Michael; Fitzgerald, Gerald F; van Sinderen, Douwe

    2013-01-01

    Several prebiotics, such as inulin, fructo-oligosaccharides and galacto-oligosaccharides, are widely used commercially in foods and there is convincing evidence, in particular for galacto-oligosaccharides, that prebiotics can modulate the microbiota and promote bifidobacterial growth in the intestinal tract of infants and adults. In this study we describe the identification and functional characterization of the genetic loci responsible for the transport and metabolism of purified galacto-oligosaccharides (PGOS) by Bifidobacterium breve UCC2003. We further demonstrate that an extracellular endogalactanase specified by several B. breve strains, including B. breve UCC2003, is essential for partial degradation of PGOS components with a high degree of polymerization. These partially hydrolysed PGOS components are presumed to be transported into the bifidobacterial cell via various ABC transport systems and sugar permeases where they are further degraded to galactose and glucose monomers that feed into the bifid shunt. This work significantly advances our molecular understanding of bifidobacterial PGOS metabolism and its associated genetic machinery to utilize this prebiotic. PMID:23199239

  5. An Alternative Method for Computing Unit Costs and Productivity Ratios. AIR 1984 Annual Forum Paper.

    Science.gov (United States)

    Winstead, Wayland H.; And Others

    An alternative measure for evaluating the performance of academic departments was studied. A comparison was made with the traditional manner of computing unit costs and productivity ratios: prorating the salary and effort of each faculty member to each course level based on that person's mix of courses taught. The alternative method used averaging…
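    A minimal Python sketch (not from the paper; all salaries and teaching mixes are invented) of the traditional proration described above: each faculty member's salary is allocated to course levels in proportion to that member's own mix of courses taught, then summed into per-level unit costs.

    ```python
    # Traditional proration: allocate each salary across course levels
    # according to that faculty member's personal teaching mix.
    faculty = [
        {"salary": 60000, "mix": {"lower": 0.50, "upper": 0.25, "grad": 0.25}},
        {"salary": 80000, "mix": {"lower": 0.20, "upper": 0.40, "grad": 0.40}},
    ]

    level_cost = {"lower": 0.0, "upper": 0.0, "grad": 0.0}
    for member in faculty:
        for level, share in member["mix"].items():
            level_cost[level] += member["salary"] * share

    print(level_cost)  # salary dollars prorated to each course level
    ```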

  6. Whole Language, Computers and CD-ROM Technology: A Kindergarten Unit on "Benjamin Bunny."

    Science.gov (United States)

    Balajthy, Ernest

    A kindergarten teacher, two preservice teachers, and a college consultant on educational computer technology designed and developed a 10-day whole-language integrated unit on the theme of Beatrix Potter's "Benjamin Bunny." The project was designed as a demonstration of the potential of integrating the CD-ROM-based version of…

  7. Using Videos and 3D Animations for Conceptual Learning in Basic Computer Units

    Science.gov (United States)

    Cakiroglu, Unal; Yilmaz, Huseyin

    2017-01-01

    This article draws on a one-semester study to investigate the effect of videos and 3D animations on students' conceptual understanding of basic computer units. A quasi-experimental design was carried out in two classrooms; videos and 3D animations were used in classroom activities in one group and those were used for homework in the other…

  8. Students' Beliefs about Mobile Devices vs. Desktop Computers in South Korea and the United States

    Science.gov (United States)

    Sung, Eunmo; Mayer, Richard E.

    2012-01-01

    College students in the United States and in South Korea completed a 28-item multidimensional scaling (MDS) questionnaire in which they rated the similarity of 28 pairs of multimedia learning materials on a 10-point scale (e.g., narrated animation on a mobile device vs. movie clip on a desktop computer) and a 56-item semantic differential…

  9. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    Science.gov (United States)

    Tan, Jianbin

    2018-02-01

    For the engineering design of large-scale grid-connected photovoltaic power stations, and for the many simulation and analysis systems under development, it is necessary to draw the operating characteristic curves of photovoltaic array units by computer; a segmented non-linear interpolation algorithm is proposed for this purpose. Taking component performance parameters as the main design basis, five PV module performance parameters can be obtained. Combined with the series and parallel connection of the PV array, computer drawing of the performance curve of the PV array unit can then be realized. The resulting data can also be fed into PV development software, improving the practical operation of the PV array unit.
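    As a rough illustration of the approach described (this is not the paper's algorithm; the anchor points and array layout below are invented), the sketch reconstructs a module I-V curve from a few datasheet points by piecewise non-linear (monotone cubic) interpolation and scales it for a series/parallel array:

    ```python
    # Sketch: draw a PV array operating characteristic by segmented
    # non-linear interpolation of a module I-V curve.
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # (voltage, current) anchor points for one module: short-circuit,
    # knee region, maximum power point, open-circuit (invented values).
    v_pts = np.array([0.0, 25.0, 30.5, 34.0, 37.5])
    i_pts = np.array([8.9, 8.7, 8.3, 5.0, 0.0])
    iv_module = PchipInterpolator(v_pts, i_pts)  # shape-preserving segments

    def array_curve(n_series, n_parallel, n_samples=200):
        """Sample (V, I) points for an n_series x n_parallel array."""
        v = np.linspace(0.0, v_pts[-1] * n_series, n_samples)
        i = n_parallel * iv_module(v / n_series)
        return v, i

    v, i = array_curve(n_series=20, n_parallel=4)
    print(f"array Voc ≈ {v[-1]:.0f} V, Isc ≈ {i[0]:.1f} A")
    ```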

  10. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  11. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000, which had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  12. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000, which had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
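    The core idea — handing the time-consuming matrix-matrix product to a vendor-optimized BLAS — can be sketched in a few lines. This is only an illustration (NumPy dispatching to whatever BLAS it is built against, at a reduced matrix size), not SAMMY code; on a GPU the same call pattern goes through cuBLAS, which is the substitution the abstract describes.

    ```python
    # Time a large dense double-precision product (DGEMM) dispatched
    # to the optimized BLAS that this NumPy build links against.
    import time
    import numpy as np

    m, k, n = 4000, 5000, 4000  # scaled-down stand-in for 16,000 x 20,000
    a = np.random.rand(m, k)
    b = np.random.rand(k, n)

    t0 = time.perf_counter()
    c = a @ b                   # the DGEMM call
    elapsed = time.perf_counter() - t0
    gflops = 2.0 * m * k * n / elapsed / 1e9
    print(f"DGEMM took {elapsed:.2f} s ({gflops:.1f} GFLOP/s)")
    ```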

  13. Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography

    International Nuclear Information System (INIS)

    Gondo, Gakuji; Ishiwata, Yusuke; Yamashita, Toshinori; Iida, Takashi; Moro, Yutaka

    1989-01-01

    Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography (CR) are discussed. Computed radiography is a digital radiography system in which an imaging plate is used as an X-ray detector and the final image is displayed on film. In angiograms performed with CR, the spatial frequency components can be enhanced for easier analysis of fine blood vessels. Computed radiography has an automatic sensitivity- and latitude-setting mechanism, thus serving as an 'automatic camera.' This mechanism is useful for radiography with a mobile X-ray unit in hospital wards, intensive care units, or operating rooms, where the appropriate setting of exposure conditions is difficult. We applied this mechanism to direct percutaneous carotid angiography and intravenous digital subtraction angiography with a mobile X-ray unit. Direct percutaneous carotid angiograms using CR and a mobile X-ray unit were taken after the manual injection of a small amount of contrast material through a fine needle. We performed direct percutaneous carotid angiography with this method 68 times on 25 cases from August 1986 to December 1987. Of the 68 angiograms, 61 were evaluated as good compared with conventional angiography. Though the remaining seven were evaluated as poor, they were still diagnostically useful. This method is useful for carotid angiography in emergency rooms, intensive care units, or operating rooms. Cerebral venography using CR and a mobile X-ray unit was done after the manual injection of contrast material through the bilateral cubital veins. The cerebral venous system could be visualized from 16 to 24 seconds after the beginning of the injection of the contrast material. We performed cerebral venography with this method 14 times on six cases. These venograms were better than conventional angiograms in all cases. This method may be useful in managing patients suffering from cerebral venous thrombosis. (J.P.N.)

  14. Computation studies into architecture and energy transfer properties of photosynthetic units from filamentous anoxygenic phototrophs

    Energy Technology Data Exchange (ETDEWEB)

    Linnanto, Juha Matti [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Freiberg, Arvi [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu, Estonia and Institute of Molecular and Cell Biology, University of Tartu, Riia 23, 51010 Tartu (Estonia)

    2014-10-06

    We have used different computational methods to study structural architecture, and light-harvesting and energy transfer properties of the photosynthetic unit of filamentous anoxygenic phototrophs. Due to the huge number of atoms in the photosynthetic unit, a combination of atomistic and coarse methods was used for electronic structure calculations. The calculations reveal that the light energy absorbed by the peripheral chlorosome antenna complex transfers efficiently via the baseplate and the core B808–866 antenna complexes to the reaction center complex, in general agreement with the present understanding of this complex system.

  15. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed
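    A toy release-rate calculation in the spirit of the description above (this is not DUST itself; the container failure times, inventories and first-order leach rate are invented) shows how multiple failure times and waste-form release properties combine into a disposal-unit source term:

    ```python
    # Toy source term: containers fail at different times, then leach
    # their inventory at a first-order rate.
    import numpy as np

    failure_times = np.array([5.0, 20.0, 50.0])  # years
    inventory = np.array([1.0, 2.0, 1.5])        # Ci per container
    leach_rate = 0.05                            # 1/yr

    def release_rate(t):
        """Total release rate (Ci/yr) from all containers failed by time t."""
        dt = t - failure_times
        failed = dt > 0.0
        return float(np.sum(inventory[failed] * leach_rate
                            * np.exp(-leach_rate * dt[failed])))

    for t in (10.0, 30.0, 60.0):
        print(f"t = {t:5.1f} yr: {release_rate(t):.4f} Ci/yr")
    ```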

  16. Fusion energy division computer systems network

    International Nuclear Information System (INIS)

    Hammons, C.E.

    1980-12-01

    The Fusion Energy Division of the Oak Ridge National Laboratory (ORNL) operated by Union Carbide Corporation Nuclear Division (UCC-ND) is primarily involved in the investigation of problems related to the use of controlled thermonuclear fusion as an energy source. The Fusion Energy Division supports investigations of experimental fusion devices and related fusion theory. This memo provides a brief overview of the computing environment in the Fusion Energy Division and the computing support provided to the experimental effort and theory research

  17. 77 FR 31026 - Use of Computer Simulation of the United States Blood Supply in Support of Planning for Emergency...

    Science.gov (United States)

    2012-05-24

    ...] Use of Computer Simulation of the United States Blood Supply in Support of Planning for Emergency... entitled: ``Use of Computer Simulation of the United States Blood Supply in Support of Planning for... and panel discussions with experts from academia, regulated industry, government, and other...

  18. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...

  19. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  20. 24 CFR 290.21 - Computing annual number of units eligible for substitution of tenant-based assistance or...

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Computing annual number of units eligible for substitution of tenant-based assistance or alternative uses. 290.21 Section 290.21 Housing and... Multifamily Projects § 290.21 Computing annual number of units eligible for substitution of tenant-based...

  1. Glycoside hydrolase family 13 α-glucosidases encoded by Bifidobacterium breve UCC2003; A comparative analysis of function, structure and phylogeny.

    Science.gov (United States)

    Kelly, Emer D; Bottacini, Francesca; O'Callaghan, John; Motherway, Mary O'Connell; O'Connell, Kerry Joan; Stanton, Catherine; van Sinderen, Douwe

    2016-05-02

    Bifidobacterium breve is a noted inhabitant and one of the first colonizers of the human gastro intestinal tract (GIT). The ability of this bacterium to persist in the GIT is reflected by the abundance of carbohydrate-active enzymes that are encoded by its genome. One such family of enzymes is represented by the α-glucosidases, of which three, Agl1, Agl2 and MelD, have previously been identified and characterized in the prototype B. breve strain UCC2003. In this report, we describe an additional B. breve UCC2003-encoded α-glucosidase, along with a B. breve UCC2003-encoded α-glucosidase-like protein, designated here as Agl3 and Agl4, respectively, which together with the three previously described enzymes belong to glycoside hydrolase (GH) family 13. Agl3 was shown to exhibit hydrolytic specificity towards the α-(1→6) linkage present in palatinose; the α-(1→3) linkage present in turanose; the α-(1→4) linkages found in maltotriose and maltose; and to a lesser degree, the α-(1→2) linkage found in sucrose and kojibiose; and the α-(1→5) linkage found in leucrose. Surprisingly, based on the substrates analyzed, Agl4 did not exhibit biologically relevant α-glucosidic activity. With the presence of four functionally active GH13 α-glucosidases, B. breve UCC2003 is capable of hydrolyzing all α-glucosidic linkages that can be expected in glycan substrates in the lower GIT. This abundance of α-glucosidases provides B. breve UCC2003 with an adaptive ability and metabolic versatility befitting the transient nature of growth substrates in the GIT. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Calibration of the indium foil used for criticality accident dosimetry in the UCC-ND employee identification badge

    International Nuclear Information System (INIS)

    Ryan, M.T.; Butler, H.M.; Gupton, E.D.; Sims, C.S.

    1982-05-01

    The UCC-ND Employee Identification Badge contains an indium foil disc that is intended for use as a dosimetry screening device in the event of a criticality accident. While it is recognized that indium is not a precise mixed neutron-gamma dosimeter, its activation by neutrons provides adequate means for separating potentially exposed persons into three groups. These groups are: (1) personnel exposed below annual dose limits, (2) personnel exposed above annual dose limits but below 25 rem, and (3) personnel exposed above 25 rem. This screening procedure is designed to facilitate dosimeter processing in order to meet regulatory reporting requirements. A quick method of interpreting induced activity measurements is presented and discussed
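    The three-group screening logic reads naturally as a small decision function. In this hedged sketch the two activity action levels are placeholders, not the calibration constants from the report:

    ```python
    # Sort a measured indium-foil activity into the three screening
    # groups described above. Threshold values are placeholders.
    LOW_LIMIT = 1.0    # activity corresponding to the annual dose limit
    HIGH_LIMIT = 25.0  # activity corresponding to the 25 rem level

    def screening_group(activity):
        """Return screening group 1, 2 or 3 for a measured activity."""
        if activity < LOW_LIMIT:
            return 1   # exposed below annual dose limits
        if activity < HIGH_LIMIT:
            return 2   # above annual limits but below 25 rem
        return 3       # above 25 rem

    print([screening_group(a) for a in (0.2, 5.0, 40.0)])  # [1, 2, 3]
    ```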

  3. Development of a luciferase-based reporter system to monitor Bifidobacterium breve UCC2003 persistence in mice

    Directory of Open Access Journals (Sweden)

    Hill Colin

    2008-09-01

    Background: Probiotics such as bifidobacteria have been shown to maintain a healthy intestinal microbial balance and help protect against infections. However, despite these benefits, bifidobacteria still remain poorly understood at the biochemical, physiological and especially the genetic level. Herein we describe, for the first time, the development of a non-invasive luciferase-based reporter system for real-time tracking of Bifidobacterium species in vivo. Results: The reporter vector pLuxMC1 is based on the recently described theta-type plasmid pBC1 from B. catenulatum and the luxABCDE operon from pPL2lux. Derivatives of pLuxMC1, harbouring a bifidobacterial promoter (pLuxMC2) as well as a synthetically derived promoter (pLuxMC3) placed upstream of luxABCDE, were constructed and found to stably replicate in B. breve UCC2003. The subsequent analysis of these strains allowed us to assess the functionality of pLuxMC1 both in vitro and in vivo. Conclusion: Our results demonstrate the potential of pLuxMC1 as a real-time, non-invasive reporter system for Bifidobacterium. It has also allowed us, for the first time, to track the colonisation potential and persistence of this probiotic species in real time. An interesting and significant outcome of the study is the identification of the caecum as a niche environment for B. breve UCC2003 within the mouse gastrointestinal (GI) tract.

  4. Initial quantitative evaluation of computed radiography in an intensive care unit

    International Nuclear Information System (INIS)

    Hillis, D.J.; McDonald, I.G.; Kelly, W.J.

    1996-01-01

    The first computed radiography (CR) unit in Australia was installed at St Vincent's Hospital, Melbourne, in February 1994. An initial qualitative evaluation of the attitude of the intensive care unit (ICU) physicians to the CR unit was conducted by use of a survey. The results of the survey of ICU physicians indicated that images were available faster than under the previous system and that the use of the CR system was preferred to evaluate chest tubes and line placements. While it is recognized that a further detailed radiological evaluation of the CR system is required to establish the diagnostic performance of CR compared with conventional film, some comments on the implementation of the system and ICU physician attitudes to the CR system are put forward for consideration by other hospitals examining the possible use of CR systems. 11 refs., 1 tab

  5. All-optical quantum computing with a hybrid solid-state processing unit

    International Nuclear Information System (INIS)

    Pei Pei; Zhang Fengyang; Li Chong; Song Heshan

    2011-01-01

    We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have a prominent advantage of the insensitivity to dissipation process benefiting from the virtual excitation of subsystems. Moreover, the quantum nondemolition measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in a broader sense that different solid-state systems can merge and be integrated into one quantum processor afterward.

  6. Unit cell-based computer-aided manufacturing system for tissue engineering

    International Nuclear Information System (INIS)

    Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo

    2012-01-01

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering. (paper)

  7. Unit cell-based computer-aided manufacturing system for tissue engineering.

    Science.gov (United States)

    Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo

    2012-03-01

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering.

  8. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of a hybrid GPU/central processing unit (CPU) and full GPU implementation of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
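    A minimal NumPy sketch of the SP2 recursion (after Niklasson, Phys. Rev. B 2002, 66, 155115) makes the structure of the expansion concrete; function names, tolerances and the spectral-bound inputs are illustrative, and the dominant cost is the matrix square that the paper maps onto DGEMM/SGEMM:

    ```python
    # SP2: build the density matrix by recursive spectral projection,
    # using only matrix-matrix multiplications (no diagonalization).
    import numpy as np

    def sp2_density_matrix(h, n_occ, eps_min, eps_max, tol=1e-8, max_iter=100):
        """Return the density matrix P of H with trace(P) ~= n_occ."""
        n = h.shape[0]
        # Map the spectrum of H into [0, 1], reversed so occupied
        # states are driven toward eigenvalue 1.
        x = (eps_max * np.eye(n) - h) / (eps_max - eps_min)
        for _ in range(max_iter):
            trace_err = np.trace(x) - n_occ
            if abs(trace_err) < tol:
                break
            x2 = x @ x  # the dominant, GEMM-shaped step
            # Pick the projection that steers trace(X) toward n_occ:
            # X @ X lowers the trace, 2X - X @ X raises it.
            x = x2 if trace_err > 0 else 2.0 * x - x2
        return x
    ```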

  9. Computer use and vision-related problems among university students in Ajman, United Arab Emirates.

    Science.gov (United States)

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The chi-square test was used to determine the significance of the observed differences between the variables; the level of statistical significance was set at P < 0.05. The most frequently reported symptoms among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with occurrence of the symptoms and with increased interruption of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer-related vision problems.
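    For readers unfamiliar with the statistics quoted above, an odds ratio with its 95% confidence interval follows from a 2×2 exposure/outcome table via the standard error of the log odds ratio; the counts below are invented for illustration, not the study's data:

    ```python
    # Odds ratio and 95% CI from a 2x2 table (illustrative counts).
    import math

    a, b = 110, 120  # symptom present: exposed / unexposed
    c, d = 150, 120  # symptom absent:  exposed / unexposed

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```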

  10. On dosimetry of radiodiagnosis facilities, mainly focused on computed tomography units

    International Nuclear Information System (INIS)

    Ghitulescu, Zoe

    2008-01-01

    This talk addresses the dosimetry of computed tomography units and is structured in three parts: 1) basics of image acquisition using the computed tomography technique; 2) effective dose calculation for a patient and its assessment using the BERT concept; 3) recommended actions for reaching a good compromise between the delivered dose and image quality. The aim of the first part is to acquaint the reader with the CT technique, so that the worked example of an effective dose calculation and its conversion into time units through the BERT concept can be followed. The conclusion drawn is that the effective dose, calculated by the medical physicist (using dedicated software for the CT scanner and the examination type) and converted into time units through the BERT concept, can then be communicated by the radiologist together with the diagnostic notes. A minimum of information for patients regarding the nature and type of radiation is thus clearly necessary, for instance by means of leaflets. The third part discusses the factors that lead to good image quality while observing the ALARA principle of radiation protection, which states that the dose should be 'as low as reasonably achievable'. (author)
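    The BERT ("background equivalent radiation time") conversion itself is a one-liner: an effective dose is re-expressed as the time over which natural background radiation delivers the same dose. In this sketch the 2.4 mSv/yr world-average background and the 2 mSv example dose are assumptions, not values from the talk:

    ```python
    # BERT: convert an effective dose to equivalent days of natural
    # background exposure (2.4 mSv/yr assumed here).
    BACKGROUND_MSV_PER_YEAR = 2.4

    def bert_days(effective_dose_msv):
        """Effective dose (mSv) -> days of natural background."""
        return effective_dose_msv / BACKGROUND_MSV_PER_YEAR * 365.0

    # e.g. a ~2 mSv head CT reads as roughly 300 days of background
    print(f"{bert_days(2.0):.0f} days")
    ```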

  11. Computer-aided modeling of aluminophosphate zeolites as packings of building units

    KAUST Repository

    Peskov, Maxim

    2012-03-22

    New building schemes of aluminophosphate molecular sieves from packing units (PUs) are proposed. We have investigated 61 framework types discovered in zeolite-like aluminophosphates and have identified important PU combinations using a recently implemented computational algorithm of the TOPOS package. All PUs whose packing completely determines the overall topology of the aluminophosphate framework were described and catalogued. We have enumerated 235 building models for the aluminophosphates belonging to 61 zeolite framework types, from ring- or cage-like PU clusters. It is indicated that PUs can be considered as precursor species in the zeolite synthesis processes. © 2012 American Chemical Society.

  12. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  13. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a massmarket product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  14. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a massmarket product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  15. Functional genome analysis of Bifidobacterium breve UCC2003 reveals type IVb tight adherence (Tad) pili as an essential and conserved host-colonization factor

    Science.gov (United States)

    O'Connell Motherway, Mary; Zomer, Aldert; Leahy, Sinead C.; Reunanen, Justus; Bottacini, Francesca; Claesson, Marcus J.; O'Brien, Frances; Flynn, Kiera; Casey, Patrick G.; Moreno Munoz, Jose Antonio; Kearney, Breda; Houston, Aileen M.; O'Mahony, Caitlin; Higgins, Des G.; Shanahan, Fergus; Palva, Airi; de Vos, Willem M.; Fitzgerald, Gerald F.; Ventura, Marco; O'Toole, Paul W.; van Sinderen, Douwe

    2011-01-01

    Development of the human gut microbiota commences at birth, with bifidobacteria being among the first colonizers of the sterile newborn gastrointestinal tract. To date, the genetic basis of Bifidobacterium colonization and persistence remains poorly understood. Transcriptome analysis of the Bifidobacterium breve UCC2003 2.42-Mb genome in a murine colonization model revealed differential expression of a type IVb tight adherence (Tad) pilus-encoding gene cluster designated “tad2003.” Mutational analysis demonstrated that the tad2003 gene cluster is essential for efficient in vivo murine gut colonization, and immunogold transmission electron microscopy confirmed the presence of Tad pili at the poles of B. breve UCC2003 cells. Conservation of the Tad pilus-encoding locus among other B. breve strains and among sequenced Bifidobacterium genomes supports the notion of a ubiquitous pili-mediated host colonization and persistence mechanism for bifidobacteria. PMID:21690406

  16. Functional genome analysis of Bifidobacterium breve UCC2003 reveals type IVb tight adherence (Tad) pili as an essential and conserved host-colonization factor.

    Science.gov (United States)

    O'Connell Motherway, Mary; Zomer, Aldert; Leahy, Sinead C; Reunanen, Justus; Bottacini, Francesca; Claesson, Marcus J; O'Brien, Frances; Flynn, Kiera; Casey, Patrick G; Munoz, Jose Antonio Moreno; Kearney, Breda; Houston, Aileen M; O'Mahony, Caitlin; Higgins, Des G; Shanahan, Fergus; Palva, Airi; de Vos, Willem M; Fitzgerald, Gerald F; Ventura, Marco; O'Toole, Paul W; van Sinderen, Douwe

    2011-07-05

    Development of the human gut microbiota commences at birth, with bifidobacteria being among the first colonizers of the sterile newborn gastrointestinal tract. To date, the genetic basis of Bifidobacterium colonization and persistence remains poorly understood. Transcriptome analysis of the Bifidobacterium breve UCC2003 2.42-Mb genome in a murine colonization model revealed differential expression of a type IVb tight adherence (Tad) pilus-encoding gene cluster designated "tad(2003)." Mutational analysis demonstrated that the tad(2003) gene cluster is essential for efficient in vivo murine gut colonization, and immunogold transmission electron microscopy confirmed the presence of Tad pili at the poles of B. breve UCC2003 cells. Conservation of the Tad pilus-encoding locus among other B. breve strains and among sequenced Bifidobacterium genomes supports the notion of a ubiquitous pili-mediated host colonization and persistence mechanism for bifidobacteria.

  17. Feasibility Study and Cost Benefit Analysis of Thin-Client Computer System Implementation Onboard United States Navy Ships

    National Research Council Canada - National Science Library

    Arbulu, Timothy D; Vosberg, Brian J

    2007-01-01

    The purpose of this MBA project was to conduct a feasibility study and a cost benefit analysis of using thin-client computer systems instead of traditional networks onboard United States Navy ships...

  18. Improvement of uncorrected visual acuity (UCVA) and contrast sensitivity (UCCS) with perceptual learning and transcranial random noise stimulation (tRNS) in individuals with mild myopia

    Directory of Open Access Journals (Sweden)

    Rebecca eCamilleri

    2014-10-01

    Perceptual learning has been shown to produce an improvement of visual acuity (VA) and contrast sensitivity (CS) both in subjects with amblyopia and in those with refractive defects such as myopia or presbyopia. Transcranial random noise stimulation (tRNS) has proven to be efficacious in accelerating neural plasticity and boosting perceptual learning in healthy participants. In this study we investigated whether a short behavioural training regime using a contrast detection task combined with online tRNS was as effective in improving visual functions in participants with mild myopia as a two-month behavioural training regime without tRNS (Camilleri et al., 2014). After two weeks of perceptual training in combination with tRNS, participants showed an improvement of 0.15 LogMAR in uncorrected VA (UCVA) that was comparable with that obtained after eight weeks of training with no tRNS, and an improvement in uncorrected CS (UCCS) at various spatial frequencies (whereas no UCCS improvement was seen after eight weeks of training with no tRNS). On the other hand, a control group that trained for two weeks without stimulation did not show any significant UCVA or UCCS improvement. These results suggest that the combination of behavioural and neuromodulatory techniques can be fast and efficacious in improving sight in individuals with mild myopia.

  19. Bifidobacterium breve UCC2003 surface exopolysaccharide production is a beneficial trait mediating commensal-host interaction through immune modulation and pathogen protection.

    Science.gov (United States)

    Fanning, Saranna; Hall, Lindsay J; van Sinderen, Douwe

    2012-01-01

    Bifidobacteria constitute a substantial proportion of the human gut microbiota. There are currently many bifidobacterial strains with claimed probiotic attributes. The mechanism through which these strains reside within their host and exert benefits to the host is far from fully understood. We have shown in the case of Bifidobacterium breve UCC2003 that a cell surface exopolysaccharide (EPS) plays a role in in vivo persistence. Biosynthesis of two possible EPSs is controlled by a bidirectional gene cluster which guides alternate EPS synthesis by means of a reorienting promoter. The presence of EPS impacts on the host immune response: the wild-type, EPS-positive B. breve UCC2003 efficiently evades the adaptive B-cell host response, while its isogenic, EPS-deficient equivalent elicits a strong adaptive immune response. Functionally, EPS-positive strains were more resilient to the presence of acid and bile and were responsible for reduced colonization levels of Citrobacter rodentium, a gut pathogen. In conclusion, we have found that EPS is important in host interactions and pathogen protection, the latter indicative of a probiotic role for the EPS of B. breve UCC2003.

  20. Cross-feeding by Bifidobacterium breve UCC2003 during co-cultivation with Bifidobacterium bifidum PRL2010 in a mucin-based medium.

    Science.gov (United States)

    Egan, Muireann; Motherway, Mary O'Connell; Kilcoyne, Michelle; Kane, Marian; Joshi, Lokesh; Ventura, Marco; van Sinderen, Douwe

    2014-11-25

    Bifidobacteria constitute a specific group of commensal bacteria that commonly inhabit the mammalian gastrointestinal tract. Bifidobacterium breve UCC2003 was previously shown to utilize a variety of plant/diet/host-derived carbohydrates, including cellodextrin, starch and galactan, as well as the mucin and HMO-derived monosaccharide, sialic acid. In the current study, we investigated the ability of this strain to utilize parts of a host-derived source of carbohydrate, namely the mucin glycoprotein, when grown in co-culture with the mucin-degrading Bifidobacterium bifidum PRL2010. B. breve UCC2003 was shown to exhibit growth properties in a mucin-based medium, but only when grown in the presence of B. bifidum PRL2010, which is known to metabolize mucin. A combination of HPAEC-PAD and transcriptome analyses identified some of the possible monosaccharides and oligosaccharides which support this enhanced co-cultivation growth/viability phenotype. This study describes the potential existence of a gut commensal relationship between two bifidobacterial species. We demonstrate the in vitro ability of B. breve UCC2003 to cross-feed on sugars released by the mucin-degrading activity of B. bifidum PRL2010, thus advancing our knowledge on the metabolic adaptability which allows the former strain to colonize the (infant) gut by its extensive metabolic abilities to (co-)utilize available carbohydrate sources.

  1. Chemical Equilibrium, Unit 2: Le Chatelier's Principle. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide.

    Science.gov (United States)

    Jameson, A. Keith

    Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this unit on Le Chatelier's principle includes objectives, prerequisites, pretest, instructions for executing the computer program, and…

  2. Computer interfacing of the unified systems for personnel supervising in nuclear units

    International Nuclear Information System (INIS)

    Staicu, M.

    1997-01-01

    The dosimetric supervision of personnel working in nuclear units is based on information supplied by: 1) dosimetric data obtained by the thermoluminescence method; 2) dosimetric data obtained by film dosimetry; 3) records from periodic medical control. To create a unified system of supervision, the following elements were combined: a) an Automatic System of TLD Reading and Data Processing (SACDTL), whose data are transmitted 'on line' to the computer; b) the line for measuring the optical density of exposed dosimetric films; the interface developed within the general SACDTL ensemble could be adapted to this measurement line, and its data are likewise transmitted 'on line'; c) the medical surveillance data for each person, transmitted 'off line' to the database computer. The unified system resulting from the unification of the three supervision systems achieves the following general functions: registering the personnel working in the nuclear field; recording the dosimetric data; processing and presenting the data; issuing measurement bulletins. Thus, by means of the unified database, dosimetric intercomparisons and correlative studies can be undertaken. (author)
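    A toy sketch of the unification step (field names and record shapes are invented for illustration; the abstract does not describe the real system's formats): the three data streams are keyed by a person identifier and merged into one dose-history record per worker.

    ```python
    # Merge TLD, film-densitometry and medical-control records into a
    # unified per-person database entry.
    from collections import defaultdict

    def merge_records(tld, film, medical):
        """Each argument is a list of dicts carrying a 'person_id' key."""
        unified = defaultdict(lambda: {"tld": [], "film": [], "medical": []})
        for source, records in (("tld", tld), ("film", film), ("medical", medical)):
            for rec in records:
                unified[rec["person_id"]][source].append(rec)
        return dict(unified)

    db = merge_records(
        tld=[{"person_id": 7, "dose_mSv": 0.4}],
        film=[{"person_id": 7, "optical_density": 0.12}],
        medical=[{"person_id": 7, "exam": "periodic", "fit": True}],
    )
    print(db[7])
    ```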

  3. Pre-treatment with Bifidobacterium breve UCC2003 modulates Citrobacter rodentium-induced colonic inflammation and organ specificity.

    Science.gov (United States)

    Collins, James W; Akin, Ali R; Kosta, Artemis; Zhang, Ning; Tangney, Mark; Francis, Kevin P; Frankel, Gad

    2012-11-01

    Citrobacter rodentium, which colonizes the gut mucosa via formation of attaching and effacing (A/E) lesions, causes transmissible colonic hyperplasia. The aim of this study was to evaluate whether prophylactic treatment with Bifidobacterium breve UCC2003 can improve the outcome of C. rodentium infection. Six-week-old albino C57BL/6 mice were pre-treated for 3 days with B. breve, challenged with bioluminescent C. rodentium and administered B. breve or PBS-C for 8 days post-infection; control mice were either administered B. breve and mock-infected with PBS, or mock-treated with PBS-C and mock-infected with PBS. C. rodentium colonization was monitored by bacterial enumeration from faeces and by a combination of both 2D bioluminescence imaging (BLI) and composite 3D diffuse light imaging tomography with µCT imaging (DLIT-µCT). At day 8 post-infection, colons were removed and assessed for crypt hyperplasia, histology by light microscopy, bacterial colonization by immunofluorescence, and A/E lesion formation by electron microscopy. Prophylactic administration of B. breve did not prevent C. rodentium colonization or A/E lesion formation. However, this treatment did alter C. rodentium distribution within the large intestine and significantly reduced colonic crypt hyperplasia at the peak of bacterial infection. These results show that B. breve could not competitively exclude C. rodentium, but reduced pathogen-induced colonic inflammation.

  4. Cone beam computed tomography image guidance system for a dedicated intracranial radiosurgery treatment unit.

    Science.gov (United States)

    Ruschin, Mark; Komljenovic, Philip T; Ansell, Steve; Ménard, Cynthia; Bootsma, Gregory; Cho, Young-Bin; Chung, Caroline; Jaffray, David

    2013-01-01

    Image guidance has improved the precision of fractionated radiation treatment delivery on linear accelerators. Precise radiation delivery is particularly critical when high doses are delivered to complex shapes with steep dose gradients near critical structures, as is the case for intracranial radiosurgery. To reduce potential geometric uncertainties, a cone beam computed tomography (CT) image guidance system was developed in-house to generate high-resolution images of the head at the time of treatment, using a dedicated radiosurgery unit. The performance and initial clinical use of this imaging system are described. A kilovoltage cone beam CT system was integrated with a Leksell Gamma Knife Perfexion radiosurgery unit. The X-ray tube and flat-panel detector are mounted on a translational arm, which is parked above the treatment unit when not in use. Upon descent, a rotational axis provides 210° of rotation for cone beam CT scans. Mechanical integrity of the system was evaluated over a 6-month period. Subsequent clinical commissioning included end-to-end testing of targeting performance and subjective image quality performance in phantoms. The system has been used to image 2 patients, 1 of whom received single-fraction radiosurgery and 1 who received 3 fractions, using a relocatable head frame. Images of phantoms demonstrated soft tissue contrast visibility and submillimeter spatial resolution. A contrast difference of 35 HU was easily detected at a calibration dose of 1.2 cGy (center of head phantom). The shape of the mechanical flex vs scan angle was highly reproducible. The cone beam CT image guidance system was successfully adapted to a radiosurgery unit and is capable of producing high-resolution images of bone and soft tissue. The system is in clinical use and provides excellent image guidance without invasive frames. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Computer programs for unit-cell determination in electron diffraction experiments

    International Nuclear Information System (INIS)

    Li, X.Z.

    2005-01-01

    A set of computer programs for unit-cell determination from an electron diffraction tilt series and for pattern indexing has been developed on the basis of several well-established algorithms. In this approach, a reduced direct primitive cell is first determined from the experimental data; at the same time, the measurement errors of the tilt angles are checked and minimized. The derived primitive cell is then checked for possible higher lattice symmetry and transformed into a proper conventional cell. Finally, a least-squares refinement procedure is adopted to generate optimum lattice parameters on the basis of the lengths of basic reflections in each diffraction pattern and the indices of these reflections. Examples are given to show the usage of the programs.
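
    For the simplest (cubic) case, the final least-squares refinement step reduces to fitting 1/d^2 = (h^2 + k^2 + l^2)/a^2 over the indexed reflections. A small NumPy sketch of just that step, with hypothetical measured d-spacings (roughly fcc aluminium); the published programs handle arbitrary lattice symmetry:

      import numpy as np

      # Hypothetical indexed reflections: (h, k, l) and measured d-spacings in angstroms.
      hkl = np.array([[1, 1, 1], [2, 0, 0], [2, 2, 0], [3, 1, 1]])
      d_meas = np.array([2.338, 2.024, 1.431, 1.221])

      N = (hkl ** 2).sum(axis=1)            # h^2 + k^2 + l^2
      y = 1.0 / d_meas ** 2                 # cubic lattice: 1/d^2 = N / a^2
      inv_a2, *_ = np.linalg.lstsq(N[:, None].astype(float), y, rcond=None)
      print(f"refined a = {1.0 / np.sqrt(inv_a2[0]):.4f} angstroms")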

  6. Development of Thermal Performance Analysis Computer Program on Turbine Cycle of Yonggwang 3,4 Units

    Energy Technology Data Exchange (ETDEWEB)

    Hong, S.Y.; Choi, K.H.; Jee, M.H.; Chung, S.I. [Korea Electric Power Research Institute, Taejon (Korea)

    2002-07-01

    The objective of the study 'Development of Thermal Performance Analysis Computer Program on Turbine Cycle of Yonggwang 3,4 Units' is to apply a computerized program to performance testing of the turbine cycle and to analysis of the operational status of thermal plants. In addition, the results are applicable to the analysis of thermal output under abnormal conditions and provide a powerful tool for identifying the main problems in such cases. As a result, the output of this study offers a way to confirm the technical capability to operate the plants efficiently and to obtain remarkable economic gains. (author). 27 refs., 73 figs., 6 tabs.

  7. Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry studies.

    Science.gov (United States)

    Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle

    2009-01-01

    Potential health effects related to electromagnetic field exposure raise public concern, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians for anatomical accuracy and representativeness.

  8. UPTF test instrumentation. Measurement system identification, engineering units and computed parameters

    International Nuclear Information System (INIS)

    Sarkar, J.; Liebert, J.; Laeufer, R.

    1992-11-01

    This updated version of the previous report /1/ contains, besides the additional instrumentation needed for the 2D/3D Programme, the supplementary instrumentation in the inlet plenum of the SG simulator, the hot and cold legs of the broken loop, the cold legs of the intact loops and the upper plenum, to meet the requirements (Test Phase A) of the UPTF Programme TRAM, sponsored by the Federal Minister of Research and Technology (BMFT) of the Federal Republic of Germany. To aid understanding, the derivation and description of the identification codes for the entire conventional and advanced measurement systems, classifying the function and the equipment unit key as adopted in conventional power plants, have been included. Amendments have also been made to the appendices. In particular, the list of measurement systems, covering the measurement identification code, instrument, measured quantity, measuring range, bandwidth, uncertainty and sensor location, has been updated and extended to include the supplementary instrumentation. Beyond these amendments, the uncertainties of the measurements have been precisely specified. The measurement identification codes, which also identify the corresponding measured quantities in engineering units, and the identification codes derived from them for the computed parameters, have been adequately detailed. (orig.)

  9. Unit physics performance of a mix model in Eulerian fluid computations

    Energy Technology Data Exchange (ETDEWEB)

    Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.

  10. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the GPU version achieved a more than 40-fold speedup compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.
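
    The core of the fitting workload is evaluating an objective function (e.g. a weighted chi-square) over large arrays, which maps naturally onto a GPU. Below is a minimal sketch of such an offload using CuPy as a drop-in NumPy replacement; the exponential model and all numbers are illustrative stand-ins, not the PSI fitting model:

      import numpy as np

      try:
          import cupy as xp        # GPU path, if CuPy and a CUDA device are available
      except ImportError:
          xp = np                  # transparent CPU fallback

      def chi_square(params, t, data, err):
          """Weighted chi-square of a toy exponential-decay model, evaluated
          entirely on the device that `xp` points to."""
          amp, rate = params
          model = amp * xp.exp(-rate * t)
          return xp.sum(((data - model) / err) ** 2).item()

      t = xp.linspace(0.0, 10.0, 1_000_000)
      data = 5.0 * xp.exp(-0.3 * t) + 0.01 * xp.cos(t)   # synthetic "measurement"
      err = xp.full_like(t, 0.05)
      print(chi_square((5.0, 0.3), t, data, err))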

  11. Computer science teacher professional development in the United States: a review of studies published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-10-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher professional development. In this study, the main goal was to systematically review the studies regarding computer science professional development to understand the scope, context, and effectiveness of these programs in the past decade (2004-2014). Based on 21 journal articles and conference proceedings, this study explored: (1) Type of professional development organization and source of funding, (2) professional development structure and participants, (3) goal of professional development and type of evaluation used, (4) specific computer science concepts and training tools used, (5) and their effectiveness to improve teacher practice and student learning.

  12. Utilizing General Purpose Graphics Processing Units to Improve Performance of Computer Modelling and Visualization

    Science.gov (United States)

    Monk, J.; Zhu, Y.; Koons, P. O.; Segee, B. E.

    2009-12-01

    With the introduction of the G8X series of cards by nVidia, an architecture called CUDA was released; virtually all subsequent video cards have had CUDA support. With this new architecture nVidia provided extensions for C/C++ that create an Application Programming Interface (API) allowing code to be executed on the GPU. Since then the concept of GPGPU (general-purpose computing on graphics processing units) has been growing: the GPU is very good at linear algebra and at running operations in parallel, so that power can be put to use for other applications. This is highly appealing in the area of geodynamic modeling, as multiple parallel solutions of the same differential equations at different points in space lead to a large gain in simulation speed. Another benefit of CUDA is a programmatic method of transferring large amounts of data between the computer's main memory and the dedicated GPU memory located on the video card. In addition to enabling computation and rendering on the video card, the CUDA framework allows for a large speedup in situations, such as a tiled display wall, where the rendered pixels are to be displayed in a different location from where they are rendered. A CUDA extension for VirtualGL was developed, allowing faster read-back at high resolutions. This paper examines several aspects of rendering OpenGL graphics on large displays using VirtualGL and VNC, and demonstrates how performance can be significantly improved when rendering on a tiled monitor wall. We present a CUDA-enhanced version of VirtualGL as well as the advantages of having multiple VNC servers, and discuss restrictions caused by read-back and blitting rates and how they are affected by different sizes of virtual displays being rendered.

  13. Computational fluid dynamics simulation of wind-driven inter-unit dispersion around multi-storey buildings: Upstream building effect

    DEFF Research Database (Denmark)

    Ai, Zhengtao; Mak, C.M.; Dai, Y.W.

    2017-01-01

    The presence of an upstream building changes the wind-driven airflow patterns around a downstream building; this study investigates the influence of such changed airflow patterns on inter-unit dispersion characteristics around a multi-storey building. The computational fluid dynamics (CFD) method, in the framework of Reynolds-averaged Navier-Stokes modelling, was employed to predict the coupled outdoor and indoor airflow field, and the tracer gas technique was used to simulate the dispersion of infectious agents between units. Based on the predicted concentration field, a mass-conservation-based parameter, namely the re-entry ratio, was used to evaluate quantitatively the inter-unit dispersion possibilities and thus assess risks along...

  14. Usefulness of computed tomography Hounsfield unit measurement for diagnosis of congenital cholesteatoma

    International Nuclear Information System (INIS)

    Ahn, Sang Hyuk; Kim, Yong Woo; Baik, Seung Kug; Hwang, Jae Yeon; Lee, Il Woo

    2014-01-01

    To evaluate the usefulness of Hounsfield unit (HU) measurements for diagnosing congenital cholesteatoma. A total of 43 patients who underwent surgery due to middle ear cavity lesions were enrolled. Twenty-one patients were confirmed to have congenital cholesteatoma by histopathological results and the other 22 patients were confirmed to have otitis media (OM) by operation. Their computed tomography images were retrospectively reviewed. We measured the HU of the soft tissue mass in the middle ear cavity. In addition, we evaluated the largest diameter and location of the mass, the presence of bony erosion in the ear ossicles, and the status of the tympanic membrane in the cholesteatoma group. The mean HU was 37.36 ± 6.11 (range, 27.5-52.5) in the congenital cholesteatoma group and 76.09 ± 8.74 (range, 58.5-96) in the OM group (p < 0.001). The cut-off value was 55.5. The most common location for congenital cholesteatoma was the mesotympanum, and ear ossicle erosion was present in 24%. All patients had an intact tympanic membrane. HU measurement may be useful as an additional indicator to diagnose congenital cholesteatoma.
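
    The reported group means and cut-off suggest a one-line decision rule. The sketch below only illustrates how the 55.5 HU cut-off would be applied; it is not a validated diagnostic tool:

      def classify_middle_ear_mass(mean_hu: float, cutoff: float = 55.5) -> str:
          """Apply the reported HU cut-off: cholesteatoma averaged ~37 HU,
          otitis media ~76 HU. A real work-up also weighs lesion location,
          ossicular erosion and tympanic membrane status."""
          return "congenital cholesteatoma" if mean_hu < cutoff else "otitis media"

      for hu in (37.4, 55.5, 76.1):
          print(hu, "->", classify_middle_ear_mass(hu))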

  15. Usefulness of computed tomography Hounsfield unit measurement for diagnosis of congenital cholesteatoma

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang Hyuk; Kim, Yong Woo; Baik, Seung Kug; Hwang, Jae Yeon; Lee, Il Woo [Medical Research Institute, Pusan National University Yangsan Hospital, College of Medicine, Pusan National University, Yangsan (Korea, Republic of)

    2014-02-15

    To evaluate the usefulness of Hounsfield unit (HU) measurements for diagnosing congenital cholesteatoma. A total of 43 patients who underwent surgery due to middle ear cavity lesions were enrolled. Twenty-one patients were confirmed to have congenital cholesteatoma by histopathological results and the other 22 patients were confirmed to have otitis media (OM) by operation. Their computed tomography images were retrospectively reviewed. We measured the HU of the soft tissue mass in the middle ear cavity. In addition, we evaluated the largest diameter and location of the mass, the presence of bony erosion in the ear ossicles, and the status of the tympanic membrane in the cholesteatoma group. The mean HU was 37.36 ± 6.11 (range, 27.5-52.5) in the congenital cholesteatoma group and 76.09 ± 8.74 (range, 58.5-96) in the OM group (p < 0.001). The cut-off value was 55.5. The most common location for congenital cholesteatoma was the mesotympanum, and ear ossicle erosion was present in 24%. All patients had an intact tympanic membrane. HU measurement may be useful as an additional indicator to diagnose congenital cholesteatoma.

  16. Effect of Jigsaw II, Reading-Writing-Presentation, and Computer Animations on the Teaching of "Light" Unit

    Science.gov (United States)

    Koç, Yasemin; Yildiz, Emre; Çaliklar, Seyma; Simsek, Ümit

    2016-01-01

    The aim of this study is to determine the effect of Jigsaw II technique, reading-writing-presentation method, and computer animation on students' academic achievements, epistemological beliefs, attitudes towards science lesson, and the retention of knowledge in the "Light" unit covered in the 7th grade. The sample of the study consists…

  17. The Development of an Individualized Instructional Program in Beginning College Mathematics Utilizing Computer Based Resource Units. Final Report.

    Science.gov (United States)

    Rockhill, Theron D.

    Reported is an attempt to develop and evaluate an individualized instructional program in pre-calculus college mathematics. Four computer based resource units were developed in the areas of set theory, relations and function, algebra, trigonometry, and analytic geometry. Objectives were determined by experienced calculus teachers, and…

  18. Knowledge sharing, learning culture and digital learning resources at Absalons Skole, “Projekt VILD”. By Mette Hannibal & Britta Vejen, Professionshøjskolen UCC

    DEFF Research Database (Denmark)

    Hannibal, Mette; Vejen, Britta

    2016-01-01

    This report is the outcome of a development project carried out at Absalons Skole in collaboration between the school and the external consultants Mette Hannibal and Britta Vejen from Professionshøjskolen UCC in Copenhagen. The project was supported by the public and school library fund (folke- og skolebibliotekspuljen) of the Danish Agency for Culture (Kulturstyrelsen), now Udviklin...

  19. Performance characterization of megavoltage computed tomography imaging on a helical tomotherapy unit

    International Nuclear Information System (INIS)

    Meeks, Sanford L.; Harmon, Joseph F. Jr.; Langen, Katja M.; Willoughby, Twyla R.; Wagner, Thomas H.; Kupelian, Patrick A.

    2005-01-01

    Helical tomotherapy is an innovative means of delivering IGRT and IMRT using a device that combines features of a linear accelerator and a helical computed tomography (CT) scanner. The HI-ART II can generate CT images from the same megavoltage x-ray beam it uses for treatment. These megavoltage CT (MVCT) images offer verification of the patient position prior to and potentially during radiation therapy. Since the unit uses the actual treatment beam as the x-ray source for image acquisition, no surrogate telemetry systems are required to register image space to treatment space. The disadvantage to using the treatment beam for imaging, however, is that the physics of radiation interactions in the megavoltage energy range may force compromises between the dose delivered and the image quality in comparison to diagnostic CT scanners. The performance of the system is therefore characterized in terms of objective measures of noise, uniformity, contrast, and spatial resolution as a function of the dose delivered by the MVCT beam. The uniformity and spatial resolution of MVCT images generated by the HI-ART II are comparable to those of diagnostic CT images. Furthermore, the MVCT scan contrast is linear with respect to the electron density of the material imaged. MVCT images do not have the same performance characteristics as state-of-the-art diagnostic CT scanners when one objectively examines noise and low-contrast resolution. These inferior results may be explained, at least partially, by the low doses delivered by our unit; the dose is 1.1 cGy in a 20 cm diameter cylindrical phantom. In spite of the poorer low-contrast resolution, these relatively low-dose MVCT scans provide sufficient contrast to delineate many soft-tissue structures. Hence, these images are useful not only for verifying the patient's position at the time of therapy, but they are also sufficient for delineating many anatomic structures. In conjunction with the ability to recalculate radiotherapy doses on

  20. Characterization of the genetic locus responsible for the production of ABP-118, a novel bacteriocin produced by the probiotic bacterium Lactobacillus salivarius subsp. salivarius UCC118.

    Science.gov (United States)

    Flynn, Sarah; van Sinderen, Douwe; Thornton, Gerardine M; Holo, Helge; Nes, Ingolf F; Collins, J Kevin

    2002-04-01

    ABP-118, a small heat-stable bacteriocin produced by Lactobacillus salivarius subsp. salivarius UCC118, a strain isolated from the ileal-caecal region of the human gastrointestinal tract, was purified to homogeneity. Using reverse genetics, a DNA fragment specifying part of ABP-118 was identified on a 10769 bp chromosomal region. Analysis of this region revealed that ABP-118 was a Class IIb two-peptide bacteriocin composed of Abp118alpha, which exhibited the antimicrobial activity, and Abp118beta, which enhanced the antimicrobial activity. The gene conferring strain UCC118 immunity to the action of ABP-118, abpIM, was identified downstream of the abp118beta gene. Located further downstream of abp118beta, several ORFs were identified whose deduced proteins resembled those of proteins involved in bacteriocin regulation and secretion. Heterologous expression of ABP-118 was achieved in Lactobacillus plantarum, Lactococcus lactis and Bacillus cereus. In addition, the abp118 locus encoded an inducing peptide, AbpIP, which was shown to play a role in the regulation of ABP-118 production. This novel bacteriocin is, to the authors' knowledge, the first to be isolated from a known human probiotic bacterium and to be characterized at the genetic level.

  1. Suitable exposure conditions for CB Throne? New model cone beam computed tomography unit for dental use

    International Nuclear Information System (INIS)

    Tanabe, Kouji; Nishikawa, Keiichi; Yajima, Aya; Mizuta, Shigeru; Sano, Tsukasa; Yajima, Yasutomo; Nakagawa, Kanichi; Kousuge, Yuuji

    2008-01-01

    The CB Throne is a cone beam computed tomography unit for dental use, and the smaller version of the CB MercuRay developed by Hitachi Medico Co. We investigated which exposure conditions are suitable in clinical use. Suitable exposure conditions were determined by simple subjective comparisons. The right temporomandibular joint of a head phantom was scanned at all possible combinations of tube voltage (60, 80, 100, 120 kV) and tube current (10, 15 mA). Oblique-sagittal images of the same position were obtained using the multiplanar reconstruction (MPR) function. Images obtained at 120 kV and 15 mA, the highest exposure conditions and certain to produce images of the best quality, were used as the standard. Eight oral radiologists observed each image and the standard image on an LCD monitor. They subjectively compared spatial resolution and noise between each image and the standard image using a 10-cm scale. Evaluation points were obtained from the check positions on the scales. The Steel method was used to determine significant differences. The images at 60 kV/10 mA and 80 kV/15 mA showed significantly lower evaluation points for spatial resolution. The images at 60 kV/10 mA, 60 kV/15 mA and 80 kV/10 mA showed significantly lower evaluation points for noise. In conclusion, even if exposure conditions are reduced to 100 kV/10 mA, 100 kV/15 mA or 120 kV/10 mA, the CB Throne will produce images of the best quality. (author)

  2. Absolute Hounsfield unit measurement on noncontrast computed tomography cannot accurately predict struvite stone composition.

    Science.gov (United States)

    Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj

    2013-02-01

    The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can accurately predict the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size, and of HU/HUD to the percentage of each component within the stone. Significance was considered if p<0.05. The correlation of struvite percentage was essentially nil with HU (R=0.017; p=0.912) and weakly negative with HUD (R=-0.20; p=0.898). Comparing pure struvite stones (n=5) with other miscellaneous stones (n=39), no difference was found for HU (p=0.09), but HUD was significantly lower for pure stones (27.9±23.6 vs 72.5±55.9, respectively; p=0.006). Again, significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists. A low HUD may increase the suspicion for a pure struvite calculus.

  3. Mechanical properties of regular porous biomaterials made from truncated cube repeating unit cells: Analytical solutions and computational models.

    Science.gov (United States)

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-03-01

    Additive manufacturing (AM) has enabled fabrication of open-cell porous biomaterials based on repeating unit cells. The micro-architecture of the porous biomaterials and, thus, their physical properties could then be precisely controlled. Due to their many favorable properties, porous biomaterials manufactured using AM are considered as promising candidates for bone substitution as well as for several other applications in orthopedic surgery. The mechanical properties of such porous structures, including static and fatigue properties, are shown to be strongly dependent on the type of the repeating unit cell based on which the porous biomaterial is built. In this paper, we study the mechanical properties of porous biomaterials made from a relatively new unit cell, namely the truncated cube. We present analytical solutions that relate the dimensions of the repeating unit cell to the elastic modulus, Poisson's ratio, yield stress, and buckling load of those porous structures. We also performed finite element modeling to predict the mechanical properties of the porous structures. The analytical solutions and computational results were found to be in agreement with each other. The mechanical properties estimated using both the analytical and computational techniques were somewhat higher than the experimental data reported in one of our recent studies on selective laser melted Ti-6Al-4V porous biomaterials. In addition to porosity, the elastic modulus and Poisson's ratio of the porous structures were found to be strongly dependent on the ratio of the length of the inclined struts to that of the uninclined (i.e. vertical or horizontal) struts, α, in the truncated cube unit cell. The geometry of the truncated cube unit cell approaches the octahedral and cube unit cells when α respectively approaches zero and infinity. Consistent with those geometrical observations, the analytical solutions presented in this study approached those of the octahedral and cube unit cells when α respectively approached zero and infinity.

  4. Porting of the transfer-matrix method for multilayer thin-film computations on graphics processing units

    Science.gov (United States)

    Limmer, Steffen; Fey, Dietmar

    2013-07-01

    Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. Therefore we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
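
    The underlying algorithm is the standard 2x2 characteristic-matrix formulation: each layer contributes a matrix, the stack is their product, and reflectance follows from the combined matrix. A plain-NumPy sketch at normal incidence for non-absorbing layers; how the paper parallelizes such evaluations on the GPU is not detailed in the abstract:

      import numpy as np

      def reflectance(layers, n_in, n_sub, wavelength):
          """Normal-incidence reflectance of a non-absorbing multilayer via the
          2x2 characteristic-matrix method. `layers` holds (refractive index,
          physical thickness) pairs; thickness in the same unit as wavelength."""
          M = np.eye(2, dtype=complex)
          for n, d in layers:
              delta = 2.0 * np.pi * n * d / wavelength
              M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                [1j * n * np.sin(delta), np.cos(delta)]])
          B, C = M @ np.array([1.0, n_sub])
          r = (n_in * B - C) / (n_in * B + C)
          return abs(r) ** 2

      # Quarter-wave MgF2 antireflection coating on glass at 550 nm: ~1.3% reflectance.
      print(reflectance([(1.38, 550.0 / (4 * 1.38))], n_in=1.0, n_sub=1.52, wavelength=550.0))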

  5. The Performance Improvement of the Lagrangian Particle Dispersion Model (LPDM) Using Graphics Processing Unit (GPU) Computing

    Science.gov (United States)

    2017-08-01

    A supercomputer was used for its GPU computing capability during the experiment. It has Nvidia Tesla K40 GPU accelerators containing 32 GPU nodes consisting of 1024 cores. CUDA is a parallel computing platform and application programming interface (API) model that was created and designed by Nvidia to give direct access to the GPU.

  6. Application of Computer Technology to Educational Administration in the United States.

    Science.gov (United States)

    Bozeman, William C.; And Others

    1991-01-01

    Description of evolution of computer applications in U.S. educational administration is followed by an overview of the structure and governance of public education and Visscher's developmental framework. Typical administrative computer applications in education are discussed, including student records, personnel management, budgeting, library…

  7. Measurement of parameters for the quality control of X-ray units by using PIN diodes and a personal computer

    International Nuclear Information System (INIS)

    Ramirez, F.; Gaytan, E.; Mercado, I.; Estrada, M.; Cerdeira, A.

    2000-01-01

    The design of a new system for the measurement of the main parameters of X-ray units used in medicine is presented. The system automatically measures the exposure time, applied high voltage, waveform of the detected signal, exposure rate and total exposure (dose). The X-ray detectors employed are PIN diodes developed at CINVESTAV; the measurements are done in a single shot, without invasive access to the X-ray unit. The results are shown on the computer screen and can be saved to a file for later analysis. The proposed system is intended for use in the quality control of X-ray units for clinical radio-diagnosis. It is simple and inexpensive compared with available commercial equipment based on ionization chambers and precision electrometers, which small facilities and hospitals often cannot afford.
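
    One of the listed measurements, exposure time, can be read directly off the digitized detector waveform by thresholding. A NumPy sketch of that single measurement under assumed conditions (the sampling rate, noise level and threshold are hypothetical; the published system also derives kV, waveform, exposure rate and dose):

      import numpy as np

      def exposure_time(signal, fs, threshold_fraction=0.5):
          """Duration (s) for which a digitized PIN-diode signal exceeds a
          fraction of its peak; `fs` is the sampling rate in Hz."""
          above = signal > threshold_fraction * signal.max()
          return above.sum() / fs

      fs = 100_000                                        # 100 kHz sampling (assumed)
      t = np.arange(0.0, 0.2, 1.0 / fs)
      pulse = ((t > 0.05) & (t < 0.15)).astype(float)     # synthetic 100 ms exposure
      pulse += 0.02 * np.random.default_rng(0).standard_normal(t.size)
      print(f"exposure time = {exposure_time(pulse, fs) * 1000:.1f} ms")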

  8. Computer-aided modeling of aluminophosphate zeolites as packings of building units

    KAUST Repository

    Peskov, Maxim; Blatov, Vladislav A.; Ilyushin, Gregory D.; Schwingenschlögl, Udo

    2012-01-01

    New building schemes of aluminophosphate molecular sieves from packing units (PUs) are proposed. We have investigated 61 framework types discovered in zeolite-like aluminophosphates and have identified important PU combinations using a recently

  9. Critical Vulnerability: Defending the Decisive Point of United States Computer Networked Information Systems

    National Research Council Canada - National Science Library

    Virden, Roy

    2003-01-01

    .... The military's use of computer networked information systems is thus a critical strength. These systems are then critical vulnerabilities because they may lack adequate protection and are open to enemy attack...

  10. Bifidobacterium breve UCC2003 metabolises the human milk oligosaccharides lacto-N-tetraose and lacto-N-neo-tetraose through overlapping, yet distinct pathways

    Science.gov (United States)

    James, Kieran; Motherway, Mary O’Connell; Bottacini, Francesca; van Sinderen, Douwe

    2016-01-01

    In this study, we demonstrate that the prototype B. breve strain UCC2003 possesses specific metabolic pathways for the utilisation of lacto-N-tetraose (LNT) and lacto-N-neotetraose (LNnT), which represent the central moieties of Type I and Type II human milk oligosaccharides (HMOs), respectively. Using a combination of experimental approaches, the enzymatic machinery involved in the metabolism of LNT and LNnT was identified and characterised. Homologs of the key genetic loci involved in the utilisation of these HMO substrates were identified in B. breve, B. bifidum, B. longum subsp. infantis and B. longum subsp. longum using bioinformatic analyses, and were shown to be variably present among other members of the Bifidobacterium genus, with a distinct pattern of conservation among human-associated bifidobacterial species. PMID:27929046

  11. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    Science.gov (United States)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities of the affected patient. To solve the issue of inconsistency and user-dependency in manual lesion measurement on MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithms used in CAD development, MS CAD integration and evaluation in the clinical workflow is technically challenging due to the requirement of high computation rates and memory bandwidth in the recursive nature of the algorithm. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to integrate rapidly into an electronic patient record or any disease-centric health care system.
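
    The per-voxel KNN classification at the heart of the CAD can be sketched in a few lines with scikit-learn; the three-channel intensity features and all numbers below are synthetic stand-ins for the real MRI training data, and the sketch runs on the CPU rather than on the paper's GPU engine:

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(1)
      # Hypothetical training set: one row per voxel, e.g. (T1, T2, FLAIR) intensities.
      healthy = rng.normal([0.5, 0.4, 0.3], 0.05, size=(500, 3))
      lesion = rng.normal([0.4, 0.7, 0.8], 0.05, size=(50, 3))
      X = np.vstack([healthy, lesion])
      y = np.r_[np.zeros(500), np.ones(50)]

      knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)
      new_voxels = rng.normal([0.45, 0.55, 0.55], 0.1, size=(4, 3))
      print(knn.predict_proba(new_voxels)[:, 1])   # per-voxel lesion probability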

  12. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
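
    The first GPU-accelerated step, the spatial filter, is a single matrix-matrix multiplication. A plain-NumPy sketch using a common-average-reference filter as an assumed example filter matrix (the study ran this step in CUDA on 1000-channel blocks):

      import numpy as np

      n_ch, n_samp = 1000, 300                 # e.g. 250 ms of data
      rng = np.random.default_rng(0)
      raw = rng.standard_normal((n_ch, n_samp))

      # Common-average-reference spatial filter expressed as one matrix product:
      W = np.eye(n_ch) - np.full((n_ch, n_ch), 1.0 / n_ch)
      filtered = W @ raw
      print(filtered.shape, abs(filtered.mean(axis=0)).max())   # channel mean removed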

  13. A study on the optimal replacement periods of digital control computer's components of Wolsung nuclear power plant unit 1

    International Nuclear Information System (INIS)

    Mok, Jin Il; Seong, Poong Hyun

    1993-01-01

    Nuclear power plants occasionally trip due to aging-related failures of their instrumentation and control devices. Even a trip of a single nuclear power plant (NPP) causes substantial economic loss and deteriorates public acceptance of nuclear power plants. Therefore, replacement of instrumentation and control devices with proper consideration of the aging effect is necessary in order to prevent inadvertent trips. In this paper we investigated the optimal replacement periods for the control computer's components of Wolsung nuclear power plant Unit 1. We first derived mathematical models of optimal replacement periods for the digital control computer's components of Wolsung NPP Unit 1 and calculated the optimal replacement periods analytically. We compared these periods with the replacement periods currently used at Wolsung NPP Unit 1. The periods used at Wolsung are based not on mathematical analysis but on empirical knowledge. As a consequence, the analytically obtained optimal replacement periods and those used in the field show a slight difference. (Author)
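
    The report does not reproduce its models, but the classic age-replacement formulation conveys the idea: choose the period T that minimizes expected cost per unit time, trading planned-replacement cost against failure cost. A Python sketch under an assumed Weibull failure law (all parameters are hypothetical, not Wolsung's):

      import numpy as np

      def cost_rate(T, beta, eta, c_p, c_f, n=2000):
          """Expected cost per unit time of age replacement at age T under a
          Weibull(shape=beta, scale=eta) failure law (renewal-reward formula):
          a planned replacement costs c_p, an in-service failure costs c_f."""
          t = np.linspace(0.0, T, n)
          R = np.exp(-(t / eta) ** beta)                            # survival function
          cycle = ((R[:-1] + R[1:]) / 2.0).sum() * (t[1] - t[0])    # mean cycle length
          return (c_p * R[-1] + c_f * (1.0 - R[-1])) / cycle

      # Assumed wear-out failures (beta=3, eta=10 years); a trip costs 20x a replacement.
      Ts = np.linspace(1.0, 15.0, 141)
      best = min(Ts, key=lambda T: cost_rate(T, beta=3.0, eta=10.0, c_p=1.0, c_f=20.0))
      print(f"optimal replacement period ~ {best:.1f} years")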

  14. Trinary arithmetic and logic unit (TALU) using savart plate and spatial light modulator (SLM) suitable for optical computation in multivalued logic

    Science.gov (United States)

    Ghosh, Amal K.; Bhattacharya, Animesh; Raul, Moumita; Basuray, Amitabha

    2012-07-01

    The arithmetic logic unit (ALU) is the most important unit in any computing system. Optical computing is becoming popular day by day because of its ultrahigh processing speed and huge data handling capability. For fast processing, an optical TALU compatible with multivalued logic is needed. In this regard we present a trinary arithmetic and logic unit (TALU) in the modified trinary number (MTN) system, which is suitable for optical computation and other applications in multivalued logic systems. Here savart plate and spatial light modulator (SLM) based optoelectronic circuits have been used to exploit the optical tree architecture (OTA) in an optical interconnection network.
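
    The paper's modified trinary number (MTN) system is not specified in the abstract; the closest widely known encoding, balanced ternary with digits {-1, 0, +1}, gives the flavour of multivalued arithmetic. A Python sketch of balanced ternary only, which may differ from MTN in detail:

      def to_balanced_ternary(n: int) -> str:
          """Encode an integer with digits {-1, 0, +1}, printed as '-', '0', '+'."""
          if n == 0:
              return "0"
          digits = []
          while n != 0:
              r = n % 3
              if r == 0:
                  digits.append("0")
              elif r == 1:
                  digits.append("+")
                  n -= 1
              else:                 # r == 2 acts as digit -1 with a carry
                  digits.append("-")
                  n += 1
              n //= 3
          return "".join(reversed(digits))

      for k in (5, -5, 13):
          print(k, "->", to_balanced_ternary(k))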

  15. Computer finite element analysis of stress derived from particular units of torsionally flexible metal coupling

    Directory of Open Access Journals (Sweden)

    Mariusz KUCZAJ

    2010-01-01

    In this article the results of finite element analysis (FEA) of stresses in chosen units of a torsionally flexible metal coupling are presented. The Autodesk Inventor Professional 2009 program is used as the modelling and simulation tool for the particular component loads.

  16. Reduction of computing time for seismic applications based on the Helmholtz equation by Graphics Processing Units

    NARCIS (Netherlands)

    Knibbe, H.P.

    2015-01-01

    The oil and gas industry makes use of computationally intensive algorithms to provide an image of the subsurface. The image is obtained by sending wave energy into the subsurface and recording the signal required for a seismic wave to reflect back to the surface from the Earth interfaces that may have

  17. Development of the Computer Code to Determine an Individual Radionuclides in the Rad-wastes Container for Ulchin Units 3 and 4

    Energy Technology Data Exchange (ETDEWEB)

    Kang, D.W.; Chi, J.H.; Goh, E.O. [Korea Electric Power Research Institute, Taejon (Korea)

    2001-07-01

    A computer program, RASSAY, was developed to accurately evaluate the activities of various nuclides in the rad-waste containers of Ulchin units 3 and 4. This is the final report of the project 'Development of the Computer Code to Determine an Individual Radionuclides in the Rad-wastes Container for Ulchin Units 3 and 4' and includes the following: 1) structure of the computer code RASSAY; 2) an example of surface dose calculation by computer simulation using the MCNP code; 3) methods of sampling and activity measurement of various rad-wastes. (author). 21 refs., 35 figs., 6 tabs.

  18. A dual computed tomography linear accelerator unit for stereotactic radiation therapy: a new approach without cranially fixated stereotactic frames

    International Nuclear Information System (INIS)

    Uematsu, Minoru; Fukui, Toshiharu; Shioda, Akira; Tokumitsu, Hideyuki; Takai, Kenji; Kojima, Tadaharu; Asai, Yoshiko; Kusano, Shoichi

    1996-01-01

    Purpose: To perform stereotactic radiation therapy (SRT) without cranially fixated stereotactic frames, we developed a dual computed tomography (CT) linear accelerator (linac) treatment unit. Methods and Materials: This unit is composed of a linac, CT, and motorized table. The linac and CT are set up at opposite ends of the table, which is suitable for both machines. The gantry axis of the linac is coaxial with that of the CT scanner. Thus, the center of the target detected with the CT can be matched easily with the gantry axis of the linac by rotating the table. Positioning is confirmed with the CT for each treatment session. Positioning and treatment errors with this unit were examined by phantom studies. Between August and December 1994, 8 patients with 11 lesions of primary or metastatic brain tumors received SRT with this unit. All lesions were treated with 24 Gy in three fractions to 30 Gy in 10 fractions to the 80% isodose line, with or without conventional external beam radiation therapy. Results: Phantom studies revealed that treatment errors with this unit were within 1 mm after careful positioning. The position was easily maintained using two tiny metallic balls as vertical and horizontal marks. Motion of patients was negligible using a conventional heat-flexible head mold and dental impression. The overall time for a multiple noncoplanar arcs treatment for a single isocenter was less than 1 h on the initial treatment day and usually less than 20 min on subsequent days. Treatment was outpatient-based and well tolerated with no acute toxicities. Satisfactory responses have been documented. Conclusion: Using this treatment unit, multiple fractionated SRT is performed easily and precisely without cranially fixated stereotactic frames

  19. Vortex particle method in parallel computations on graphical processing units used in study of the evolution of vortex structures

    International Nuclear Information System (INIS)

    Kudela, Henryk; Kosior, Andrzej

    2014-01-01

    Understanding the dynamics and the mutual interaction among various types of vortical motions is a key ingredient in clarifying and controlling fluid motion. In the paper several different cases related to vortex tube interactions are presented. Due to problems with very long computation times on the single processor, the vortex-in-cell (VIC) method is implemented on the multicore architecture of a graphics processing unit (GPU). Numerical results of leapfrogging of two vortex rings for inviscid and viscous fluid are presented as test cases for the new multi-GPU implementation of the VIC method. Influence of the Reynolds number on the reconnection process is shown for two examples: antiparallel vortex tubes and orthogonally offset vortex tubes. Our aim is to show the great potential of the VIC method for solutions of three-dimensional flow problems and that the VIC method is very well suited for parallel computation. (paper)

  20. Sweep efficiency improvement of waterfloods in Steelman Units V and VII through the application of computer models

    Energy Technology Data Exchange (ETDEWEB)

    Woods, W S

    1967-01-01

    The use of a digital computer program as a tool to investigate the position of flood fronts in 2 Steelman units is described. The program simulates a potentiometric analyzer. Several years of historical performance were utilized, and alterations to the model were made until a satisfactory match of the historical performance was obtained. Subsequent to matching the historical performance, future predictions were obtained to evaluate the efficiency of the ultimate sweep configuration in the reservoir. These data are used as directives for improving the operation of the waterfloods. It is suggested that, rather than the complicated and elaborate computer techniques currently in use, the simple techniques applied here provide sufficient economic operating directives.
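
    A potentiometric analyzer is an analogue solver of Laplace's equation for the flow potential; digitally, the same field can be sketched with a 5-point Jacobi iteration. The grid size, well positions and potentials below are arbitrary illustrations, not details of the 1967 program:

      import numpy as np

      n = 25
      p = np.zeros((n, n))                     # potential on a square reservoir grid
      for _ in range(5000):
          p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                  p[1:-1, :-2] + p[1:-1, 2:])
          p[5, 5], p[19, 19] = 1.0, -1.0       # fixed-potential injector and producer
      print(p[12, 12])                         # midway between the wells, ~0 by symmetry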

  1. Intensive-care unit lung infections: The role of imaging with special emphasis on multi-detector row computed tomography

    International Nuclear Information System (INIS)

    Romano, Luigia; Pinto, Antonio; Merola, Stefanella; Gagliardi, Nicola; Tortora, Giovanni; Scaglione, Mariano

    2008-01-01

    Nosocomial pneumonia is the most frequent hospital-acquired infection; as many as 7-41% of mechanically ventilated patients admitted to an intensive-care unit may develop pneumonia. The role of imaging is to identify the presence, location and extent of pulmonary infection and the presence of complications. However, the poor resolution of bedside plain films frequently limits the value of radiography as an accurate diagnostic tool. To date, multi-detector row computed tomography, with its excellent contrast resolution, is the most sensitive modality for evaluating lung parenchyma infections.

  2. Computational design of metal-organic frameworks with paddlewheel-type secondary building units

    Science.gov (United States)

    Schwingenschlögl, Udo; Peskov, Maxim V.; Masghouni, Nejib

    We employ the TOPOS package to study 697 coordination polymers containing paddlewheel-type secondary building units. The underlying nets are analyzed and 3 novel nets are chosen as potential topologies for paddlewheel-type metal-organic frameworks (MOFs). Dicarboxylate linkers are used to build basic structures for novel isoreticular MOF series, aiming at relatively compact structures with a low number of atoms per unit cell. The structures are optimized using density functional theory. Afterwards the Grand Canonical Monte Carlo approach is employed to generate adsorption isotherms for CO2, CO, and CH4 molecules. We utilize the universal force field for simulating the interaction between the molecules and the hosting MOF. The diffusion behavior of the molecules inside the MOFs is analyzed by molecular dynamics simulations.

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  4. Self-Organizing Units in an Interdisciplinary Course for Pervasive Computing Design

    OpenAIRE

    McNair, Lisa; Newswander, Chad; Coupey, Eloise; Dorsa, Ed; Martin, Tom; Paretti, Marie

    2009-01-01

    We conducted a case study of a design course that focused on bringing together students from engineering, industrial design, and marketing to use pervasive computing technologies to design, coordinate, and build a “smart” dorm room for disabled individuals. The class was loosely structured to encourage innovation, critical thinking and interdisciplinarity. In this environment, teams were created, disassembled, and re-created in a self-organizing fashion. With few norms, teams were expected to...

  5. Specialists' meeting on fuel element performance computer modelling, Preston, United Kingdom, 15-19 March 1982

    International Nuclear Information System (INIS)

    1983-03-01

    The 46 papers of the meeting, concerned with computer models of water reactor fuel elements, cover practically all aspects of fuel element behavior in normal operation and in accident conditions. Each session of the meeting produced a critical evaluation of one of the 5 topics into which the subject area had been divided. The session reports summarize the papers and make recommendations for further work. Separate abstracts were prepared for all the papers presented at this meeting.

  6. Ten years of CLIVE (Computer-Aided Learning in Veterinary Education) in the United Kingdom.

    Science.gov (United States)

    Dale, Vicki H M; McConnell, Gill; Short, Andrew; Sullivan, Martin

    2005-01-01

    This paper outlines the work of the CLIVE (Computer-Aided Learning in Veterinary Education) project over a 10-year period, set against the backdrop of changes in education policy and learning technology developments. The consortium of six UK veterinary schools and 14 international Associate Member Schools has been very successful. Sustaining these partnerships requires that the project redefine itself and adapt to cater to the diverse learning needs of today's students and to changing professional and societal needs on an international scale.

  7. Arithmetical unit, interrupt hardware and input-output channel for the computer Bel

    International Nuclear Information System (INIS)

    Fyroe, Karl-Johan

    1969-01-01

    This thesis contains a description of a small general-purpose computer using characters, variable word length and two-address instructions, and working in decimal (NBCD). Three interrupt lines with fixed priority have been realized. The channel is selective and has, in general, access to the entire memory. With slow IO devices, time sharing between the channel and the processor is possible in the central memory buffer area. (author) [fr

  8. Control and management unit for a computation platform at the PANDA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Galuska, Martin; Gessler, Thomas; Kuehn, Wolfgang; Lang, Johannes; Lange, Jens Soeren; Liang, Yutie; Liu, Ming; Spruck, Bjoern; Wang, Qiang [II. Physikalisches Institut, Justus-Liebig-Universitaet Giessen (Germany)

    2010-07-01

    The FAIR facility will provide high-intensity antiproton and heavy ion beams for the PANDA and HADES experiments, leading to very high reaction rates. PANDA is expected to run at 10-20 MHz with a raw data output rate of up to 200 GB/s. A sophisticated data acquisition system is needed in order to select physically relevant events online. For this purpose a network of interconnected compute nodes can be used. Each compute node can be programmed to run various algorithms, such as online particle track recognition for high-level triggering. An ATCA communication shelf provides power, cooling and high-speed interconnections to up to 14 nodes. A single shelf manager supervises and regulates the power distribution and temperature inside the shelf. The shelf manager relies on a local control chip on each node to relay sensor read-outs, provide hardware addresses and power requirements, etc. An IPM controller based on an Atmel microcontroller was designed for this purpose, and a prototype was produced. The necessary software is being developed to allow local communication with the components of the compute node and remote communication with the shelf manager, conforming to the ATCA specification.

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  10. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  12. Effect of field-of-view size on gray values derived from cone-beam computed tomography compared with the Hounsfield unit values from multidetector computed tomography scans.

    Science.gov (United States)

    Shokri, Abbas; Ramezani, Leila; Bidgoli, Mohsen; Akbarzadeh, Mahdi; Ghazikhanlu-Sani, Karim; Fallahi-Sichani, Hamed

    2018-03-01

    This study aimed to evaluate the effect of field-of-view (FOV) size on the gray values derived from cone-beam computed tomography (CBCT) compared with the Hounsfield unit values from multidetector computed tomography (MDCT) scans as the gold standard. A radiographic phantom was designed with 4 acrylic cylinders. One cylinder was filled with distilled water, and the other 3 were filled with 3 types of bone substitute: namely, Nanobone, Cenobone, and Cerabone. The phantom was scanned with 2 CBCT systems using 2 different FOV sizes, and 1 MDCT system was used as the gold standard. The mean gray values (MGVs) of each cylinder were calculated in each imaging protocol. In both CBCT systems, significant differences were noted in the MGVs of all materials between the 2 FOV sizes (P < .05) except for Cerabone in the Cranex3D system. Significant differences were found in the MGVs of each material compared with the others in both FOV sizes for each CBCT system. No significant difference was seen between the Cranex3D CBCT system and the MDCT system in the MGVs of bone substitutes on images obtained with a small FOV. The size of the FOV significantly changed the MGVs of all bone substitutes, except for Cerabone in the Cranex3D system. Both CBCT systems had the ability to distinguish the 3 types of bone substitutes based on a comparison of their MGVs. The Cranex3D CBCT system used with a small FOV had a significant correlation with MDCT results.
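
    The central quantity in such a comparison is the mean gray value over a region of interest placed inside each cylinder. A minimal sketch of that measurement, assuming a reconstructed slice is available as a 2-D NumPy array (the array contents, ROI position and radius are illustrative):

```python
import numpy as np

def mean_gray_value(slice_2d: np.ndarray, cx: int, cy: int, radius: int) -> float:
    """Mean voxel value inside a circular ROI centred on (cx, cy)."""
    ys, xs = np.ogrid[:slice_2d.shape[0], :slice_2d.shape[1]]
    roi = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    return float(slice_2d[roi].mean())

# Example: compare the same cylinder in a small-FOV and a large-FOV scan.
rng = np.random.default_rng(0)
small_fov = rng.normal(600, 30, (256, 256))   # stand-in for a CBCT slice
large_fov = rng.normal(520, 30, (256, 256))   # gray values shift with FOV
print(mean_gray_value(small_fov, 128, 128, 20))
print(mean_gray_value(large_fov, 128, 128, 20))
```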

  13. Computer simulation with TRNSYS for a mobile refrigeration system incorporating a phase change thermal storage unit

    International Nuclear Information System (INIS)

    Liu, Ming; Saman, Wasim; Bruno, Frank

    2014-01-01

    Highlights: • A mobile refrigeration system incorporating phase change thermal storage was simulated using TRNSYS. • A TRNSYS component of a phase change thermal storage unit was created and linked to other components from the TRNSYS library. • The temperature in the refrigerated space can be predicted using this TRNSYS model under various conditions. • A mobile refrigeration system incorporating PCM and an off-peak electrically driven refrigeration unit is feasible. • The phase change material with the lowest melting temperature should be selected. - Abstract: This paper presents a new TRNSYS model of a refrigeration system incorporating phase change material (PCM) for mobile transport. The phase change thermal storage unit (PCTSU) is charged by an off-vehicle refrigeration unit; when discharging, the PCM provides cooling, which is used to cool down the refrigerated space. The advantage of this refrigeration system compared to a conventional system is that it consumes less energy and produces significantly lower greenhouse gas emissions. A refrigeration system for a typical refrigerated van is modelled and simulations are performed with climatic data from four different locations. The main components of the TRNSYS model are Type 88 (cooling load estimation) and Type 300 (new PCTSU component), accompanied by other additional components. The results show that in order to maintain the temperature of the products at −18 °C for 10 h, a total of 250 kg and 390 kg of PCM are required for no door openings and 20 door openings during transportation, respectively. In addition, a parametric study is carried out to evaluate the effects of location, size of the refrigerated space, number of door openings and melting temperature of the PCM on the thermal performance.
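
    The PCM mass figures follow from a simple energy balance: the latent heat stored in the PCTSU must cover the integrated cooling load of the trip. A minimal first-cut sizing sketch (the load and latent-heat values are assumptions for illustration, not parameters of the paper's TRNSYS model):

```python
# First-cut PCM sizing from an energy balance (illustrative values).
trip_hours = 10.0
avg_cooling_load_kw = 1.5        # assumed mean load of the refrigerated van
latent_heat_kj_per_kg = 210.0    # assumed latent heat of a low-temperature PCM

load_kj = avg_cooling_load_kw * trip_hours * 3600.0
pcm_mass_kg = load_kj / latent_heat_kj_per_kg
print(f"required PCM mass: {pcm_mass_kg:.0f} kg")  # ~260 kg, same order as the paper's 250 kg
```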

  14. Computer aided heat transfer analysis in a laboratory scaled heat exchanger unit

    International Nuclear Information System (INIS)

    Gunes, M.

    1998-01-01

    In this study, a laboratory-scale heat exchanger unit and software developed to analyze heat transfer in it, intended especially for use in heat transfer courses, are presented. The analyses carried out in the software, using sample values measured in the heat exchanger, are: (1) determination of the heat transfer rate, logarithmic mean temperature difference and overall heat transfer coefficient; (2) determination of the convection heat transfer coefficient inside and outside the tube and the effect of fluid velocity on these; (3) investigation of the relationship between the Nusselt, Reynolds and Prandtl numbers by using multiple non-linear regression analysis. Results are displayed on the screen graphically.
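
    The first of these analyses is standard: the heat duty follows from an energy balance on one stream, the logarithmic mean temperature difference (LMTD) from the terminal temperature differences, and the overall coefficient from U = Q/(A·ΔTlm). A minimal counter-flow sketch (the measured temperatures, flow rate and area are illustrative):

```python
import math

def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Logarithmic mean temperature difference for a counter-flow exchanger."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if math.isclose(dt1, dt2):
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Illustrative measurements from a water-water test run.
m_dot, cp = 0.25, 4180.0                   # kg/s, J/(kg K), hot stream
t_hi, t_ho, t_ci, t_co = 70.0, 50.0, 20.0, 35.0
area = 0.5                                 # m^2 of tube surface

q = m_dot * cp * (t_hi - t_ho)             # heat transfer rate, W
dt_lm = lmtd_counterflow(t_hi, t_ho, t_ci, t_co)
u = q / (area * dt_lm)                     # overall coefficient, W/(m^2 K)
print(f"Q = {q:.0f} W, LMTD = {dt_lm:.1f} K, U = {u:.0f} W/m^2K")
```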

  15. Our experience in using the whole body computed tomography unit (GE CT/T)

    International Nuclear Information System (INIS)

    Murakawa, Yasuhiro; Morimoto, Mitsuo; Ishigaki, Naoya; Zaitsu, Hiroaki; Kawabata, Kohji

    1983-01-01

    After our hospital installed a head and neck CT unit (SCT-100N) in April 1979, we reported our experience with that equipment in 1980 and 1982. Since the whole body CT unit (GE CT/T) was installed in our hospital in April 1982, the total number of CT examinations has reached approximately three thousand five hundred as of this August. The images obtained seem superior in quality to those from other CT equipment. The most important characteristics of this equipment are that retrospective and prospective reviews of the target image, and coronal and sagittal reconstruction from contiguous transverse axial scans, are possible. We show in this report two experimental CT photograms obtained by target-image review using a microchart phantom, CT photograms of uterine myoma and metastatic thyroid and liver cancers, and contiguous transverse axial scans of a cerebral embolism with their coronal reconstruction. The important remaining problems for this equipment are the absorbed X-ray dose to patients and the scan time. (author)

  16. [Introducing computer units into the reception office as part of the Vrapce Psychiatric Hospital Information System].

    Science.gov (United States)

    Majdancić, Zeljko; Jukić, Vlado; Bojić, Miroslav

    2005-01-01

    Computerized medical records have become a necessity today, because of both the amount of present-day medical data and the need to handle and process them better. In the more than 120 years of the Vrapce Psychiatric Hospital's existence, the most important changes in the working concept of the reception office took place when computer technology was introduced into routine use. The reception office of the Hospital is the vital place where administrative activities intersect with medical care for a patient presenting to the Hospital. The importance of this segment of the Hospital is emphasized by the fact that the reception office is in function and at patients' disposal round the clock, 365 days a year, with a high frequency of patients. The shift from the established way of registering medical data on patient admission in handwriting or, later, typescript, to computer recording was a challenging and demanding task (in terms of hardware, software, networking and education) for the development team as well as for the physicians, because it changed the concept (the logic of the working process) of the previous way of collecting data from the patient (history, status, diagnostic procedures, therapy, etc.). The success of the development and implementation of this project, and the confirmation of its usefulness during four years of practice at Vrapce Psychiatric Hospital, are best illustrated by the fact that other psychiatric hospitals in Croatia have already introduced it, or are introducing it, in their daily practice.

  17. Teacher's Guide for Computational Models of Animal Behavior: A Computer-Based Curriculum Unit to Accompany the Elementary Science Study Guide "Behavior of Mealworms." Artificial Intelligence Memo No. 432.

    Science.gov (United States)

    Abelson, Hal; Goldenberg, Paul

    This experimental curriculum unit suggests how dramatic innovations in classroom content may be achieved through use of computers. The computational perspective is viewed as one which can enrich and transform traditional curricula, act as a focus for integrating insights from diverse disciplines, and enable learning to become more active and…

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann and P. McBride. Edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  1. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical Flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed a modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
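
    The underlying algorithm is compact: for each block of the current frame, scan a search window in the reference frame and keep the displacement that minimises the SAD. A minimal single-threaded NumPy reference of the full-search step is sketched below; the GPU version parallelises exactly this loop over blocks and candidate displacements (names and sizes are illustrative, and this is not the authors' code):

```python
import numpy as np

def best_displacement(cur, ref, bx, by, bsize=8, search=4):
    """Full-search SAD block matching for one block at (by, bx)."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best, best_dxdy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()  # summed absolute difference
            if best is None or sad < best:
                best, best_dxdy = sad, (dx, dy)
    return best_dxdy, best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # content moved down 2, right 3
print(best_displacement(cur, ref, bx=24, by=24))  # expect ((-3, -2), 0)
```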

  2. Radiation dose reduction in a neonatal intensive care unit in computed radiography.

    Science.gov (United States)

    Frayre, A S; Torres, P; Gaona, E; Rivera, T; Franco, J; Molina, N

    2012-12-01

    The purpose of this study was to evaluate, with thermoluminescent dosimetry, the dose received from chest x-rays in neonatal care, and to determine the level of exposure at which quantum noise does not affect diagnostic image quality, in order to reduce the dose to neonates. In pediatric radiology, prematurely born children in particular are highly sensitive to radiation because of the highly mitotic state of their cells; in general, the sensitivity of a tissue to radiation is directly proportional to its rate of proliferation. The sample consisted of 208 neonatal chest x-rays of 12 neonates admitted and treated in a Neonatal Intensive Care Unit (NICU). All the neonates were preterm in the range of 28-34 weeks, with a mean of 30.8 weeks. Entrance Surface Dose (ESD) values for chest x-rays were higher than the diagnostic reference level (DRL) of 50 μGy proposed by the National Radiological Protection Board (NRPB). In order to reduce the dose to neonates, the optimum image quality was achieved by determining the level of ESD at which the noise level does not affect diagnostic image quality. The optimum ESD was estimated from an additional 20 chest x-rays, increasing kVp and reducing mAs until quantum noise affected image quality.

  3. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    Harrisson, G.; Marleau, G.

    2012-01-01

    The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations are the most appropriate self-shielding options to use in this case study. In addition, the 89 groups WIMS-AECL library for slightly enriched uranium and the 172 groups WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)
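
    When quoting such comparisons, the gap in the infinite multiplication factor is conventionally expressed as a reactivity difference in pcm. A minimal sketch of that conversion (the k-values are illustrative, not results from the paper):

```python
def reactivity_diff_pcm(k_test: float, k_ref: float) -> float:
    """Reactivity difference of k_test vs k_ref in pcm: (1/k_ref - 1/k_test) * 1e5."""
    return (1.0 / k_ref - 1.0 / k_test) * 1e5

# Illustrative values: a deterministic code underestimating k-infinity
# relative to a Monte Carlo reference.
k_dragon, k_serpent = 1.25150, 1.25621
print(f"{reactivity_diff_pcm(k_dragon, k_serpent):+.0f} pcm")  # negative => underestimate
```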

  4. Transportable GPU (General Processor Units) chip set technology for standard computer architectures

    Science.gov (United States)

    Fosdick, R. E.; Denison, H. C.

    1982-11-01

    The USAF-developed GPU Chip Set has been utilized by Tracor to implement both USAF and Navy Standard 16-Bit Airborne Computer Architectures. Both configurations are currently being delivered into DOD full-scale development programs. Leadless Hermetic Chip Carrier packaging has facilitated implementation of both architectures on single 4½ x 5 substrates. The CMOS and CMOS/SOS implementations of the GPU Chip Set have allowed both CPU implementations to use less than 3 watts of power each. Recent efforts by Tracor for USAF have included the definition of a next-generation GPU Chip Set that will retain the application-proven architecture of the current chip set while offering the added cost advantages of transportability across ISO-CMOS and CMOS/SOS processes and across numerous semiconductor manufacturers using a newly-defined set of common design rules. The Enhanced GPU Chip Set will increase speed by an approximate factor of 3 while significantly reducing chip counts and costs of standard CPU implementations.

  5. Radiation dose reduction in a neonatal intensive care unit in computed radiography

    International Nuclear Information System (INIS)

    Frayre, A.S.; Torres, P.; Gaona, E.; Rivera, T.; Franco, J.; Molina, N.

    2012-01-01

    The purpose of this study was to evaluate, with thermoluminescent dosimetry, the dose received from chest x-rays in neonatal care, and to determine the level of exposure at which quantum noise does not affect diagnostic image quality, in order to reduce the dose to neonates. In pediatric radiology, prematurely born children in particular are highly sensitive to radiation because of the highly mitotic state of their cells; in general, the sensitivity of a tissue to radiation is directly proportional to its rate of proliferation. The sample consisted of 208 neonatal chest x-rays of 12 neonates admitted and treated in a Neonatal Intensive Care Unit (NICU). All the neonates were preterm in the range of 28–34 weeks, with a mean of 30.8 weeks. Entrance Surface Dose (ESD) values for chest x-rays were higher than the diagnostic reference level (DRL) of 50 μGy proposed by the National Radiological Protection Board (NRPB). In order to reduce the dose to neonates, the optimum image quality was achieved by determining the level of ESD at which the noise level does not affect diagnostic image quality. The optimum ESD was estimated from an additional 20 chest x-rays, increasing kVp and reducing mAs until quantum noise affected image quality. - Highlights: • Entrance surface doses (ESD) in neonates were measured. • Doses measured in neonatal examinations were higher than those reported in the literature. • Reference levels in neonatal studies are required. • Radiation protection optimization was proposed.

  6. Decomposing the Hounsfield unit: probabilistic segmentation of brain tissue in computed tomography.

    Science.gov (United States)

    Kemmling, A; Wersching, H; Berger, K; Knecht, S; Groden, C; Nölte, I

    2012-03-01

    The aim of this study was to present and evaluate a standardized technique for brain segmentation of cranial computed tomography (CT) using probabilistic partial volume tissue maps based on a database of high resolution T1 magnetic resonance images (MRI). Probabilistic tissue maps of white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) were derived from 600 normal brain MRIs (3.0 Tesla, T1-3D-turbo-field-echo) of 2 large community-based population studies (BiDirect and SEARCH Health studies). After partial tissue segmentation (FAST 4.0), MR images were linearly registered to MNI-152 standard space (FLIRT 5.5) with non-linear refinement (FNIRT 1.0) to obtain non-binary probabilistic volume images for each tissue class, which were subsequently used for CT segmentation. From 150 normal cerebral CT scans, a customized reference image in standard space was constructed with iterative non-linear registration to MNI-152 space. The inverse warp of tissue-specific probability maps to CT space (MNI-152 to individual CT) was used to decompose a CT image into tissue-specific components (GM, WM, CSF). Potential benefits and utility of this novel approach with regard to unsupervised quantification of CT images and possible visual enhancement are addressed. Illustrative examples of tissue segmentation in different pathological cases, including perfusion CT, are presented. Automated tissue segmentation of cranial CT images using highly refined tissue probability maps derived from high resolution MR images is feasible. Potential applications include automated quantification of WM in leukoaraiosis, CSF in hydrocephalic patients, GM in neurodegeneration and ischemia, and perfusion maps with separate assessment of GM and WM.
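
    After the inverse warp, the decomposition itself is voxel-wise arithmetic: the CT image is multiplied by each (normalised) tissue probability map, so the three components recompose the original image. A minimal sketch, assuming the CT volume and the three maps are already co-registered NumPy arrays (names and values are illustrative):

```python
import numpy as np

def decompose_ct(ct: np.ndarray, p_gm, p_wm, p_csf):
    """Split a CT volume into tissue-specific components via probability maps."""
    total = p_gm + p_wm + p_csf
    total[total == 0] = 1.0                      # avoid division outside the brain
    gm, wm, csf = (ct * p / total for p in (p_gm, p_wm, p_csf))
    return gm, wm, csf

rng = np.random.default_rng(0)
ct = rng.normal(35.0, 5.0, (4, 4, 4))            # stand-in HU values
p_gm, p_wm = rng.random((4, 4, 4)), rng.random((4, 4, 4))
p_csf = np.clip(1.0 - p_gm - p_wm, 0.0, 1.0)
gm, wm, csf = decompose_ct(ct, p_gm, p_wm, p_csf)
assert np.allclose(gm + wm + csf, ct)            # components recompose the image
```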

  7. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  14. Quantifying morphological parameters of the terminal branching units in a mouse lung by phase contrast synchrotron radiation computed tomography.

    Directory of Open Access Journals (Sweden)

    Jeongeun Hwang

    Full Text Available An effective technique of phase contrast synchrotron radiation computed tomography was established for the quantitative analysis of the microstructures in the respiratory zone of a mouse lung. Heitzman's method was adopted for whole-lung sample preparation, and Canny's edge detector was used for locating the air-tissue boundaries. This technique revealed detailed morphology of the respiratory zone components, including terminal bronchioles and alveolar sacs, with a sufficiently high resolution of 1.74 µm isotropic voxel size. The technique enabled visual inspection of the respiratory zone components and comprehension of their relative positions in three dimensions. To check the method's feasibility for quantitative imaging, morphological parameters such as diameter, surface area and volume were measured and analyzed for sixteen randomly selected terminal branching units, each consisting of a terminal bronchiole and a pair of succeeding alveolar sacs. Four types of asymmetry ratio, concerning alveolar sac mouth diameter, alveolar sac surface area, and alveolar sac volume, were measured. This is the first reported measurement of asymmetry ratios for terminal bronchioles and alveolar sacs, and it is noteworthy that an appreciable degree of branching asymmetry was observed among the alveolar sacs at the terminal end of the airway tree, even though the number of samples was still small. The series of efficient techniques developed and confirmed in this study, from sample preparation to quantification, is expected to contribute to a wider and more exact application of phase contrast synchrotron radiation computed tomography to a variety of studies.
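
    Locating the air-tissue boundaries with Canny's detector is a single call in common image libraries. A minimal sketch using scikit-image on a synthetic stand-in for one reconstructed slice (sigma and the test image are illustrative; real 1.74 µm data would need tuned thresholds):

```python
import numpy as np
from skimage import feature, draw

# Stand-in for one reconstructed phase-contrast CT slice: a bright "airway" disc.
slice_img = np.zeros((128, 128), dtype=float)
rr, cc = draw.disk((64, 64), 30)
slice_img[rr, cc] = 1.0
slice_img += np.random.default_rng(0).normal(0, 0.05, slice_img.shape)

# Canny edge detection marks the air-tissue boundary pixels.
edges = feature.canny(slice_img, sigma=2.0)
print(f"boundary pixels found: {edges.sum()}")
```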

  15. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  16. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, involving the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  18. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and of regular computing shifts, monitoring the services and infrastructure as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  19. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data were then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, hopefully only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS components are now also deployed at CERN, in addition to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  3. Computer-aided design system for a complex of problems on calculation and analysis of engineering and economical indexes of NPP power units

    International Nuclear Information System (INIS)

    Stepanov, V.I.; Koryagin, A.V.; Ruzankov, V.N.

    1988-01-01

    A computer-aided design system for a complex of problems concerning the calculation and analysis of engineering and economic indices of NPP power units is described. The system provides means for the automated preparation and debugging of the database of the software complex, which implements the plotted algorithms in the power unit control system. In addition, the system provides facilities for the automated preparation and registration of technical documentation.

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the availability of more sites so that they can participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  5. Application of an EPID for fast daily dosimetric quality control of a fully computer-controlled treatment unit

    International Nuclear Information System (INIS)

    Dirkx, M.L.P.; Kroonwijk, M.; De Boer, J.C.J.; Heijmen, B.J.M.

    1995-01-01

    The MM50 Racetrack Microtron, suited for sophisticated three-dimensional computer-controlled conformal radiotherapy techniques, is a complex treatment unit in various respects. Therefore, for a number of gantry angles, daily quality control of the absolute output and the profiles of the scanned photon beams is mandatory. A fast method for these daily checks, based on dosimetric measurements with the Philips SRI-100 Electronic Portal Imaging Device, has been developed and tested. Open beams are checked for four different gantry angles; for gantry angle 0, a wedged field is checked as well. The fields are set up one after another under full computer control. Performing and analyzing the measurements takes about ten minutes. The applied EPID has favourable characteristics for dosimetric quality control measurements: absolute measurements reproduce within 0.5% (1 SD) and the reproducibility of a relative (2-D) fluence profile is 0.2% (1 SD). The day-to-day sensitivity stability over a period of a month is 0.6% (1 SD). EPID signals are linear with the applied dose to within 0.2%. The 2-D fluence profile of the 25 MV photon beam of the MM50 is very stable in time: during a period of one year, a maximum fluctuation of 2.6% was observed. Once, a 6% deviation in the cGy/MU value was detected; only because of the morning quality control checks performed with the EPID could erroneous dose delivery to patients be avoided, as there is no interlock in the MM50 system that would have prevented patient treatment. Based on our experiences and on clinical requirements regarding the acceptability of deviations of beam characteristics, a protocol has been developed, including action levels for additional investigations. Studies on the application of the SRI-100 for in vivo dosimetry on the MM50 have been started.
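
    Operationally, such a morning check reduces to comparing each measured quantity against its baseline and flagging deviations beyond an action level. A minimal sketch of that decision logic (the baseline values and the 3% action level are illustrative assumptions; the paper derives its own action levels from clinical requirements):

```python
# Daily output check against baseline with an action level (illustrative numbers).
ACTION_LEVEL = 0.03  # 3% relative deviation triggers investigation (assumed)

baseline_cgy_per_mu = {0: 1.000, 90: 0.998, 180: 1.002, 270: 0.999}  # per gantry angle
measured_cgy_per_mu = {0: 1.004, 90: 0.995, 180: 1.062, 270: 1.001}

for angle, baseline in baseline_cgy_per_mu.items():
    deviation = (measured_cgy_per_mu[angle] - baseline) / baseline
    status = "INVESTIGATE" if abs(deviation) > ACTION_LEVEL else "ok"
    print(f"gantry {angle:3d} deg: {deviation:+.1%} {status}")
```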

  6. Application of an EPID for fast daily dosimetric quality control of a fully computer-controlled treatment unit

    Energy Technology Data Exchange (ETDEWEB)

    Dirkx, M.L.P.; Kroonwijk, M.; De Boer, J.C.J.; Heijmen, B.J.M. [Nederlands Kanker Inst. 'Antoni van Leeuwenhoekhuis', Amsterdam (Netherlands)

    1995-12-01

    The MM50 Racetrack Microtron, suited for sophisticated three-dimensional computer-controlled conformal radiotherapy techniques, is a complex treatment unit in various respects. Therefore, for a number of gantry angles, daily quality control of the absolute output and the profiles of the scanned photon beams is mandatory. A fast method for these daily checks, based on dosimetric measurements with the Philips SRI-100 Electronic Portal Imaging Device, has been developed and tested. Open beams are checked for four different gantry angles; for gantry angle 0, a wedged field is checked as well. The fields are set up one after another under full computer control. Performing and analyzing the measurements takes about ten minutes. The applied EPID has favourable characteristics for dosimetric quality control measurements: absolute measurements reproduce within 0.5% (1 SD) and the reproducibility of a relative (2-D) fluence profile is 0.2% (1 SD). The day-to-day sensitivity stability over a period of a month is 0.6% (1 SD). EPID signals are linear with the applied dose to within 0.2%. The 2-D fluence profile of the 25 MV photon beam of the MM50 is very stable in time: during a period of one year, a maximum fluctuation of 2.6% was observed. Once, a 6% deviation in the cGy/MU value was detected; only because of the morning quality control checks performed with the EPID could erroneous dose delivery to patients be avoided, as there is no interlock in the MM50 system that would have prevented patient treatment. Based on our experiences and on clinical requirements regarding the acceptability of deviations of beam characteristics, a protocol has been developed, including action levels for additional investigations. Studies on the application of the SRI-100 for in vivo dosimetry on the MM50 have been started.

  7. The optimal parameter design for a welding unit of manufacturing industry by Taguchi method and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zahraee, S.M.; Chegeni, A.; Toghtamish, A.

    2016-07-01

    Manufacturing systems include a complicated combination of resources, such as materials, labor, and machines. Hence, when a manufacturing system faces a problem related to the availability of resources, it is difficult to identify the root of the problem accurately and effectively. Managers and engineers in companies are trying to achieve a robust production line with maximum productivity. The main goal of this paper is to design a robust production line, taking productivity into account, in the selected manufacturing industry. This paper presents the application of the Taguchi method along with computer simulation for finding an optimum setting for three controllable factors (the numbers of welding machines, hydraulic machines, and cutting machines) by analyzing the effect of noise factors in a selected manufacturing industry. Based on the final results, the optimal design parameters of the welding unit in the selected manufacturing industry are obtained when factor A is at level 2 and factors B and C are at level 1. Therefore, maximum productivity desirability is achieved when the number of welding machines, hydraulic machines, and cutting machines is 17, 2, and 1, respectively. This paper plays a significant role in designing a robust production line at the lowest cost and in a timely manner, based on the Taguchi method. (Author)
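
    In the Taguchi method, each factor-level combination is scored by a signal-to-noise ratio computed over the noise replications; for a productivity-type response, the larger-the-better form applies. A minimal sketch of that scoring step (the response values are illustrative, not the paper's simulation output):

```python
import math

def sn_larger_is_better(responses):
    """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in responses) / len(responses))

# Illustrative throughput (parts/shift) under noise replications per trial.
trials = {
    "A2 B1 C1 (17 welders, 2 hydraulic, 1 cutter)": [412, 398, 405],
    "A1 B2 C1": [371, 355, 367],
}
for setting, ys in trials.items():
    print(f"{setting}: S/N = {sn_larger_is_better(ys):.2f} dB")
```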

  8. Economic Impacts of Potential Foot and Mouth Disease Agro-terrorism in the United States: A Computable General Equilibrium Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oladosu, Gbadebo A [ORNL; Rose, Adam [University of Southern California, Los Angeles; Bumsoo, Lee [University of Illinois

    2013-01-01

    The foot and mouth disease (FMD) virus has high agro-terrorism potential because it is contagious, can be easily transmitted via inanimate objects and can be spread by wind. An outbreak of FMD in developed countries results in massive slaughtering of animals (for disease control) and disruptions in meat supply chains and trade, with potentially large economic losses. Although the United States has been FMD-free since 1929, the potential of FMD as a deliberate terrorist weapon calls for estimates of the physical and economic damage that could result from an outbreak. This paper estimates the economic impacts of three alternative scenarios of potential FMD attacks using a computable general equilibrium (CGE) model of the US economy. The three scenarios range from a small outbreak successfully contained within a state to a large multi-state attack resulting in the slaughtering of 30 percent of the national livestock. Overall, the value of total output losses in our simulations ranges between $37 billion (0.15% of 2006 baseline economic output) and $228 billion (0.92%). Major impacts stem from the supply constraint on livestock due to massive animal slaughtering. As expected, the economic losses are heavily concentrated in the agriculture and food manufacturing sectors, with losses ranging from $23 billion to $61 billion in the two industries.

  9. The optimal parameter design for a welding unit of manufacturing industry by Taguchi method and computer simulation

    Directory of Open Access Journals (Sweden)

    Seyed Mojib Zahraee

    2016-05-01

    Full Text Available Purpose: Manufacturing systems include a complicated combination of resources, such as materials, labor, and machines. Hence, when a manufacturing system faces a problem related to the availability of resources, it is difficult to identify the root of the problem accurately and effectively. Managers and engineers in companies are trying to achieve a robust production line with maximum productivity. The main goal of this paper is to design a robust production line, taking productivity into account, in the selected manufacturing industry. Design/methodology/approach: This paper presents the application of the Taguchi method along with computer simulation for finding an optimum setting for three controllable factors (the numbers of welding machines, hydraulic machines, and cutting machines) by analyzing the effect of noise factors in a selected manufacturing industry. Findings and Originality/value: Based on the final results, the optimal design parameters of the welding unit in the selected manufacturing industry are obtained when factor A is at level 2 and factors B and C are at level 1. Therefore, maximum productivity desirability is achieved when the number of welding machines, hydraulic machines, and cutting machines is 17, 2, and 1, respectively. This paper plays a significant role in designing a robust production line at the lowest cost and in a timely manner, based on the Taguchi method.

  10. Experience with a mobile data storage device for transfer of studies from the critical care unit to a central nuclear medicine computer

    International Nuclear Information System (INIS)

    Cradduck, T.D.; Driedger, A.A.

    1981-01-01

    The introduction of mobile scintillation cameras has enabled the more immediate provision of nuclear medicine services in areas remote from the central nuclear medicine laboratory. Since a large number of such studies involve the use of a computer for data analysis, the concurrent problem of how to transmit those data to the computer becomes critical. A device is described using hard magnetic discs as the recording media and which can be wheeled from the patient's bedside to the central computer for playback. Some initial design problems, primarily associated with the critical timing which is necessary for the collection of gated studies, were overcome and the unit has been in service for the past two years. The major limitations are the relatively small capacity of the discs and the fact that the data are recorded in list mode. These constraints result in studies having poor statistical validity. The slow turn-around time, which results from the necessity to transport the system to the department and replay the study into the computer before analysis can begin, is also of particular concern. The use of this unit has clearly demonstrated the very important role that nuclear medicine can play in the care of the critically ill patient. The introduction of a complete acquisition and analysis unit is planned so that prompt diagnostic decisions can be made available within the intensive care unit. (author)

  11. Effect of Computer Animation Technique on Students' Comprehension of the "Solar System and Beyond" Unit in the Science and Technology Course

    Science.gov (United States)

    Aksoy, Gokhan

    2013-01-01

    The purpose of this study is to determine the effect of the computer animation technique on the academic achievement of students in the "Solar System and Beyond" unit of the seventh-grade Science and Technology course in primary education. The sample of the study consists of 60 students attending the 7th grade of primary school…

  12. Predicting the stone composition of children preoperatively by Hounsfield unit detection on non-contrast computed tomography.

    Science.gov (United States)

    Altan, Mesut; Çitamak, Burak; Bozaci, Ali Cansu; Güneş, Altan; Doğan, Hasan Serkan; Haliloğlu, Mithat; Tekgül, Serdar

    2017-10-01

    Many studies have been performed on adult patients to reveal the relationship between Hounsfield unit (HU) value and stone composition, but none have focused on childhood. We aimed to predict stone composition from HU properties on pre-intervention non-contrast computed tomography (NCCT) in children. This could help to orient patients towards more successful interventions. Data of 94 children whose pre-intervention NCCT and post-interventional stone analysis were available were included. Stones were divided into three groups: calcium oxalate (CaOx), cystine, and struvite. Besides the spot urine pH value, core HU, periphery HU, and Hounsfield density (HUD) values were measured and the groups were compared statistically. The mean age of the patients was 7 ± 4 (2-17) years and the female/male ratio was 51/43. The mean stone size was 11.7 ± 5 (4-24) mm. There were 50, 38, and 6 patients in the CaOx, cystine, and struvite groups, respectively. The median values for core HU, periphery HU, and mean HU in the CaOx group were significantly higher than the corresponding median values in the cystine and struvite groups. A significant median HUD difference was seen only between the CaOx and cystine groups. No difference was seen between the cystine and struvite groups in terms of HU parameters. To distinguish these groups, mean spot urine pH values were compared and were found to be higher in the struvite group than in the cystine group (Table). The retrospective nature and the small number of patients in some groups are limitations of this study, which also does not include all stone compositions. Our cystine stone rate was higher than the childhood stone composition distribution reported in the literature, because our center is a reference center in a region with high recurrence rates of cystine stones. In fact, the high number of cystine stones helped us to compare them with calcium stones more accurately and became an advantage for this study. NCCT at diagnosis can provide some information for
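
    The statistical core of such a comparison is a nonparametric test on the HU distributions of the stone-composition groups. A minimal sketch with SciPy (the HU values are made-up illustrations, not the patient data):

```python
from scipy.stats import mannwhitneyu

# Illustrative core-HU values by stone composition (made-up numbers).
caox_hu    = [880, 940, 1010, 870, 995, 920]
cystine_hu = [430, 510, 470, 455, 490, 520]

stat, p = mannwhitneyu(caox_hu, cystine_hu, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # small p => median HU differs between groups
```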

  13. Design, Assembly, Integration, and Testing of a Power Processing Unit for a Cylindrical Hall Thruster, the NORSAT-2 Flatsat, and the Vector Gravimeter for Asteroids Instrument Computer

    Science.gov (United States)

    Svatos, Adam Ladislav

    This thesis describes the author's contributions to three separate projects. The bus of the NORSAT-2 satellite was developed by the Space Flight Laboratory (SFL) for the Norwegian Space Centre (NSC) and Space Norway. The author's contributions to the mission were performing unit tests for the components of all the spacecraft subsystems, as well as designing and assembling the flatsat from flight spares. Gedex's Vector Gravimeter for Asteroids (VEGA) is an accelerometer for spacecraft. The author's contributions to this payload were modifying the instrument computer board schematic, designing the printed circuit board, developing and applying test software, and performing thermal acceptance testing of two instrument computer boards. The SFL's cylindrical Hall effect thruster uses a cylindrical configuration and permanent magnets to achieve miniaturization and low power consumption, respectively. The author's contributions were to design, build, and test an engineering model power processing unit.

  14. Use of computational methods for assessing the radiological status of units 1-4 of Kozloduy NPP. Evaluation of activated materials and computation of surface contamination

    International Nuclear Information System (INIS)

    Radovanov, P.

    2015-01-01

    For planning purposes, computational methods are a good approach for predicting the amount of RAW and the radionuclide inventory, as well as dose rates from the equipment, so that personnel exposure is reduced to a minimum (even to zero). In the future, the development of computing software and hardware will result in even better predictions, contributing to more accurate planning of the decommissioning process

  15. Semiempirical and DFT computations of the influence of Tb(III) dopant on unit cell dimensions of cerium(III) fluoride.

    Science.gov (United States)

    Shyichuk, Andrii; Runowski, Marcin; Lis, Stefan; Kaczkowski, Jakub; Jezierski, Andrzej

    2015-01-30

    Several computational methods, both semiempirical and ab initio, were used to study the influence of the amount of dopant on the crystal cell dimensions of CeF3 doped with Tb(3+) ions (CeF3:Tb(3+)). The AM1, RM1, PM3, PM6, and PM7 semiempirical parameterization models were used, with the Sparkle model representing the lanthanide cations in all cases. Ab initio calculations were performed by means of GGA+U/PBE projector augmented wave density functional theory. The computational results agree well with the experimental data. According to both computation and experiment, the crystal cell parameters decrease linearly with increasing amount of the dopant. The computations performed using the Sparkle/PM3 and DFT methods resulted in the best agreement with experiment, with an average deviation of about 1% in both cases. A typical Sparkle/PM3 computation on a 2×2×2 supercell of CeF3:Tb(3+) was about two orders of magnitude faster than the DFT computation on a unit cell of this material. © 2014 Wiley Periodicals, Inc.
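    The linear composition dependence reported here can be illustrated with an ordinary least-squares fit; a minimal Python sketch, with made-up numbers in place of the study's measured lattice parameters:

```python
import numpy as np

# Hypothetical dopant fractions (x in Ce(1-x)Tb(x)F3) and lattice
# parameter a (angstrom); these values are illustrative only,
# not the measured data from the study.
x = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
a = np.array([7.131, 7.124, 7.117, 7.111, 7.104])

# Linear fit a(x) = slope * x + a0 (Vegard-like behavior)
slope, a0 = np.polyfit(x, a, 1)

# Average relative deviation of the fit from the "observed" values
a_fit = slope * x + a0
avg_dev = np.mean(np.abs(a_fit - a) / a) * 100

print(f"a(x) = {a0:.4f} + ({slope:.4f})*x, mean deviation {avg_dev:.3f}%")
```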

  16. Distribution of lithostratigraphic units within the central block of Yucca Mountain, Nevada: A three-dimensional computer-based model, Version YMP.R2.0

    International Nuclear Information System (INIS)

    Buesch, D.C.; Nelson, J.E.; Dickerson, R.P.; Drake, R.M. II; San Juan, C.A.; Spengler, R.W.; Geslin, J.K.; Moyer, T.C.

    1996-01-01

    Yucca Mountain, Nevada is underlain by 14.0 to 11.6 Ma volcanic rocks tilted eastward 3° to 20° and cut by faults that were primarily active between 12.7 and 11.6 Ma. A three-dimensional computer-based model of the central block of the mountain consists of seven structural subblocks composed of six formations and the interstratified bedded tuffaceous deposits. Rocks from the 12.7 Ma Tiva Canyon Tuff, which forms most of the exposed rocks on the mountain, to the 13.1 Ma Prow Pass Tuff are modeled with 13 surfaces. Modeled units represent single formations such as the Pah Canyon Tuff, grouped units such as the combination of the Yucca Mountain Tuff with the superjacent bedded tuff, and divisions of the Topopah Spring Tuff such as the crystal-poor vitrophyre interval. The model is based on data from 75 boreholes, from which a structure contour map at the base of the Tiva Canyon Tuff and isochore maps for each unit are constructed to serve as primary input. Modeling consists of an iterative cycle that begins with the primary structure contour map, from which the isochore values of the subjacent model unit are subtracted to produce the structure contour map at the base of that unit. This new structure contour map forms the input for another cycle of isochore subtraction to produce the next structure contour map. In this method of solids modeling, the model units are represented by surfaces (structure contour maps), and all surfaces are stored in the model; surfaces can be converted into volumes of model units with additional effort. This lithostratigraphic and structural model can be used for (1) storing data from, and planning future, site characterization activities, (2) defining the preliminary geometry of units for design of the Exploratory Studies Facility and potential repository, and (3) performance assessment evaluations
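    The iterative structure-contour/isochore cycle described above amounts to repeatedly subtracting each unit's thickness grid from the surface above it. A minimal Python sketch, in which the grids and unit names are hypothetical stand-ins for the model's actual inputs:

```python
import numpy as np

# Hypothetical 2-D grids (elevation in meters) on a common map raster.
# base_tiva stands in for the primary structure contour map at the
# base of the Tiva Canyon Tuff; the isochore (thickness) grids for
# the subjacent model units are illustrative, not the model's data.
ny, nx = 50, 50
base_tiva = np.full((ny, nx), 1200.0)
isochores = {
    "bedded_tuff": np.full((ny, nx), 15.0),
    "yucca_mtn_tuff": np.full((ny, nx), 30.0),
    "pah_canyon_tuff": np.full((ny, nx), 45.0),
}

# Iterative cycle: subtract each unit's isochore from the structure
# contour map above it to get the structure contour map at its base.
surfaces = {"base_tiva_canyon": base_tiva}
current = base_tiva
for unit, thickness in isochores.items():
    current = current - thickness          # next structure contour map
    surfaces[f"base_{unit}"] = current     # every surface is stored

for name, grid in surfaces.items():
    print(name, float(grid.mean()))
```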

  17. Assessment of Two Desk-Top Computer Simulations Used to Train Tactical Decision Making (TDM) of Small Unit Infantry Leaders

    National Research Council Canada - National Science Library

    Beal, Scott A

    2007-01-01

    Fifty-two leaders in the Basic Non-Commissioned Officer Course (BNCOC) at Fort Benning, Georgia, participated in an assessment of two desk-top computer simulations used to train tactical decision making...

  18. SHIVGAMI : Simplifying tHe titanIc blastx process using aVailable GAthering of coMputational unIts

    Directory of Open Access Journals (Sweden)

    Naman Mangukia

    2017-10-01

    Full Text Available Assembling novel genomes from scratch will remain a never-ending process until the genomes of all living organisms have been covered. Moreover, this de novo approach is employed in RNA-Seq and metagenomics analyses. Functional identification of the scaffolds or transcripts from such draft assemblies is a substantial step that routinely employs the well-known BlastX program, which lets a user search a DNA query against the NCBI non-redundant protein database (NR, ~120 GB). In spite of its multicore-processing option, BlastX is a lengthy process for bulk, long query inputs. Tremendous efforts are constantly being applied to this problem through increased computational power, GPU-based computing, cloud computing, and Hadoop-based approaches, which ultimately require gigantic costs in terms of money and processing. To address this issue, we have developed SHIVGAMI, which automates the entire process using Perl and shell scripts that divide, distribute, and process the input FASTA sequences among the individual computational units according to their available CPU cores. A Linux operating system and installations of the NR database and the BlastX program are prerequisites for each system. The beauty of this stand-alone automation program is that it requires the LAN connection exactly twice: during query distribution and at process completion. In the initial phase, SHIVGAMI divides the FASTA sequences according to each computer's core capability. It then distributes the data, together with small automation scripts that run the BlastX process on the respective computational unit and send the result files back to the master computer. The master computer finally combines and compiles the files into a single result. This simple automation converts a computer lab into a grid without investment in any software, hardware, or man-power. In short, SHIVGAMI is a time- and cost-saving tool for all users, starting from commercial firms
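    The core-proportional query split can be sketched compactly; the following Python illustration shows the idea (SHIVGAMI itself uses Perl and shell scripts, and the host names, core counts, and file names here are hypothetical):

```python
# Split a multi-FASTA query file into per-machine chunks whose sizes are
# proportional to each machine's CPU-core count. Host names and core
# counts are hypothetical; SHIVGAMI itself does this with Perl/shell.
cores = {"lab-pc-01": 4, "lab-pc-02": 8, "lab-pc-03": 12}

def read_fasta(path):
    """Yield (header, sequence) records from a FASTA file."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line, []
            else:
                seq.append(line)
    if header is not None:
        yield header, "".join(seq)

records = list(read_fasta("queries.fasta"))  # hypothetical input file
total = sum(cores.values())
hosts = list(cores)

start = 0
for i, host in enumerate(hosts):
    if i == len(hosts) - 1:
        chunk = records[start:]          # last host takes the remainder
    else:
        share = len(records) * cores[host] // total
        chunk = records[start:start + share]
    start += len(chunk)
    with open(f"{host}.fasta", "w") as out:  # chunk shipped to that host
        for header, seq in chunk:
            out.write(f"{header}\n{seq}\n")
    print(host, len(chunk), "sequences")
```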

  19. Computer aided design of operational units for tritium recovery from Li17Pb83 blanket of a DEMO fusion reactor

    International Nuclear Information System (INIS)

    Malara, C.; Viola, A.

    1995-01-01

    The problem of tritium recovery from the Li17Pb83 blanket of a DEMO fusion reactor is analyzed with the objective of limiting tritium permeation into the cooling water to acceptable levels. To this aim, a mathematical model describing the tritium behavior in the blanket/recovery-unit circuit has been formulated. By solving the model equations, the tritium permeation rate into the cooling water and the tritium inventory in the blanket are evaluated as functions of dimensionless parameters describing the combined effects of the overall resistance to tritium transfer from the Li17Pb83 alloy to the cooling water, the circulation rate of the molten alloy in the blanket/recovery-unit circuit, and the extraction efficiency of the tritium recovery unit. The extraction efficiency is, in turn, evaluated as a function of the operating conditions of the recovery unit. The design of the tritium recovery unit is then optimized on the basis of the above parametric analysis, and the results are reported and discussed for a tritium permeation limit of 10 g/day into the cooling water. 14 refs., 9 figs., 2 tabs

  20. The Dynamic Interplay between Spatialization of Written Units in Writing Activity and Functions of Tools on the Computer

    Science.gov (United States)

    Huh, Joo Hee

    2012-01-01

    I criticize the typewriting model and linear writing structure of Microsoft Word software for writing on the computer. I problematize the bodily movement in writing that the error of the software disregards. In this research, writing activity is viewed as a bodily, spatial, and mediated activity under the premise of the unity of consciousness and…

  1. Demographics of undergraduates studying games in the United States: a comparison of computer science students and the general population

    Science.gov (United States)

    McGill, Monica M.; Settle, Amber; Decker, Adrienne

    2013-06-01

    Our study gathered data to serve as a benchmark of demographics of undergraduate students in game degree programs. Due to the high number of programs that are cross-disciplinary with computer science programs or that are housed in computer science departments, the data is presented in comparison to data from computing students (where available) and the US population. Participants included students studying games at four nationally recognized postsecondary institutions. The results of the study indicate that there is no significant difference between the ratio of men to women studying in computing programs or in game degree programs, with women being severely underrepresented in both. Women, blacks, Hispanics/Latinos, and heterosexuals are underrepresented compared to the US population. Those with moderate and conservative political views and with religious affiliations are underrepresented in the game student population. Participants agree that workforce diversity is important and that their programs are adequately diverse, but only one-half of the participants indicated that diversity has been discussed in any of their courses.

  2. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an autoregressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
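    The two offloaded stages, the spatial filter (one matrix-matrix multiply) and per-channel spectral power, map naturally onto array operations on the GPU. A minimal sketch using CuPy (an assumption; the study wrote NVIDIA CUDA kernels directly) with a plain periodogram substituted for the paper's autoregressive estimator:

```python
import cupy as cp  # drop-in NumPy-like API that executes on the GPU

n_channels, n_samples = 1000, 1200   # illustrative sizes, not the study's
fs = 1200.0                          # hypothetical sampling rate (Hz)

# Raw multichannel block and a spatial filter matrix (here a common-
# average reference filter); random data for illustration only.
raw = cp.random.randn(n_channels, n_samples, dtype=cp.float32)
W = cp.eye(n_channels, dtype=cp.float32) - 1.0 / n_channels

# Stage 1: spatial filtering as one matrix-matrix multiplication.
filtered = W @ raw

# Stage 2: per-channel spectral power. The study used an autoregressive
# estimator; a plain periodogram is substituted here for brevity.
spectrum = cp.fft.rfft(filtered, axis=1)
power = (cp.abs(spectrum) ** 2) / n_samples
freqs = cp.fft.rfftfreq(n_samples, d=1.0 / fs)

# Band power (e.g., mu band, 8-12 Hz) per channel as control features.
band = (freqs >= 8.0) & (freqs <= 12.0)
features = power[:, band].mean(axis=1)
print(features.shape)  # (1000,) -- one feature per channel
```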

  3. Mobile phones and computer keyboards: unlikely reservoirs of multidrug-resistant organisms in the tertiary intensive care unit.

    Science.gov (United States)

    Smibert, O C; Aung, A K; Woolnough, E; Carter, G P; Schultz, M B; Howden, B P; Seemann, T; Spelman, D; McGloughlin, S; Peleg, A Y

    2018-03-02

    Few studies have used molecular epidemiological methods to study transmission links to clinical isolates in intensive care units. Ninety-four multidrug-resistant organisms (MDROs) cultured from routine specimens from intensive care unit (ICU) patients over 13 weeks were stored (11 meticillin-resistant Staphylococcus aureus (MRSA), two vancomycin-resistant enterococci and 81 Gram-negative bacteria). Medical staff personal mobile phones, departmental phones, and ICU keyboards were swabbed and cultured for MDROs; MRSA was isolated from two phones. Environmental and patient isolates of the same genus were selected for whole genome sequencing. On whole genome sequencing, the mobile phone isolates had a pairwise single nucleotide polymorphism (SNP) distance of 183. However, >15,000 core genome SNPs separated the mobile phone and clinical isolates. In a low-endemic setting, mobile phones and keyboards appear unlikely to contribute to hospital-acquired MDROs. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  4. Functional needs which led to the use of digital computing devices in the protection system of 1300 MW units

    International Nuclear Information System (INIS)

    Dalle, H.

    1986-01-01

    After a review of the classical protection functions used in 900 MW power plants, it is concluded that, in order to gain operating margins, it is useful to compute the controlled parameters more finely. These computing needs led to the use of digital computing devices. Taking advantage of the new possibilities, the general performance of the protection system can be improved with regard to availability, safety, and maintenance. In the case of PALUEL, these options led to the realization of SPIN, described here

  5. Prostate contouring uncertainty in megavoltage computed tomography images acquired with a helical tomotherapy unit during image-guided radiation therapy

    International Nuclear Information System (INIS)

    Song, William Y.; Chiu, Bernard; Bauman, Glenn S.; Lock, Michael; Rodrigues, George; Ash, Robert; Lewis, Craig; Fenster, Aaron; Battista, Jerry J.; Van Dyk, Jake

    2006-01-01

    Purpose: To evaluate the image-guidance capabilities of megavoltage computed tomography (MVCT), this article compares the interobserver and intraobserver contouring uncertainty in kilovoltage computed tomography (KVCT) used for radiotherapy planning with MVCT acquired with helical tomotherapy. Methods and Materials: Five prostate-cancer patients were evaluated. Each patient underwent a KVCT and an MVCT study, a total of 10 CT studies. For interobserver variability analysis, four radiation oncologists, one physicist, and two radiation therapists (seven observers in total) contoured the prostate and seminal vesicles (SV) in the 10 studies. The intraobserver variability was assessed by asking all observers to repeat the contouring of 1 patient's KVCT and MVCT studies. Quantitative analysis of contour variations was performed by use of volumes and radial distances. Results: The interobserver and intraobserver contouring uncertainty was larger in MVCT compared with KVCT. Observers consistently segmented larger volumes on MVCT, where the ratio of average prostate and SV volumes was 1.1 and 1.2, respectively. On average (interobserver and intraobserver), the local delineation variability, in terms of standard deviations (Δσ = √(σ²_MVCT − σ²_KVCT)), increased by 0.32 cm from KVCT to MVCT. Conclusions: Although MVCT was inferior to KVCT for prostate delineation, the application of MVCT in prostate radiotherapy remains useful

  6. Polyhedral meshing as an innovative approach to computational domain discretization of a cyclone in a fluidized bed CLC unit

    Directory of Open Access Journals (Sweden)

    Sosnowski Marcin

    2017-01-01

    Full Text Available Chemical Looping Combustion (CLC) is a technology that allows the separation of CO2 generated by the combustion of fossil fuels. The majority of process designs currently under investigation are systems of coupled fluidized beds. Advances in the development of power generation systems using CLC cannot be made without using numerical modelling as a research tool. The primary and critical activity in numerical modelling is the computational domain discretization: it influences the numerical diffusion as well as the convergence of the model, and therefore the overall accuracy of the obtained results. Hence an innovative approach to computational domain discretization using a polyhedral (POLY) mesh is proposed in the paper. This method reduces both the numerical diffusion of the mesh and the time cost of preparing the model for subsequent calculation. The major advantage of a POLY mesh is that each individual cell has many neighbours, so gradients can be much better approximated in comparison to the commonly-used tetrahedral (TET) mesh. POLYs are also less sensitive to stretching than TETs, which results in better numerical stability of the model. Therefore a detailed comparison of numerical modelling results for a subsection of the CLC system using tetrahedral and polyhedral meshes is covered in the paper.

  7. Making Friends in Dark Shadows: An Examination of the Use of Social Computing Strategy Within the United States Intelligence Community Since 9/11

    Directory of Open Access Journals (Sweden)

    Andrew Chomik

    2011-01-01

    Full Text Available The tragic events of 9/11/2001 in the United States highlighted failures in communication and cooperation in the U.S. intelligence community. Agencies within the community failed to “connect the dots” by not collaborating in intelligence gathering efforts, which resulted in severe gaps in data sharing that eventually contributed to the terrorist attack on American soil. Since then, and under the recommendations of the 9/11 Commission Report, the United States intelligence community has made organizational and operational changes to intelligence gathering and sharing, primarily through the creation of the Office of the Director of National Intelligence (ODNI). The ODNI has since introduced a series of web-based social computing tools to be used by all members of the intelligence community, primarily its closed-access wiki entitled “Intellipedia” and its social networking service called “A-Space”. This paper argues that, while these and other social computing tools have been adopted successfully in the intelligence workplace, they have reached a plateau in their use and serve only as complementary tools to otherwise pre-existing information sharing processes. Agencies continue to ‘stove-pipe’ their respective data, a chronic challenge that plagues the community due to bureaucratic policy, technology use, and workplace culture. This paper identifies and analyzes these challenges and recommends improvements in the use of these tools, both in the business processes behind them and in the technology itself. These recommendations aim to provide possible solutions for using these social computing tools as part of a more trusted, collaborative information sharing process.

  8. Development of new process network for gas chromatograph and analyzers connected with SCADA system and Digital Control Computers at Cernavoda NPP Unit 1

    International Nuclear Information System (INIS)

    Deneanu, Cornel; Popa Nemoiu, Dragos; Nica, Dana; Bucur, Cosmin

    2007-01-01

    The continuous monitoring of gas mixture concentrations (deuterium/hydrogen/oxygen/nitrogen) accumulated in the 'Moderator Cover Gas', 'Liquid Control Zone', and 'Heat Transport D2O Storage Tank Cover Gas' systems, as well as the continuous monitoring of the heavy water concentration in light water in the 'Boilers Steam', 'Boilers Blown Down', 'Moderator Heat Exchangers', and 'Recirculated Water System' (sensing any leaks), led to the requirement to develop a new process network connecting the gas chromatograph and analyzers with the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. In 2005, the process network for the gas chromatograph was designed and implemented, connecting the gas chromatograph equipment to the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. This network was later extended to connect the AE13 and AE14 Fourier Transform Infrared (FTIR) analyzers as well. The gas chromatograph equipment measures the concentrations of the gas mixture (deuterium/hydrogen/oxygen/nitrogen) with high accuracy. The FTIR AE13 and AE14 analyzers measure the heavy water concentration in light water in the Boilers Steam, Boilers Blown Down, Moderator Heat Exchangers, and Recirculated Water System, monitoring and signaling any leaks. Both the gas chromatograph equipment and the FTIR analyzers use the new OPC (Object Linking and Embedding for Process Control) technologies available in ABB's VistaNet network for interoperability with automation equipment. This new process network interconnects the ABB chromatograph and the FTIR analyzers with the plant Digital Control Computers using new technology, resulting in increased reliability, improved inspection capability, and improved system safety

  9. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    Science.gov (United States)

    O'Hara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merging of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard for integrating the sciences with real client data, offering solutions for improving patient care.

  10. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers the case in which the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. We also assume that the backorder rate depends on the length of the lead time through the amount of shortages, and we let the backorder rate be a control variable. In addition, we assume that the lead-time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead-time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.

  11. The impact of increased efficiency in the industrial use of energy: A computable general equilibrium analysis for the United Kingdom

    International Nuclear Information System (INIS)

    Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen

    2007-01-01

    The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services; the response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across-the-board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration, and the mechanism through which increased government revenues are recycled back to the economy
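    The rebound measure used in this literature compares the realized energy saving with the saving a pure engineering calculation would predict; a small illustration with hypothetical numbers:

```python
def rebound(efficiency_gain, actual_energy_change):
    """Rebound effect as a percentage.

    efficiency_gain: fractional improvement in energy efficiency
        (engineering expectation: energy use falls by this fraction).
    actual_energy_change: observed fractional change in energy use
        (negative means energy use fell).
    """
    expected_saving = efficiency_gain            # e.g. 0.05 for 5%
    actual_saving = -actual_energy_change        # a 3% fall -> 0.03
    return 100.0 * (expected_saving - actual_saving) / expected_saving

# A 5% efficiency gain that only reduces energy use by 3% implies a
# 40% rebound; a rebound above 100% would be "backfire".
print(rebound(0.05, -0.03))  # 40.0
```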

  12. PURDU-WINCOF: A computer code for establishing the performance of a fan-compressor unit with water ingestion

    Science.gov (United States)

    Leonardo, M.; Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A model for predicting the performance of a multi-spool axial-flow compressor with a fan during operation with water ingestion was developed incorporating several two-phase fluid flow effects as follows: (1) ingestion of water, (2) droplet interaction with blades and resulting changes in blade characteristics, (3) redistribution of water and water vapor due to centrifugal action, (4) heat and mass transfer processes, and (5) droplet size adjustment due to mass transfer and mechanical stability considerations. A computer program, called the PURDU-WINCOF code, was generated based on the model utilizing a one-dimensional formulation. An illustrative case serves to show the manner in which the code can be utilized and the nature of the results obtained.

  13. United States Adolescents' Television, Computer, Videogame, Smartphone, and Tablet Use: Associations with Sugary Drinks, Sleep, Physical Activity, and Obesity.

    Science.gov (United States)

    Kenney, Erica L; Gortmaker, Steven L

    2017-03-01

    To quantify the relationships between youth use of television (TV) and other screen devices, including smartphones and tablets, and obesity risk factors. TV and other screen device use, including smartphones, tablets, computers, and/or videogames, was self-reported by a nationally representative, cross-sectional sample of 24 800 US high school students (2013-2015 Youth Risk Behavior Surveys). Students also reported on health behaviors including sugar-sweetened beverage (SSB) intake, physical activity, sleep, and weight and height. Sex-stratified logistic regression models, adjusting for the sampling design, estimated associations between TV and other screen device use and SSB intake, physical activity, sleep, and obesity. Approximately 20% of participants used other screen devices for ≥5 hours daily. Watching TV ≥5 hours daily was associated with daily SSB consumption (aOR = 2.72, 95% CI: 2.23, 3.32) and obesity (aOR = 1.78, 95% CI: 1.40, 2.27). Using other screen devices ≥5 hours daily was associated with daily SSB consumption (aOR = 1.98, 95% CI: 1.69, 2.32), inadequate physical activity (aOR = 1.94, 95% CI: 1.69, 2.25), and inadequate sleep (aOR = 1.79, 95% CI: 1.54, 2.08). Using smartphones, tablets, computers, and videogames is associated with several obesity risk factors. Although further study is needed, families should be encouraged to limit both TV viewing and newer screen devices. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Portable Brain-Computer Interface for the Intensive Care Unit Patient Communication Using Subject-Dependent SSVEP Identification.

    Science.gov (United States)

    Dehzangi, Omid; Farooq, Muhamed

    2018-01-01

    A major predicament for Intensive Care Unit (ICU) patients is inconsistent and ineffective means of communication. Patients rated most communication sessions as difficult and unsuccessful, which in turn can cause distress, unrecognized pain, anxiety, and fear. As such, we designed a portable BCI system for ICU communication (BCI4ICU) optimized to operate effectively in an ICU environment. The system utilizes a wearable EEG cap coupled with an Android app, designed on a mobile device, that serves as the visual stimulus and data processing module. Furthermore, to overcome the challenges that BCI systems face today in real-world scenarios, we propose a novel subject-specific Gaussian Mixture Model (GMM)-based training and adaptation algorithm. First, we incorporate subject-specific information in the training phase of the SSVEP identification model using GMM-based training and adaptation, and we evaluate the subject-specific models against data from other subjects. Subsequently, from the GMM discriminative scores, we generate the transformed vectors that are passed to our predictive model. Finally, the adapted mixture mean scores of the subject-specific GMMs are utilized to generate high-dimensional supervectors. Our experimental results demonstrate that the proposed system achieved 98.7% average identification accuracy, which is promising for providing effective and consistent communication to patients in intensive care.
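    Subject-specific GMM scoring of SSVEP features can be sketched with scikit-learn; the feature layout, dimensions, and class labels below are assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical training features: one array of shape (n_epochs, n_dims)
# per SSVEP stimulus frequency, extracted from the subject's EEG.
n_dims = 8
train = {
    "7.5Hz": rng.normal(0.0, 1.0, (60, n_dims)),
    "10Hz": rng.normal(0.5, 1.0, (60, n_dims)),
    "12Hz": rng.normal(-0.5, 1.0, (60, n_dims)),
}

# One subject-specific GMM per stimulus class.
models = {}
for label, feats in train.items():
    models[label] = GaussianMixture(n_components=3, covariance_type="diag",
                                    random_state=0).fit(feats)

# Identification: pick the class whose GMM gives the highest
# average log-likelihood for a new epoch.
epoch = rng.normal(0.5, 1.0, (1, n_dims))
scores = {label: gmm.score(epoch) for label, gmm in models.items()}
print(max(scores, key=scores.get))

# A GMM "supervector" (stacked component means), in the spirit of the
# high-dimensional supervectors fed to the paper's predictive model.
supervector = np.concatenate([m.means_.ravel() for m in models.values()])
print(supervector.shape)
```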

  15. Mode selectivity in the intramolecular cyclization of ketenimines bearing N-acylimino units: a computational and experimental study.

    Science.gov (United States)

    Alajarín, Mateo; Sánchez-Andrada, Pilar; Vidal, Angel; Tovar, Fulgencio

    2005-02-18

    The mode selectivity in the intramolecular cyclization of a particular class of ketenimines bearing N-acylimino units has been studied by ab initio and DFT calculations. In the model compounds, the carbonyl carbon atom and the keteniminic nitrogen atom are linked either by a vinylic or by an o-phenylene tether. Two cyclization modes have been analyzed: the [2+2] cycloaddition furnishing compounds with an azeto[2,1-b]pyrimidinone moiety, and a 6π-electrocyclic ring closure leading to compounds enclosing a 1,3-oxazine ring. The [2+2] cycloaddition takes place via a two-step process with formation of a zwitterionic intermediate, which has been characterized as a cross-conjugated mesomeric betaine. The 6π-electrocyclic ring closure occurs via a transition state whose pseudopericyclic character has been established on the basis of its magnetic properties, geometry, and NBO analysis. The 6π-electrocyclic ring closure is energetically favored over the [2+2] cycloaddition, although the [2+2] cycloadducts are the thermodynamically controlled products. A quantitative kinetic analysis predicts that the 1,3-oxazines would be the kinetically controlled products, but they should transform rapidly and completely into the [2+2] cycloadducts at room temperature. In the experimental study, a number of N-acylimino-ketenimines, in which both reactive functions are supported on an o-phenylene scaffold, were successfully synthesized in three steps starting from 2-azidobenzoyl chloride. These compounds rapidly convert into azeto[2,1-b]quinazolin-8-ones in moderate to good yields as a result of a formal [2+2] cycloaddition.

  16. Design of a linear detector array unit for high energy x-ray helical computed tomography and linear scanner

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Tae; Park, Jong Hwan; Kim, Gi Yoon [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Dong Geun [Medical Imaging Department, ASTEL Inc., Seongnam (Korea, Republic of); Park, Shin Woong; Yi, Yun [Dept. of Electronics and Information Eng, Korea University, Seoul (Korea, Republic of); Kim, Hyun Duk [Research Center, Luvantix ADM Co., Ltd., Daejeon (Korea, Republic of)

    2016-11-15

    A linear detector array unit (LdAu) was proposed and designed for high-energy X-ray 2-D and 3-D imaging systems for industrial non-destructive testing. Specifically for 3-D imaging, a helical CT with a 15 MeV linear accelerator and a curved detector is proposed. The arc-shaped detector can be formed by many LdAus, all of which are arranged to face the focal spot once the source-to-detector distance is fixed for the application. An LdAu is composed of 10 modules, and each module has 48 channels of CdWO4 (CWO) blocks and Si PIN photodiodes with 0.4 mm pitch. This modular design was made for easy manufacturing and maintenance. Through Monte Carlo simulation, the CWO detector thickness was optimally determined to be 17 mm. The silicon PIN photodiodes were designed as 48-channel arrays, fabricated with NTD (neutron transmutation doping) wafers of high resistivity, and showed excellent leakage-current properties, below 1 nA at 10 V reverse bias. To minimize low-voltage breakdown, the edges of the active layer and the guard ring were designed with a curved shape. The data acquisition system was also designed and fabricated as three independent functional boards: a sensor board, a capture board, and a communication board to a PC. This paper describes the design of the detectors (CWO blocks and Si PIN photodiodes) and the three-board data acquisition system, together with the simulation results.

  17. Comparison of adult and child radiation equivalent doses from 2 dental cone-beam computed tomography units.

    Science.gov (United States)

    Al Najjar, Anas; Colosi, Dan; Dauer, Lawrence T; Prins, Robert; Patchell, Gayle; Branets, Iryna; Goren, Arthur D; Faber, Richard D

    2013-06-01

    With the advent of cone-beam computed tomography (CBCT) scans, there has been a transition toward these scans replacing traditional radiographs for orthodontic diagnosis and treatment planning. Children represent a significant proportion of orthodontic patients. Similar CBCT exposure settings are predicted to result in higher equivalent doses to the head and neck organs in children than in adults. The purpose of this study was to measure the difference in equivalent organ doses from different scanners under similar settings in children compared with adults. Two phantom heads were used, representing a 33-year-old woman and a 5-year-old boy. Optically stimulated dosimeters were placed at 8 key head and neck organs, and equivalent doses to these organs were calculated after scanning. The manufacturers' predefined exposure settings were used; one scanner had a pediatric preset option, the other did not. Scanning the child phantom head with the adult settings resulted in significantly higher equivalent radiation doses in the child than in the adult, with child-to-adult equivalent dose ratios averaging from 117% up to 341%. Readings at the cervical spine level were significantly lower, down to 30% of the adult equivalent dose. When the pediatric preset was used for the scans, the ratio of equivalent dose to the child mandible and thyroid decreased. CBCT scans with adult settings on both phantom heads resulted in higher radiation doses to the head and neck organs in the child than in the adult. In practice, this might result in excessive radiation to children scanned with default adult settings. Collimation should be used when possible to reduce the radiation dose to the patient. While CBCT scans offer a valuable tool, their use should be justified on a specific case-by-case basis. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  18. Computer says 2.5 litres--how best to incorporate intelligent software into clinical decision making in the intensive care unit?

    Science.gov (United States)

    Lane, Katie; Boyd, Owen

    2009-01-01

    What will be the role of the intensivist when computer-assisted decision support reaches maturity? Celi's group reports that Bayesian theory can predict a patient's fluid requirement on day 2 in 78% of cases, based on data collected on day 1 and the known associations between those data, based on observations in previous patients in their unit. There are both advantages and limitations to the Bayesian approach, and this test study identifies areas for improvement in future models. Although such models have the potential to improve diagnostic and therapeutic accuracy, they must be introduced judiciously and locally to maximize their effect on patient outcome. Efficacy is thus far undetermined, and these novel approaches to patient management raise new challenges, not least medicolegal ones.

  19. A COMPUTATIONAL FLUID DYNAMICS ANALYSIS OF AIR FLOW THROUGH A TELECOM BACK-UP UNIT POWERED BY AN AIR-COOLED PROTON EXCHANGE MEMBRANE FUEL CELL

    DEFF Research Database (Denmark)

    Gao, Xin; Berning, Torsten; Kær, Søren Knudsen

    2016-01-01

    Proton exchange membrane fuel cells (PEMFCs) are currently being commercialized for various applications ranging from automotive to stationary, such as powering telecom back-up units. In PEMFCs, oxygen from air is internally combined with hydrogen to form water and produce electricity and heat. This product heat has to be effectively removed from the fuel cell, and while automotive fuel cells are usually liquid-cooled using a secondary coolant loop, similar to internal combustion engines, stationary fuel cell systems as used for telecom back-up applications often rely on excess air fed to the fuel cell cathode to remove the heat. Thereby, the fuel cell system is much simpler and cheaper, while the fuel cell performance is substantially lower compared to automotive fuel cells. This work presents a computational fluid dynamics analysis of the heat management of an air-cooled fuel-cell-powered telecom back-up unit.

  20. Internal fit of three-unit fixed dental prostheses produced by computer-aided design/computer-aided manufacturing and the lost-wax metal casting technique assessed using the triple-scan protocol.

    Science.gov (United States)

    Dahl, Bjørn E; Dahl, Jon E; Rønold, Hans J

    2018-02-01

    Suboptimal adaptation of fixed dental prostheses (FDPs) can lead to technical and biological complications. It is unclear whether the computer-aided design/computer-aided manufacturing (CAD/CAM) technique improves the adaptation of FDPs compared with FDPs made using the lost-wax and metal casting technique. Three-unit FDPs were manufactured by CAD/CAM based on a digital impression of a typodont model. The FDPs were made from one of five materials: pre-sintered zirconium dioxide; hot isostatically pressed zirconium dioxide; lithium disilicate glass-ceramic; milled cobalt-chromium; and laser-sintered cobalt-chromium. The FDPs made using the lost-wax and metal casting technique were used as the reference. The fit of the FDPs was analysed using the triple-scan method and was evaluated both for single abutments and for the three-unit FDPs. The average cement space varied between 50 μm and 300 μm. Insignificant differences in internal fit were observed between the CAD/CAM-manufactured FDPs, and none of the FDPs had cement spaces that were statistically significantly different from those of the reference FDP. For all FDPs, the cement space in a marginal band 0.5-1.0 mm from the preparation margin was less than 100 μm. The milled cobalt-chromium FDP had the closest fit. The cement space of FDPs produced using the CAD/CAM technique was similar to that of FDPs produced using the conventional lost-wax and metal casting technique. © 2017 Eur J Oral Sci.

  1. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers, and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in every way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  2. Identification and control of factors influencing flow-accelerated corrosion in HRSG units using computational fluid dynamics modeling, full-scale air flow testing, and risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pietrowski, Ronald L. [The Consolidated Edison Company of New York, Inc., New York, NY (United States)

    2010-11-15

    In 2009, Consolidated Edison's East River heat recovery steam generator units 10 and 20 both experienced economizer tube failures which forced each unit offline. Extensive inspections indicated that the primary failure mechanism was flow-accelerated corrosion (FAC). The inspections revealed evidence of active FAC in all 7 of the economizer modules, with the most advanced stages of degradation being noted in center modules. Analysis determined that various factors were influencing and enabling this corrosion mechanism. Computational fluid dynamics and full-scale air flow testing showed very turbulent feedwater flow prevalent in areas of the modules corresponding with the pattern of FAC damage observed through inspection. It also identified preferential flow paths, with higher flow velocities, in certain tubes directly under the inlet nozzles. A FAC risk analysis identified more general susceptibility to FAC in the areas experiencing damage due to feedwater pH, operating temperatures, local shear fluid forces, and the chemical composition of the original materials of construction. These, in combination, were the primary root causes of the failures. Corrective actions were identified, analyzed, and implemented, resulting in equipment replacements and repairs. (orig.)

  3. Motor unit number index (MUNIX) derivation from the relationship between the area and power of surface electromyogram: a computer simulation and clinical study

    Science.gov (United States)

    Miralles, Francesc

    2018-06-01

    Objective. The motor unit number index (MUNIX) is a technique based on the surface electromyogram (sEMG) that is gaining acceptance as a method for monitoring motor neuron loss, because it is reliable and produces less discomfort than other electrodiagnostic techniques with the same intended purpose. MUNIX assumes that the relationship between the area of the sEMG obtained at increasing levels of muscle activation and the values of a variable called the 'ideal case motor unit count' (ICMUC), defined as the product of the ratio between the area and power of the compound muscle action potential (CMAP) and that of the sEMG, is described by a decreasing power function. Nevertheless, the reason for this behaviour is unknown. The objective of this work is to investigate whether the definition of MUNIX could derive from more basic properties of the sEMG. Approach. The CMAP and sEMG epochs obtained at different levels of muscle activation from (1) the abductor pollicis brevis (APB) muscle of persons with and without carpal tunnel syndrome (CTS) and (2) a previously published computer model of sEMG generation were analysed. Main results. MUNIX reflects the power relationship existing between the area and power of an sEMG. The exponent of this function was smaller in patients with motor CTS than in the rest of the subjects. The analysis of the relationship between the area and power of an sEMG could aid in distinguishing a MUNIX reduction due to motoneuron loss from one due to a loss of muscle fibre. Significance. MUNIX is derived from the relationship between the area and power of an sEMG. This relationship changes when there is a loss of motor units (MUs), which partially explains the diagnostic sensitivity of MUNIX. Although the reasons for this change are unknown, it could reflect an increase in the proportion of MUs of great amplitude.
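    A minimal sketch of the MUNIX computation as commonly published (the ICMUC formula and the 20 mV·ms reference area follow the usual convention in the MUNIX literature, assumed here rather than taken from this paper), with synthetic stand-ins for recorded signals:

```python
import numpy as np

def area_power(signal, dt):
    """Rectified area (mV*ms) and mean power (mV^2) of a signal."""
    area = np.sum(np.abs(signal)) * dt
    power = np.mean(signal ** 2)
    return area, power

dt = 0.1  # ms per sample (hypothetical sampling step)
rng = np.random.default_rng(1)

# Stand-ins for a recorded CMAP and sEMG epochs at increasing effort.
cmap = rng.normal(0, 2.0, 300)
epochs = [rng.normal(0, 0.5 * k, 3000) for k in range(1, 8)]

cmap_area, cmap_power = area_power(cmap, dt)

icmuc, areas = [], []
for ep in epochs:
    a, p = area_power(ep, dt)
    # Conventional definition:
    # ICMUC = (CMAP power * sEMG area) / (CMAP area * sEMG power)
    icmuc.append((cmap_power * a) / (cmap_area * p))
    areas.append(a)

# MUNIX: fit the power function ICMUC = c * area^b (linear in log-log)
# and evaluate it at the reference sEMG area of 20 mV*ms.
b, log_c = np.polyfit(np.log(areas), np.log(icmuc), 1)
munix = np.exp(log_c) * 20.0 ** b
print(f"MUNIX ~ {munix:.1f}")
```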

  4. Computer experimental analysis of the CHP performance of a 100 kWe SOFC Field Unit by a factorial design

    Science.gov (United States)

    Calì, M.; Santarelli, M. G. L.; Leone, P.

    Gas Turbine Technologies (GTT) and Politecnico di Torino, both located in Torino (Italy), have been involved in the design and installation of a SOFC laboratory in order to analyse the operation, in cogenerative configuration, of the CHP 100 kWe SOFC Field Unit, built by Siemens-Westinghouse Power Corporation (SWPC), which at present (May 2005) is starting its operation and will supply electric and thermal power to the GTT factory. In order to take best advantage of the analysis of the on-site operation, and especially to correctly design the scheduled experimental tests on the system, we developed a mathematical model and ran a simulated experimental campaign, applying a rigorous statistical approach to the analysis of the results. The aim of this work is the computer experimental analysis, through a statistical methodology (2^k factorial experiments), of the CHP 100 performance. First, the mathematical model was calibrated with the results acquired during the first CHP 100 demonstration at EDB/ELSAM in Westerwoort. Then, the simulated tests were performed in the form of a computer experimental session, with measurement uncertainties simulated by perturbations imposed on the model's independent variables. The statistical methodology used for the computer experimental analysis is factorial design (Yates' technique): using the ANOVA technique, the effects of the main independent variables (air utilization factor Uox, fuel utilization factor UF, internal fuel and air preheating, and anodic recycling flow rate) have been investigated in a rigorous manner. The analysis accounts for the effects of the parameters on stack electric power, recovered thermal power, single-cell voltage, cell operating temperature, consumed fuel flow, and steam-to-carbon ratio. Each main effect and interaction effect of the parameters is shown, with particular attention to the generated electric power and the recovered stack heat.
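    How main effects are estimated in a 2^k factorial design with coded ±1 levels can be shown in a few lines; the four factors mirror those listed above, but the responses are hypothetical:

```python
import itertools
import numpy as np

# Full 2^4 design matrix with coded levels -1/+1 for the four factors
# (U_ox, U_F, preheating, anodic recycle) -- responses are hypothetical.
factors = ["U_ox", "U_F", "preheat", "recycle"]
X = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

rng = np.random.default_rng(2)
# Stand-in response, e.g. stack electric power (kW), one run per row.
y = 100 + 4*X[:, 0] - 3*X[:, 1] + 1.5*X[:, 2] + rng.normal(0, 0.5, len(X))

# With coded +/-1 levels the main effect of factor j is
# mean(y | x_j = +1) - mean(y | x_j = -1) = (2/N) * sum_i(x_ij * y_i),
# i.e. twice the regression coefficient of the coded factor.
effects = 2.0 / len(y) * X.T @ y
for name, eff in zip(factors, effects):
    print(f"main effect of {name}: {eff:+.2f}")
```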

  5. Impact on breast cancer diagnosis in a multidisciplinary unit after the incorporation of mammography digitalization and computer-aided detection systems.

    Science.gov (United States)

    Romero, Cristina; Varela, Celia; Muñoz, Enriqueta; Almenar, Asunción; Pinto, Jose María; Botella, Miguel

    2011-12-01

    The purpose of this article is to evaluate the impact on the diagnosis of breast cancer of implementing full-field digital mammography (FFDM) in a multidisciplinary breast pathology unit and, 1 year later, the addition of a computer-aided detection (CAD) system. A total of 13,453 mammograms performed between January and July of the years 2004, 2006, and 2007 were retrospectively reviewed using conventional mammography, digital mammography, and digital mammography plus CAD techniques. Mammograms were classified into two subsets: screening and diagnosis. Variables analyzed included cancer detection rate, rate of in situ carcinoma, tumor size at detection, biopsy rate, and positive predictive value of biopsy. FFDM increased the cancer detection rate, albeit not statistically significantly. The detection rate of in situ carcinoma increased significantly using FFDM plus CAD compared with conventional technique (36.8% vs 6.7%; p = 0.05 without Bonferroni statistical correction) for the screening dataset. Relative to conventional mammography, tumor size at detection decreased with digital mammography (T1, 61.5% vs 88%; p = 0.018) and with digital mammography plus CAD (T1, 79.7%; p = 0.03 without Bonferroni statistical correction). Biopsy rates in the general population increased significantly using CAD (10.6/1000 for conventional mammography, 14.7/1000 for digital mammography, and 17.9/1000 for digital mammography plus CAD; p = 0.02). The positive predictive value of biopsy decreased slightly, but not significantly, for both subsets. The incorporation of new techniques has improved the performance of the breast unit by increasing the overall detection rates and earlier detection (smaller tumors), both leading to an increase in interventionism.

  6. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    International Nuclear Information System (INIS)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-01-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images, and a comparison of the image quality resulting from simulation on the GPU and CPU, are evaluated in this paper. The simulations were run on a CPU in serial condition, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons were calculated simultaneously. Results show that the simulation times on the GPU were significantly accelerated compared to the CPU: the simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  7. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Suprijadi [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Haryanto, Freddy [Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia)

    2016-03-11

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images, and a comparison of the image quality resulting from simulation on the GPU and CPU, are evaluated in this paper. The simulations were run on a CPU in serial condition, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons were calculated simultaneously. Results show that the simulation times on the GPU were significantly accelerated compared to the CPU: the simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  8. Study on motion artifacts in coronary arteries with an anthropomorphic moving heart phantom on an ECG-gated multidetector computed tomography unit

    International Nuclear Information System (INIS)

    Greuter, Marcel J.W.; Dorgelo, Joost; Tukker, Wim G.J.; Oudkerk, Matthijs

    2005-01-01

    Acquisition time plays a key role in the quality of cardiac multidetector computed tomography (MDCT) and is directly related to the rotation time of the scanner. The purpose of this study is to examine the influence of heart rate and a multisector reconstruction algorithm on the image quality of the coronary arteries of an anthropomorphic, adjustable moving heart phantom on an ECG-gated MDCT unit. The heart phantom and a coronary artery phantom were used on an MDCT unit with a rotation time of 500 ms. The movement of the heart was determined by analysis of images taken at different phases; the results indicate that the movement of the coronary arteries of the heart phantom is comparable to that in a clinical setting. The influence of the heart rate on image quality and artifacts was determined by analysis at several heart rates between 40 and 80 bpm, with the movement of the heart synchronized using a retrospective ECG-gated acquisition protocol. The resulting reformatted volume-rendering images of the moving heart and the coronary arteries were qualitatively compared as a function of heart rate. The evaluation was performed on three independent series by two independent radiologists for the image quality of the coronary arteries and the presence of artifacts, and shows that at heart rates above 50 bpm the influence of motion artifacts in the coronary arteries becomes apparent. In addition, the influence of a dedicated multisector reconstruction technique on image quality was determined. The results show that the image quality of the coronary arteries is not related to the heart rate alone, and that the influence of the multisector reconstruction technique becomes significant above 70 bpm. Therefore, this study shows that one cannot specify an actual acquisition time per heart cycle, but only a mathematical acquisition time. (orig.)

  9. Detection of Cement Leakage After Vertebroplasty with a Non-Flat-Panel Angio Unit Compared to Multidetector Computed Tomography - An Ex Vivo Study

    International Nuclear Information System (INIS)

    Baumann, Clemens; Fuchs, Heiko; Westphalen, Kerstin; Hierholzer, Johannes

    2008-01-01

    The purpose of this study was to investigate the detection of cement leakage after vertebroplasty using angiographic computed tomography (ACT) on a non-flat-panel angio unit compared to multidetector computed tomography (MDCT). Vertebroplasty was performed in 19 of 33 cadaver vertebrae (23 thoracic and 10 lumbar segments). In the angio suite, ACT (190°; 1.5° per image) was performed to obtain volumetric data. Another volumetric data set of the specimens was obtained by MDCT using a standard algorithm. Nine multiplanar reconstructions in standardized axial, coronal, and sagittal planes of every vertebra were generated from both data sets. Images were evaluated on the basis of a nominal scale with 18 criteria, comprising osseous properties (e.g., integrity of the end plate) and cement distribution (e.g., presence of intraspinal cement). MDCT images were regarded as the gold standard and analyzed by two readers in consensus mode. Rotational acquisitions were analyzed by six blinded readers. Results were correlated with the gold standard using Cohen's κ-coefficient analysis, and interobserver variability was also calculated. Correlation with the gold standard ranged from no correlation (osseous margins of the neuroforamen, κ = 0.008) to intermediate (trace of the vertebroplasty cannula; κ = 0.615) for criteria referring to osseous morphology. However, there was an excellent correlation for the criteria referring to cement distribution, with κ values ranging from 0.948 (paravertebral cement distribution) to 0.972 (intraspinal cement distribution). With a minimum of κ = 0.768 ('good correlation') and a maximum of κ = 0.91 ('excellent'), interobserver variability was low. In conclusion, ACT in an angio suite without a flat-panel detector depicts cement leakage after vertebroplasty as well as MDCT does. However, the method does not provide sufficient depiction of osseous morphology.

  10. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  11. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    Science.gov (United States)

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-12-01

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
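
    Because the abstract turns on an algorithmic idea, a drastically simplified, CPU-only sketch of synchronization likelihood may help: both signals are time-delay embedded, and SL measures how often recurrences in one signal coincide with recurrences in the other. All parameters here (embedding dimension, lag, reference fraction) and the omission of the paper's 2-dimensional time-shifting and GPU kernels are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def embed(x, m=3, lag=1):
            # Time-delay embedding: each row is a state vector [x(t), x(t+lag), ...].
            n = len(x) - (m - 1) * lag
            return np.array([x[i:i + (m - 1) * lag + 1:lag] for i in range(n)])

        def sync_likelihood(x, y, m=3, lag=1, p_ref=0.05):
            # Simplified synchronization likelihood between two equal-length signals.
            X, Y = embed(x, m, lag), embed(y, m, lag)
            dx = np.linalg.norm(X[:, None] - X[None, :], axis=2)
            dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
            # Per-row thresholds chosen so a fraction p_ref of pairs count as recurrences.
            ex = np.quantile(dx, p_ref, axis=1, keepdims=True)
            ey = np.quantile(dy, p_ref, axis=1, keepdims=True)
            rx, ry = dx < ex, dy < ey
            # Of the time pairs where x recurs, the fraction where y also recurs.
            return (rx & ry).sum() / max(rx.sum(), 1)

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10 * np.pi, 500)
        a = np.sin(t) + 0.1 * rng.standard_normal(t.size)
        b = np.sin(t + 0.3) + 0.1 * rng.standard_normal(t.size)  # coupled with a
        c = rng.standard_normal(t.size)                          # independent of a
        print(sync_likelihood(a, b), sync_likelihood(a, c))      # high vs. near p_ref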

  12. Use of computed tomography scout film and Hounsfield unit of computed tomography scan in predicting the radio-opacity of urinary calculi in plain kidney, ureter and bladder radiographs.

    Science.gov (United States)

    Chua, Michael E; Gomez, Odina R; Sapno, Lorelei D; Lim, Steve L; Morales, Marcelino L

    2014-07-01

    The objective of this study is to determine the diagnostic utility of the computed tomography (CT) scout film with an optimal non-contrast helical CT scan Hounsfield unit (HU) value in predicting the appearance of urinary calculi on the plain kidneys, ureter, urinary bladder (KUB) radiograph. A prospective cross-sectional study was executed and data were collected from June 2007 to June 2012 at a tertiary hospital. The included subjects were diagnosed to have urolithiasis; the HU value, CT-scout film appearance, and KUB radiograph appearance were recorded independently by two observers. Univariate logistic analysis with a receiver operating characteristic curve was generated to determine the best cut-off HU value for urolithiases not identified on the CT-scout film but determined radio-opaque on KUB X-ray. Subsequently, its sensitivity, specificity, predictive values and likelihood ratios were calculated. Statistical significance was set at a P value of 0.05 or less. Two hundred and three valid cases were included. 73 out of 75 urolithiases detected on the CT-scout film were identified on the plain radiograph and determined as radio-opaque. The best HU cut-off value for predicting radiographic appearance was 630 HU, above which urinary calculi not seen on the CT-scout film were nevertheless radio-opaque on the KUB X-ray. The set HU cut-off showed good accuracy, with an overall sensitivity of 82.2%, specificity of 96.9%, a positive predictive value of 96.5% and a negative predictive value of 83.5%. Urolithiases identified on the CT-scout film were also seen as radiopaque on the KUB radiograph, while stones not visible on the CT-scout film but above the optimal HU cut-off value of 630 are also likely to be radiopaque.
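
    The abstract's decision rule and accuracy metrics are straightforward to express in code. In this sketch only the 630 HU cutoff and the metric definitions come from the abstract; the function names and the confusion-matrix counts are invented for illustration.

        def predict_radio_opaque(seen_on_scout: bool, hounsfield: float,
                                 cutoff: float = 630.0) -> bool:
            # Rule from the abstract: visible on the CT-scout film, or denser
            # than the cutoff, predicts a radio-opaque stone on the KUB X-ray.
            return seen_on_scout or hounsfield > cutoff

        def diagnostics(tp, fp, fn, tn):
            # Standard 2x2 confusion-matrix metrics.
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Hypothetical counts for illustration only (not the study's raw data).
        print(diagnostics(tp=82, fp=3, fn=18, tn=97))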

  13. Generalidades de un Sistema de Monitorización Informático para Unidades de Cuidados Intensivos Generalities of a Computer Monitoring System for Intensive Care Units

    Directory of Open Access Journals (Sweden)

    María del Carmen Tellería Prieto

    2012-02-01

    Full Text Available The use of information and communication technologies in the health sector gains greater importance every day. This paper presents the general requirements from which a computer system for monitoring critically ill patients across the different critical-care services is being developed, although it is initially aimed at intensive care units. The work is part of a branch project run by the National Direction of Medical Emergencies of the Cuban Ministry of Public Health, with the participation of emergency and intensive-care physicians from all over the country. The system is implemented by health informatics staff in Pinar del Río, complying with the regulations established by the National Direction of Informatics and the Softel company. The monitoring system will facilitate the capture, management, processing and storage of the information generated for each patient, integrating all the information handled in the service. Emphasis is placed on medical and nursing progress notes, the prescription of treatments, and the clinical evaluation of patients, which will allow more effective therapeutic decision-making. Among the general requirements from which the monitoring system will be developed, it has been specified that the system be modular, simple and intuitive to use, and implemented with free software.

  14. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  15. 26 CFR 1.861-8 - Computation of taxable income from sources within the United States and from other sources and...

    Science.gov (United States)

    2010-04-01

    ..., such as expenses in the nature of day-to-day management, and paragraph (e)(5) of this section generally... different principles. A corporate taxpayer's deduction for a state franchise tax that is computed on the...

  16. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  17. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future

  18. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  19. Cranial computed tomography findings in patients admitted to the emergency unit of Hospital Universitario Cajuru; Achados tomograficos de pacientes submetidos a tomografia de cranio no pronto-socorro do Hospital Universitario Cajuru

    Energy Technology Data Exchange (ETDEWEB)

    Lara Filho, Lauro Aparecido; Omar, Samir Sari; Biguelini, Rodrigo Foletto; Santos, Rony Augusto de Oliveira, E-mail: samir176@gmail.com [Pontificia Universidade Catolica do Parana (PUCPR), Curitiba, PR (Brazil). Curso de Medicina

    2013-05-15

    Objective: to identify and analyze the prevalence of cranial computed tomography findings in patients admitted to the emergency unit of Hospital Universitario Cajuru. Materials and methods: cross-sectional study analyzing 200 consecutive non-contrast-enhanced cranial computed tomography reports of patients admitted to the emergency unit of Hospital Universitario Cajuru. Results: alterations were observed in 76.5% of the patients. Among them, the following findings were most frequently observed: extracranial soft tissue swelling (22%), bone fracture (16.5%), subarachnoid hemorrhage (15%), nonspecific hypodensity (14.5%), paranasal sinuses opacification (11.5%), diffuse cerebral edema (10.5%), subdural hematoma (9.5%), cerebral contusion (8.5%), hydrocephalus (8%), retractile hypodensity/gliosis/encephalomalacia (8%). Conclusion: the authors recognize that the most common findings in emergency departments reported in the literature are similar to the ones described in the present study. This information is important so that professionals can recognize the main changes to be identified at cranial computed tomography, and for future planning and hospital screening aimed at efficiency and improvement in services. (author)

  20. Teachers' Perceptions of the Use of Computer Assisted Language Learning to Develop Children's Reading Skills in English as a Second Language in the United Arab Emirates

    Science.gov (United States)

    Al-Awidi, Hamed Mubarak; Ismail, Sadiq Abdulwahed

    2014-01-01

    This study investigated ESL teachers' perceptions regarding the use of Computer Assisted Language Learning (CALL) in teaching reading to children. A random sample of 145 teachers participated in the study by completing a survey developed by the researchers. To explore the situation in depth, 16 teachers were later interviewed. Results indicated…

  1. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit ( 100,200,300 ) for computing a sequence of output images on basis of a sequence of input images, comprises: a motion estimation unit ( 102 ) for computing a motion vector field on basis of the input images; a quality measurement unit ( 104 ) for computing a value of a

  2. Computation code TEP 1 for automated evaluation of technical and economic parameters of operation of WWER-440 nuclear power plant units

    International Nuclear Information System (INIS)

    Zadrazil, J.; Cvan, M.; Strimelsky, V.

    1987-01-01

    The TEP 1 program is used for automated evaluation of the technical and economic parameters of nuclear power plant units with WWER-440 reactors. It is an application program developed by the Research Institute for Nuclear Power Plants in Jaslovske Bohunice for the KOMPLEX-URAN 2M information system, delivered by the USSR to the V-2 nuclear power plant in Jaslovske Bohunice and to the plant in Dukovany. The TEP 1 program is written in FORTRAN IV and its operation has two parts: first, the evaluation of the technical and economic parameters of operation for a calculation interval of 10 min; second, the control of the calculation procedure, follow-up on input data, determination of technical and economic parameters over a longer time interval, and data printout and storage. The TEP 1 program was tested at the first unit of the V-2 power plant, and no serious faults appeared in the evaluation of the technical and economic parameters. A modification of the TEP 1 program for the Dukovany nuclear power plant is now being tested on the first unit of that plant. (Z.M.)

  3. 圖書館事業與交流/Location Choice Patterns of Computer Use in the United States/Zhixian Yi; Philip Q. Yang

    Directory of Open Access Journals (Sweden)

    Zhixian Yi; Philip Q. Yang

    2009-10-01

    Full Text Available There is little research on the patterns of computer use outside home or work. This study examines who is more or less likely to use a computer at a location other than work or home by using the 2002-2004 General Social Survey data and logistic regression analysis. Demographic variables (such as age, race, marital status, and region), socioeconomic status (such as education and family income), self-employment, and satisfaction with financial situation are significant predictors of computer use at locations other than home or work, but occupation and gender make no difference. The findings will help institutions to provide computer infrastructure support and services for customers in public places, and especially help schools and libraries to improve computer labs and services.

  4. Computer simulation of deasphalting vacuum residues in a pilot unit; Simulacao computacional de desasfaltacao de residuo de vacuo realizada em unidade piloto

    Energy Technology Data Exchange (ETDEWEB)

    Concha, Viktor Oswaldo Cardenas; Quirino, Filipe Augusto Barral; Koroisgi, Erika Tomie; Rivarola, Florencia Wisnivesky Rocca; Maciel, Maria Regina Wolf; Maciel Filho, Rubens [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Faculdade de Engenharia Quimica; Medina, Lilian Carmen; Barros, Ricardo Soares de [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES)

    2008-07-01

    In the context of the national petroleum industry, it is of interest to maintain the production of type I paraffinic lubricant oil, which implies identifying new loads to ensure the feeding of the existing units. Therefore, it is important to carry out a careful characterization of the oils, defining their potential for fuel, asphalt and lubricant production. To add information on basic oils, closer to industrial oils, to the characterization and evaluation of petroleum for lubricants carried out by PETROBRAS/CENPES, a deasphalting pilot unit was built in the Laboratory of Separation Process Development (LDPS) of UNICAMP/FEQ. In this work, the deasphalting of a vacuum residue of Brazilian petroleum is simulated using the Aspen Plus® simulator, in order to remove asphaltenes, resins and other heavy components from the vacuum residue. The simulations were carried out considering the configuration of the pilot plant, evaluating extraction under near-critical operational conditions applied to a petroleum, using propane as the solvent. The extraction efficiency and the solvent power were evaluated for variations in temperature, pressure and the solvent/feed ratio, in order to maximize the yield of deasphalted oil (DAO), that is, greater asphaltene removal. (author)

  5. Computing News

    CERN Multimedia

    McCubbin, N

    2001-01-01

    We are still five years from the first LHC data, so we have plenty of time to get the computing into shape, don't we? Well, yes and no: there is time, but there's an awful lot to do! The recently-completed CERN Review of LHC Computing gives the flavour of the LHC computing challenge. The hardware scale for each of the LHC experiments is millions of 'SpecInt95' (SI95) units of CPU power and tens of PetaBytes of data storage. PCs today are about 20-30 SI95, and expected to be about 100 SI95 by 2005, so it's a lot of PCs. This hardware will be distributed across several 'Regional Centres' of various sizes, connected by high-speed networks. How to realise this in an orderly and timely fashion is now being discussed in earnest by CERN, Funding Agencies, and the LHC experiments. Mixed in with this is, of course, the GRID concept...but that's a topic for another day! Of course hardware, networks and the GRID constitute just one part of the computing. Most of the ATLAS effort is spent on software development. What we ...

  6. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Earl C. [IDC Research Inc., Framingham, MA (United States); Conway, Steve [IDC Research Inc., Framingham, MA (United States); Dekate, Chirag [IDC Research Inc., Framingham, MA (United States)

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  7. Significance of findings of both emergency chest X-ray and thoracic computed tomography routinely performed at the emergency unit in 102 polytrauma patients. A prospective study

    International Nuclear Information System (INIS)

    Grieser, T.; Buehne, K.H.; Haeuser, H.; Bohndorf, K.

    2001-01-01

    Purpose: To evaluate prospectively whether and to what extent both thoracic computed tomography (Tx-CT) and supine X-ray of the chest (Rx-Tx) are able to show additional findings that are therapeutically relevant. Patients and Methods: According to a fixed study protocol, we performed Rx-Tx and Tx-CT in 102 consecutive, haemodynamically stable polytrauma patients (mean age, 41.2 yrs; age range, 12-93 yrs). Findings of therapeutic relevance drawn from both Tx-CT and Rx-Tx, and urgent interventions indicated by an attending trauma team, were documented immediately on a standardized evaluation sheet. Any change in the patient's management different from routine life-saving procedures, and any therapeutic intervention done in the emergency room or elsewhere (operating theatre, angiographic facility), were considered therapeutically relevant. Results: Of 102 patients, 43 (42.2%) had a total of 51 therapeutically relevant findings. Rx-Tx alone yielded 23 relevant findings (45.1%) in 23 patients (22.5%). Of these, Tx-CT showed additional important findings in 7 patients (30.4%). When Tx-CT alone is considered, it revealed 22 new findings of therapeutic relevance (43.2%) in 20 patients (46.5%). Altogether, Tx-CT was able to show 30 relevant findings in 27 patients, i.e., there was a therapeutic benefit for 26.5% of all polytrauma patients included. Most frequently, there was a need for chest-tube insertion (n=29). Conclusions: Haemodynamically stable polytrauma patients may profit from computed tomography of the chest when therapeutically relevant thoracic injuries are looked for or early therapeutic interventions are to be checked. However, chest X-ray should remain the front-line screening method because of its quick feasibility and availability. (orig.)

  8. Computer Simulation Western

    International Nuclear Information System (INIS)

    Rasmussen, H.

    1992-01-01

    Computer Simulation Western is a unit within the Department of Applied Mathematics at the University of Western Ontario. Its purpose is the development of computational and mathematical methods for practical problems in industry and engineering and the application and marketing of such methods. We describe the unit and our efforts at obtaining research and development grants. Some representative projects will be presented and future plans discussed. (author)

  9. mPUMA: a computational approach to microbiota analysis by de novo assembly of operational taxonomic units based on protein-coding barcode sequences.

    Science.gov (United States)

    Links, Matthew G; Chaban, Bonnie; Hemmingsen, Sean M; Muirhead, Kevin; Hill, Janet E

    2013-08-15

    Formation of operational taxonomic units (OTU) is a common approach to data aggregation in microbial ecology studies based on amplification and sequencing of individual gene targets. The de novo assembly of OTU sequences has been recently demonstrated as an alternative to widely used clustering methods, providing robust information from experimental data alone, without any reliance on an external reference database. Here we introduce mPUMA (microbial Profiling Using Metagenomic Assembly, http://mpuma.sourceforge.net), a software package for identification and analysis of protein-coding barcode sequence data. It was developed originally for Cpn60 universal target sequences (also known as GroEL or Hsp60). Using an unattended process that is independent of external reference sequences, mPUMA forms OTUs by DNA sequence assembly and is capable of tracking OTU abundance. mPUMA processes microbial profiles both in terms of the direct DNA sequence as well as in the translated amino acid sequence for protein coding barcodes. By forming OTUs and calculating abundance through an assembly approach, mPUMA is capable of generating inputs for several popular microbiota analysis tools. Using SFF data from sequencing of a synthetic community of Cpn60 sequences derived from the human vaginal microbiome, we demonstrate that mPUMA can faithfully reconstruct all expected OTU sequences and produce compositional profiles consistent with actual community structure. mPUMA enables analysis of microbial communities while empowering the discovery of novel organisms through OTU assembly.
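
    To make the dual DNA/protein OTU idea concrete, here is a toy sketch (not mPUMA's actual pipeline, which forms OTUs by de novo assembly): reads are grouped once by exact DNA sequence and once by their translated protein sequence, so synonymous nucleotide variants collapse into a single protein-level OTU. Biopython is an assumed dependency, and the sequences are invented.

        from collections import Counter
        from Bio.Seq import Seq  # assumed dependency: pip install biopython

        reads = [
            "ATGGCTCGT",  # Met-Ala-Arg
            "ATGGCACGT",  # Met-Ala-Arg (synonymous variant)
            "ATGGTTCGT",  # Met-Val-Arg
        ]

        # Nucleotide-level OTUs: every distinct DNA sequence counts separately.
        dna_otus = Counter(reads)

        # Protein-level OTUs: synonymous variants collapse into one unit.
        protein_otus = Counter(str(Seq(r).translate()) for r in reads)

        print(dna_otus)      # 3 OTUs
        print(protein_otus)  # 2 OTUs: {'MAR': 2, 'MVR': 1}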

  10. Analyses of insulin-potentiating fragments of human growth hormone by computative simulation; essential unit for insulin-involved biological responses.

    Science.gov (United States)

    Ohkura, K; Hori, H

    2000-07-01

    We analyzed the structural features of insulin-potentiating fragments of human growth hormone by computative simulations. The peptides were designated from the N-terminus sequences of the hormone at positions 1-15 (hGH(1-15); H2N-Phe1-Pro2-Thr3-Ile4-Pro5-Leu6-Ser7-Arg8-Leu9-Phe10-Asp11-Asn12-Ala13-Met14-Leu15-COOH), 6-13 (hGH(6-13)), 7-13 (hGH(7-13)) and 8-13 (hGH(8-13)), which enhanced insulin-producing hypoglycemia. In these peptide molecules, ionic bonds were predicted to form between the 8th arginyl residue and the 11th aspartic residue, and this intramolecular interaction caused the formation of a macrocyclic structure containing the tetrapeptide Arg8-Leu9-Phe10-Asp11. The peptides at positions 6-10 (hGH(6-10)), 9-13 (hGH(9-13)) and 10-13 (hGH(10-13)) did not lead to a macrocyclic formation in the molecules, and had no effect on the insulin action. Although beta-Ala13hGH(1-15), in which the 13th alanine was replaced by a beta-alanyl residue, had no effect on insulin-producing hypoglycemia, the macrocyclic region (Arg8-Leu9-Phe10-Asp11) was observed by the computative simulation. An isothermal vibration analysis of both beta-Ala13hGH(1-15) and the hGH(1-15) peptide suggested that the beta-Ala13hGH(1-15) molecule was more flexible than hGH(1-15); the C-terminal carboxyl group of Leu15 easily accessed Arg8 and inhibited the ionic bond formation between Arg8 and Asp11 in beta-Ala13hGH(1-15). The peptide hGH(8-13) dose-dependently enhanced the insulin-involved fatty acid synthesis in rat white adipocytes, and stabilized the C6-NBD-PC (1-acyl-2-[6-[(7-nitro-2,1,3-benzoxadiazol-4-yl)amino]-caproyl]-sn-glycero-3-phosphatidylcholine) model membranes. In contrast, hGH(9-13) had no effect on either the fatty acid synthesis or the membrane stability. In the same culture conditions as the fatty acid synthesis assay, hGH(8-13) had no effect on the transcript levels of glucose transporter isoforms (GLUT 1, 4) and hexokinase isozymes (HK I, II) in rat white adipocytes. Judging from

  11. Effect of object location on the density measurement and Hounsfield conversion in a NewTom 3G cone beam computed tomography unit.

    Science.gov (United States)

    Lagravère, M O; Carey, J; Ben-Zvi, M; Packota, G V; Major, P W

    2008-09-01

    The purpose of this study was to determine the effect of an object's location in a cone beam CT imaging chamber (CBCT-NewTom 3G) on its apparent density, and to develop a linear conversion coefficient from Hounsfield units (HU) to material density (g cm(-3)) for the NewTom 3G scanner. Three cylindrical models of materials with different densities were constructed and scanned at five different locations in a NewTom 3G volume scanner. The average HU value for each model at each location was obtained using two different types of software. Next, five cylinders of different known densities were scanned at the exact centre of a NewTom 3G scanner. The collected data were analysed using the same two types of software to determine a standard linear relationship between density and HU for each type of software. The location of an object within the CBCT scanner had no statistically significant effect on the determination of its density. The linear relationship between the density of an object and the HU of a scan was rho = 0.001(HU) + 1.19, with an R^2 value of 0.893 (where density, rho, is measured in g cm(-3)). This equation is to be used in the range between 0.4456 g cm(-3) and 1.42 g cm(-3). A linear relationship can be used to determine the density of materials (in the density range of bone) from the HU values of a CBCT scan. This relationship is not affected by the object's location within the scanner itself.
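
    The reported regression is simple to apply directly. This sketch encodes the published coefficients and the stated validity range; the function name and the error handling are additions for illustration, not part of the study.

        def newtom_hu_to_density(hu: float) -> float:
            # Linear fit from the study: rho = 0.001*HU + 1.19 (R^2 = 0.893),
            # valid for densities between 0.4456 and 1.42 g/cm^3.
            rho = 0.001 * hu + 1.19
            if not 0.4456 <= rho <= 1.42:
                raise ValueError(f"{rho:.3f} g/cm^3 is outside the validated range")
            return rho

        print(newtom_hu_to_density(-300.0))  # 0.89 g/cm^3, inside the validated range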

  12. [Contribution of computer-aided design for the conception of custom-made implants in Pectus Excavatum surgical treatment. Experience of the Nantes plastic surgery unit].

    Science.gov (United States)

    Tilliet Le Dentu, H; Lancien, U; Sellal, O; Duteille, F; Perrot, P

    2018-02-01

    Pectus excavatum is the most common congenital chest malformation and is a common reason for consultation in plastic surgery. Our approach is most often to fill the depression with a custom-made silicone prosthesis. The objective of this work was to evaluate the interest of computer-aided design (CAD) of implants compared to the conventional plaster-mold method. We collected all the cases of custom-made silicone implants used to treat funnel chests in our plastic surgery department. The quality of the results was evaluated by the patients, and blindly by the surgical team using photographs and standardized surveys. The preoperative delays, the operating time and length of hospital stay, the number of surgical revisions, and the postoperative surgical outcomes were recorded. Between 1990 and 2016, we designed 29 silicone thoracic implants in our department. Before 2012, implants were made from plaster chest molds (n=13). After this date, implants were designed by CAD (n=16). Patients rated their results as "good" or "excellent" in 77% and 86% of cases, respectively, in the plaster and CAD groups. The surgical team's ratings for CAD implant reconstructions were better than for the plaster group: 8.17 versus 6.96 (P=0.001). CAD implants were significantly less detectable than implants in the plaster group. The operating time was reduced in the CAD group: 60.2 versus 74.7 minutes in the plaster group (P=0.04), as was the length of hospitalization: 3.5 versus 5.3 days (P=0.01). There were no significant differences between the two groups in terms of postoperative complications. The management of pectus excavatum with a custom-made silicone implant is a minimally invasive method that provides good cosmetic results. The design of these implants is facilitated and qualitatively improved by CAD. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  13. Internode data communications in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
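
    The mechanism described, buffering messages that arrive before their target process exists and handing them over at initialization, can be sketched as follows. Class and method names are illustrative, not taken from the patent.

        class MessagingUnit:
            # Holds early-arriving messages for processes that are not yet running.
            def __init__(self, expected_processes):
                # At compute-node boot time: one preallocated buffer per process.
                self.buffers = {pid: [] for pid in expected_processes}

            def receive(self, pid, message):
                # Store messages arriving before the target process initializes.
                self.buffers[pid].append(message)

            def drain(self, pid):
                # On process initialization: hand over the buffered messages so the
                # process can copy them into its own buffer in main memory.
                early, self.buffers[pid] = self.buffers[pid], []
                return early

        mu = MessagingUnit(expected_processes=[0, 1])
        mu.receive(1, "halo exchange, step 0")  # arrives before process 1 exists
        print(mu.drain(1))                      # process 1 collects it at startup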

  14. Aliens in the promised land? Keynote address for the 1986 National Gathering of the United Church of Christ's Coalition for Lesbian/Gay Concerns.

    Science.gov (United States)

    Comstock, G D

    The following article is a condensed version of the keynote address given at the 1986 National Gathering of the Lesbian/Gay Coalition of the United Church of Christ (UCC). Problems encountered by lesbians and gay men in organized religion, especially within the liberal tradition, are identified by a method of inquiry developed by Christian educator John Westerhoff for assessing egalitarianism within institutions. The story of Queen Vashti from the Book of Esther in Hebrew scripture, and the emerging tradition of coming-out experiences by lesbians and gay men, provide the norm and model for declaring independence from denominations that neglect the concerns of lesbians and gay men and for constructing religious alternatives.

  15. Interface unit

    NARCIS (Netherlands)

    Keyson, D.V.; Freudenthal, A.; De Hoogh, M.P.A.; Dekoven, E.A.M.

    2001-01-01

    The invention relates to an interface unit comprising at least a display unit for communication with a user, which is designed for being coupled with a control unit for at least one or more parameters in a living or working environment, such as the temperature setting in a house, which control unit

  16. Computational force, mass, and energy

    International Nuclear Information System (INIS)

    Numrich, R.W.

    1997-01-01

    This paper describes a correspondence between computational quantities commonly used to report computer performance measurements and mechanical quantities from classical Newtonian mechanics. It defines a set of three fundamental computational quantities that are sufficient to establish a system of computational measurement. From these quantities, it defines derived computational quantities that have analogous physical counterparts. These computational quantities obey three laws of motion in computational space. The solutions to the equations of motion, with appropriate boundary conditions, determine the computational mass of the computer. Computational forces, with magnitudes specific to each instruction and to each computer, overcome the inertia represented by this mass. The paper suggests normalizing the computational mass scale by picking the mass of a register on the CRAY-1 as the standard unit of mass

  17. Design of Computer Experiments

    DEFF Research Database (Denmark)

    Dehlendorff, Christian

    The main topic of this thesis is design and analysis of computer and simulation experiments, dealt with in six papers and a summary report. Simulation and computer models have in recent years received increasingly more attention due to their increasing complexity and usability. Software packages make the development of rather complicated computer models using predefined building blocks possible. This implies that the range of phenomena that are analyzed by means of a computer model has expanded significantly. As the complexity grows, so does the need for efficient experimental designs and analysis methods, since complex computer models are often expensive to use in terms of computer time. The choice of performance parameter is an important part of the analysis of computer and simulation models, and Paper A introduces a new statistic for waiting times in health care units. The statistic

  18. Decision unit program

    International Nuclear Information System (INIS)

    Madjar, N.; Pastor, C.; Chambon, B.; Drain, D.; Giorni, A.; Dauchy, A.

    1981-01-01

    A decision unit has been built to simplify the electronic logic set-up in multi-detector experiments. This unit, designed around fast memories used as decision-making tables, replaces conventional logic modules. Nine inputs are provided for receiving the fast detector signals (charged particles, gammas, neutrons, ...). Fifteen independent outputs allow the identification of the chosen events among the 2^9 possible events. A CAMAC interface between the unit and the computer, or a manual-control auxiliary module, is used to load the pattern of the chosen events into the memory.
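
    The memory-as-decision-table idea maps directly onto a 512-entry lookup: the nine detector bits form the address, and each entry holds a 15-bit mask marking which of the selected event classes fired. A minimal sketch follows; the trigger condition programmed into the table is invented.

        # Decision table: 2**9 = 512 entries, one 15-bit output mask per input
        # pattern, mirroring the fast-memory lookup scheme of the unit.
        table = [0] * 512

        # Example programming step (in the real unit, loaded via the CAMAC
        # interface): let output bit 0 mark coincidences of inputs 0 and 1.
        for pattern in range(512):
            if (pattern & 0b11) == 0b11:
                table[pattern] |= 1 << 0

        def decide(inputs: int) -> int:
            # One memory lookup replaces a rack of conventional logic modules.
            return table[inputs & 0x1FF]

        print(bin(decide(0b011)))  # 0b1 -> event class 0 recognized
        print(bin(decide(0b001)))  # 0b0 -> no selected event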

  19. 77 FR 20047 - Certain Computer and Computer Peripheral Devices and Components Thereof and Products Containing...

    Science.gov (United States)

    2012-04-03

    ... INTERNATIONAL TRADE COMMISSION [DN 2889] Certain Computer and Computer Peripheral Devices and... Certain Computer and Computer Peripheral Devices and Components Thereof and Products Containing the Same... importation, and the sale within the United States after importation of certain computer and computer...

  20. METRIC context unit architecture

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, R.O.

    1988-01-01

    METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.
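
    The core claim, keeping one execution pipeline full by dispatching instructions from several independent streams, can be illustrated with a toy round-robin interleaver (pure simulation; the stream contents and the assumption that every instruction is always ready are invented):

        from collections import deque

        def dispatch(streams):
            # Each cycle, issue the next instruction from some context, so a
            # stall in one stream can be hidden by work from the others.
            queues = deque(deque(s) for s in streams)
            issued = []
            while queues:
                q = queues.popleft()
                issued.append(q.popleft())  # dispatch into the shared pipeline
                if q:
                    queues.append(q)        # context still has work: requeue it
            return issued

        ctx_a = ["a.load", "a.add", "a.store"]
        ctx_b = ["b.mul", "b.sub"]
        print(dispatch([ctx_a, ctx_b]))
        # ['a.load', 'b.mul', 'a.add', 'b.sub', 'a.store']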

  1. The Need for Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Bernier, David

    2011-01-01

    Broadening computer science learning to include more students is a crucial item on the United States' education agenda, these authors say. Although policymakers advocate more computer science expertise, computer science offerings in high schools are few--and actually shrinking. In addition, poorly resourced schools with a high percentage of…

  2. On-demand rather than daily-routine chest radiography prescription may change neither the number nor the impact of chest computed tomography and ultrasound studies in a multidisciplinary intensive care unit.

    Science.gov (United States)

    Kröner, Anke; Binnekade, Jan M; Graat, Marleen E; Vroom, Margreeth B; Stoker, Jaap; Spronk, Peter E; Schultz, Marcus J

    2008-01-01

    Elimination of daily-routine chest radiographs (CXRs) may influence chest computed tomography (CT) and ultrasound practice in critically ill patients. This was a retrospective cohort study including all patients admitted to a university-affiliated intensive care unit during two consecutive periods of 5 months, one before and one after elimination of daily-routine CXR. Chest CT and ultrasound studies were identified retrospectively by using the radiology department information system. Indications for, and the diagnostic/therapeutic yield of, chest CT and ultrasound studies were collected. Elimination of daily-routine CXR resulted in a decrease of CXRs per patient day from 1.1 +/- 0.3 to 0.6 +/- 0.4 (P < 0.05). Neither the number of chest CT studies nor the ratio of chest CT studies per patient day changed with the intervention: Before elimination of daily-routine CXR, 52 chest CT studies were obtained from 747 patients; after elimination, 54 CT studies were obtained from 743 patients. Similarly, chest ultrasound practice was not affected by the change of CXR strategy: Before and after elimination, 21 and 27 chest ultrasound studies were performed, respectively. Also, the timing of chest CT and ultrasound studies was not different between the two study periods. During the two periods, 40 of 106 chest CT studies (38%) and 18 of 48 chest ultrasound studies (38%) resulted in a change in therapy. The combined therapeutic yield of chest CT and ultrasound studies did not change with elimination of daily-routine CXR. Elimination of daily-routine CXRs may not affect chest CT and ultrasound practice in a multidisciplinary intensive care unit.

  3. A 970 Hounsfield units (HU) threshold of kidney stone density on non-contrast computed tomography (NCCT) improves patients' selection for extracorporeal shockwave lithotripsy (ESWL): evidence from a prospective study.

    Science.gov (United States)

    Ouzaid, Idir; Al-qahtani, Said; Dominique, Sébastien; Hupertan, Vincent; Fernandez, Pédro; Hermieu, Jean-François; Delmas, Vincent; Ravery, Vincent

    2012-12-01

    What's known on the subject? and What does the study add? Stone density on non-contrast computed tomography (NCCT) is reported to be a prognostic factor for extracorporeal shockwave lithotripsy (ESWL). In this prospective study, we determined that a 970 HU threshold of stone density is a very specific and sensitive threshold beyond which the likelihood of being rendered stone free is poor. Thus, NCCT evaluation of stone density before ESWL may be useful to identify which patients should be offered alternative treatment to optimise their outcome. • To evaluate the usefulness of measuring urinary calculi attenuation values by non-contrast computed tomography (NCCT) for predicting the outcome of treatment by extracorporeal shockwave lithotripsy (ESWL). • We prospectively evaluated 50 patients with urinary calculi of 5-22 mm undergoing ESWL. • All patients had NCCT at 120 kV and 100 mA on a spiral CT scanner. Patient age, sex, body mass index, stone laterality, stone size, stone attenuation values (Hounsfield units [HU]), stone location, and presence of a JJ stent were studied as potential predictors. • The outcome was evaluated 4 weeks after the ESWL session by NCCT. • ESWL success was defined as patients being stone-free (SF) or having clinically insignificant residual fragments (CIRF) after ESWL treatment. • Stones of patients who became SF or had CIRF had a lower density compared with stones in patients with residual fragments [mean (SD) 715 (260) vs 1196 (171) HU, P < 0.05]. • A threshold of 970 HU, beyond which the likelihood of ESWL success was poor, was identified. • The use of NCCT to determine the attenuation values of urinary calculi before ESWL helps to predict treatment outcome, and, consequently, could be helpful in planning alternative treatment for patients with a likelihood of a poor outcome from ESWL. © 2012 THE AUTHORS. BJU INTERNATIONAL © 2012 BJU INTERNATIONAL.

  4. SU-E-J-70: Feasibility Study of Dynamic Arc and IMRT Treatment Plans Utilizing Vero Treatment Unit and IPlan Planning Computer for SRS/FSRT Brain Cancer Patients

    International Nuclear Information System (INIS)

    Huh, S; Lee, S; Dagan, R; Malyapa, R; Mendenhall, N; Mendenhall, W; Ho, M; Hough, D; Yam, M; Li, Z

    2014-01-01

    Purpose: To investigate the feasibility of utilizing Dynamic Arc (DA) and IMRT with the 5-mm MLC leaves of the VERO treatment unit for non-invasive stereotactic treatment of SRS/FSRT brain cancer patients. The DA and IMRT plans using the VERO unit (BrainLab Inc, USA) are compared with cone-based plans and proton plans to evaluate their dosimetric advantages. Methods: The Vero treatment unit has unique features such as no rotational or translational movement of the table during treatment, Dynamic Arc/IMRT, tracking of IR markers, and limited ring rotation. Accuracies of the image fusions using CBCT, orthogonal x-rays, and CT were evaluated to be less than ~0.7 mm with a custom-made target phantom with 18 hidden targets. A 1-mm margin is given to the GTV to determine the PTV for planning constraints, considering all the uncertainties of the planning computer and the mechanical uncertainties of the treatment unit. Also, double-scattering (DS) proton plans with 6F to 9F beams and typical clinical parameters, multiple-isocenter plans with 6 to 21 isocenters, and DA/IMRT plans are evaluated to investigate the dosimetric advantages of DA/IMRT for complex target shapes. Results: The patients fall into 3 groups: (1) Group A (complex target shape): CIs are the same for IMRT, and the DGI of the proton plans is better by 9.5% than that of the IMRT plans; (2) Group B: the CI of the DA plans (1.91+/-0.4) is better than that of cone-based plans, while the DGI of the DA plans (4.60+/-1.1) is better than that of cone-based plans (5.32+/-1.4); (3) Group C (small spherical targets): the CIs of the DA and cone-based plans are almost the same. Conclusion: For small spherical targets, cone-based plans are superior to the other 2 plan types: DS proton and DA plans. For complex or irregular targets, dynamic arc and IMRT plans are comparable to cone-based and proton plans.

  5. Optical Computing

    OpenAIRE

    Woods, Damien; Naughton, Thomas J.

    2008-01-01

    We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...

  6. First principle calculations of effective exchange integrals: Comparison between SR (BS) and MR computational results

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, Kizashi [Institute for Nano Science Design Center, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan and TOYOTA Physical and Chemical Research Institute, Nagakute, Aichi, 480-1192 (Japan); Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Yamada, Satoru; Isobe, Hiroshi; Okumura, Mitsutaka [Department of Chemistry, Graduate School of Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 (Japan)

    2015-01-22

    First-principles calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single-reference (SR) methods. Mukherjee-type (Mk) state-specific (SS) MR coupled-cluster (CC) calculations using natural orbital (NO) references of ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations were also performed for these species. Comparison between UHF-NO(UNO)-MkMRCC and BS UHF-CC computational results indicated that spin contamination of the UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error to yield good agreement with MkMRCC in energy. CC doubles with spin-unrestricted Brueckner orbitals (UBD) was furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, and therefore the AP scheme for UBCCD easily removed the rest of the spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for total spin angular momenta were examined for the AP correction of hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded S-T gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning the first-principles calculation of J values in di- and poly-radical species. It was found that BS (AP) methods reproduce the MkMRCCSD results, indicating their applicability to large exchange-coupled systems.
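
    For reference, the approximate spin-projection estimate of J used in such broken-symmetry calculations is commonly written in the form known as Yamaguchi's formula (stated here from the general literature rather than extracted from this abstract), for the Heisenberg Hamiltonian H = -2J S_a·S_b:

        \[ J_{ab} = \frac{E^{\mathrm{BS}}_{\mathrm{LS}} - E_{\mathrm{HS}}}
                         {\langle \hat{S}^{2} \rangle_{\mathrm{HS}} - \langle \hat{S}^{2} \rangle^{\mathrm{BS}}_{\mathrm{LS}}} \]

    Here the E and ⟨S²⟩ are the energies and total-spin expectation values of the high-spin and broken-symmetry low-spin solutions. Spin contamination enters through the denominator, which is why the quality of the AP correction tracks how ⟨S²⟩ is computed.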

  7. Homology modeling, docking studies and molecular dynamic simulations using graphical processing unit architecture to probe the type-11 phosphodiesterase catalytic site: a computational approach for the rational design of selective inhibitors.

    Science.gov (United States)

    Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola

    2013-12-01

    Phosphodiesterase 11 (PDE11) is the latest isoform of the PDE family to be identified, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 still remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently, a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data on PDE11 X-ray structures, we found it interesting to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a graphics processing unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally suggest guidelines for the identification and synthesis of potent and selective inhibitors. © 2013 John Wiley & Sons A/S.

  8. Computer group

    International Nuclear Information System (INIS)

    Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.

    1983-01-01

    The computer group has been reorganized to take charge of the general-purpose computers DEC10 and VAX and the computer network (Dataswitch, DECnet, IBM connections to GSI and IPP, preparation for Datex-P). (orig.)

  9. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  10. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  11. Computed tomography device

    International Nuclear Information System (INIS)

    Ohhashi, A.

    1985-01-01

    A computed tomography device comprising a subtraction unit which obtains differential data strings representing the difference between each time-serial projection data string of a group of projection data strings corresponding to a prospective reconstruction image generated by projection data strings acquired by a data acquisition system, a convolution unit which convolves each time-serial projection data string of the group of projection data strings corresponding to the prospective reconstruction image, and a back-projection unit which back-projects the convolved data strings
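
    The subtraction/convolution/back-projection chain in this claim is essentially the classic filtered back-projection pipeline. A minimal NumPy sketch of the three stages follows; the parallel-beam geometry, the ramp filter, and nearest-neighbour interpolation are simplifying assumptions, not details of the patented device.

        import numpy as np

        def fbp(sinogram, angles_deg):
            # sinogram: one row of detector readings per projection angle.
            n = sinogram.shape[1]
            xs = np.arange(n) - n / 2
            X, Y = np.meshgrid(xs, xs)
            ramp = np.abs(np.fft.fftfreq(n))  # simple ramp (Ram-Lak) filter
            img = np.zeros((n, n))
            for row, theta in zip(sinogram, np.deg2rad(angles_deg)):
                # Convolution unit: filter each projection (product in Fourier space).
                filtered = np.real(np.fft.ifft(np.fft.fft(row) * ramp))
                # Back-projection unit: smear the filtered projection over the image.
                t = np.clip((X * np.cos(theta) + Y * np.sin(theta) + n / 2).astype(int),
                            0, n - 1)
                img += filtered[t]
            return img

        # Subtraction unit: difference of two time-serial projection data sets.
        a = np.random.rand(180, 64)
        b = np.random.rand(180, 64)
        difference_image = fbp(a - b, np.arange(180))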

  12. CONCEPT computer code

    International Nuclear Information System (INIS)

    Delene, J.

    1984-01-01

    CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated
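
    The two cost adjustments named at the end of the abstract are standard engineering-economics quantities. The sketch below uses generic textbook approximations (compound escalation to the start of construction, plus interest accrued on evenly spread expenditures), not CONCEPT's internal algorithms; all rates and durations are illustrative.

        def escalated_cost(base_cost, esc_rate, years):
            # Compound escalation from the estimate date to construction start.
            return base_cost * (1.0 + esc_rate) ** years

        def interest_during_construction(overnight_cost, rate, build_years):
            # Spend is spread evenly; each year's outlay (placed mid-year)
            # accrues interest until the plant is complete.
            spend = overnight_cost / build_years
            return sum(spend * ((1.0 + rate) ** (build_years - (k + 0.5)) - 1.0)
                       for k in range(build_years))

        base = 2.0e9  # illustrative overnight cost, dollars
        total = escalated_cost(base, 0.05, 3)
        total += interest_during_construction(total, 0.08, 6)
        print(f"total capital investment: {total / 1e9:.2f} billion dollars")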

  13. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding the structure of a single neuron. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real n

  14. Generating Units

    Data.gov (United States)

    Department of Homeland Security — Generating Units are any combination of physically connected generators, reactors, boilers, combustion turbines, and other prime movers operated together to produce...

  15. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  16. (Some) Computer Futures: Mainframes.

    Science.gov (United States)

    Joseph, Earl C.

    Possible futures for the world of mainframe computers can be forecast through studies identifying forces of change and their impact on current trends. Some new prospects for the future have been generated by advances in information technology; for example, recent United States successes in applied artificial intelligence (AI) have created new…

  17. 32-Bit FASTBUS computer

    International Nuclear Information System (INIS)

    Blossom, J.M.; Hong, J.P.; Kellner, R.G.

    1985-01-01

    Los Alamos National Laboratory is building a 32-bit FASTBUS computer using the NATIONAL SEMICONDUCTOR 32032 central processing unit (CPU) and containing 16 million bytes of memory. The board can act both as a FASTBUS master and as a FASTBUS slave. It contains a custom direct memory access (DMA) channel which can perform 80 million bytes per second block transfers across the FASTBUS

  18. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer's point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.

  19. Quantum Computing

    OpenAIRE

    Scarani, Valerio

    1998-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  20. Unit Manning

    National Research Council Canada - National Science Library

    McGinniss, Mike

    2003-01-01

    .... This decision combines two crucial initiatives: first, transforming the Army from an individual soldier replacement system to a unit manning system that enhances cohesion and keeps trained soldiers, leaders, and commanders together longer, thereby...

  1. Detector Unit

    CERN Multimedia

    1960-01-01

    Original detector unit of the Instituut voor Kernfysisch Onderzoek (IKO) BOL project. This detector unit shows that silicon detectors for nuclear physics particle detection were already developed and in use in the 1960's in Amsterdam. Also the idea of putting 'strips' onto the silicon for high spatial resolution of a particle's impact on the detector were implemented in the BOL project which used 64 of these detector units. The IKO BOL project with its silicon particle detectors was designed, built and operated from 1965 to roughly 1977. Detector Unit of the BOL project: These detectors, notably the ‘checkerboard detector’, were developed during the years 1964-1968 in Amsterdam, The Netherlands, by the Natuurkundig Laboratorium of the N.V. Philips Gloeilampen Fabrieken. This was done in close collaboration with the Instituut voor Kernfysisch Onderzoek (IKO) where the read-out electronics for their use in the BOL Project was developed and produced.

  2. Teaching Psychology Students Computer Applications.

    Science.gov (United States)

    Atnip, Gilbert W.

    This paper describes an undergraduate-level course designed to teach the applications of computers that are most relevant in the social sciences, especially psychology. After an introduction to the basic concepts and terminology of computing, separate units were devoted to word processing, data analysis, data acquisition, artificial intelligence,…

  3. Drilling unit

    Energy Technology Data Exchange (ETDEWEB)

    Umanchik, N P; Demin, A V; Khrustalev, N N; Linnik, G N; Lovchev, S V; Rozin, M M; Sidorov, R V; Sokolov, S I; Tsaregradskiy, Yu P

    1981-01-01

    A drilling unit is proposed which includes a hydraulic lifter, hydraulic multiple-cylinder pump with valve distribution and sectional drilling pump with separators of the working and flushing fluid. In order to reduce metal consumption and the overall dimensions of the drilling unit, the working cavity of each cylinder of the hydraulic multiple-cylinder pump is equipped with suction and injection valves and is hydraulically connected to the working cavity by one of the sections of the drilling pump.

  4. Computational Medicine

    DEFF Research Database (Denmark)

    Nygaard, Jens Vinge

    2017-01-01

    The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours.

  5. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  6. Quantum computers and quantum computations

    International Nuclear Information System (INIS)

    Valiev, Kamil' A

    2005-01-01

    This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)

  7. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  8. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  9. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  10. Spatial Computation

    Science.gov (United States)

    2003-12-01

    Computation and today’s microprocessors with the approach to operating system architecture, and the controversy between microkernels and monolithic kernels...Both Spatial Computation and microkernels break away a relatively monolithic architecture into individual lightweight pieces, well specialized...for their particular functionality. Spatial Computation removes global signals and control, in the same way microkernels remove the global address

  11. Mission: Define Computer Literacy. The Illinois-Wisconsin ISACS Computer Coordinators' Committee on Computer Literacy Report (May 1985).

    Science.gov (United States)

    Computing Teacher, 1985

    1985-01-01

    Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programing are outlined, together with additional units on spreadsheets, simulations,…

  12. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  13. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  14. Quantum computation

    International Nuclear Information System (INIS)

    Deutsch, D.

    1992-01-01

    As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into computers in the future in order for their components to function. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)

  15. Computer software.

    Science.gov (United States)

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of types, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  16. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  17. Computing with synthetic protocells.

    Science.gov (United States)

    Courbet, Alexis; Molina, Franck; Amar, Patrick

    2015-09-01

    In this article we present a new kind of computing device that uses biochemical reaction networks as building blocks to implement logic gates. The architecture of a computing machine relies on these generic and composable building blocks, computation units, which can be used in multiple instances to perform complex boolean functions. Standard logical operations are implemented by biochemical networks, encapsulated and insulated within synthetic vesicles called protocells. These protocells are capable of exchanging energy and information with each other through transmembrane electron transfer. In the paradigm of computation we propose, protoputing, a machine can solve only one problem and therefore has to be built specifically. Thus, the programming phase in the standard computing paradigm is represented in our approach by the set of assembly instructions (specific attachments) that directs the wiring of the protocells that constitute the machine itself. To demonstrate the computing power of protocellular machines, we apply it to solve an NP-complete problem, known to be very demanding in computing power, the 3-SAT problem. We show how to program the assembly of a machine that can verify the satisfiability of a given boolean formula. Then we show how to use the massive parallelism of these machines to verify in less than 20 min all the valuations of the input variables and output a fluorescent signal when the formula is satisfiable or no signal at all otherwise.
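
    The exhaustive check over all valuations that these machines perform in parallel can be mimicked sequentially in a few lines. The encoding below (a literal as a signed integer, DIMACS-style) is an illustrative assumption, not the authors' protocell wiring; it shows only the boolean core of the 3-SAT verification described above.

        from itertools import product

        def satisfiable(clauses, n_vars):
            # Brute-force 3-SAT check: try every valuation of n_vars variables.
            # A clause is a tuple of signed ints: 3 means x3, -3 means NOT x3.
            for valuation in product([False, True], repeat=n_vars):
                def lit(l):
                    value = valuation[abs(l) - 1]
                    return value if l > 0 else not value
                if all(any(lit(l) for l in clause) for clause in clauses):
                    return valuation  # a satisfying assignment was found
            return None

        # (x1 or x2 or not x3) and (not x1 or x3 or x2) and (not x2 or not x3 or x1)
        print(satisfiable([(1, 2, -3), (-1, 3, 2), (-2, -3, 1)], 3))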

  18. 77 FR 26041 - Certain Computers and Computer Peripheral Devices and Components Thereof and Products Containing...

    Science.gov (United States)

    2012-05-02

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-841] Certain Computers and Computer Peripheral... after importation of certain computers and computer peripheral devices and components thereof and... industry in the United States exists as required by subsection (a)(2) of section 337. The complainant...

  19. On teaching computer ethics within a computer science department.

    Science.gov (United States)

    Quinn, Michael J

    2006-04-01

    The author has surveyed a quarter of the accredited undergraduate computer science programs in the United States. More than half of these programs offer a 'social and ethical implications of computing' course taught by a computer science faculty member, and there appears to be a trend toward teaching ethics classes within computer science departments. Although the decision to create an 'in house' computer ethics course may sometimes be a pragmatic response to pressure from the accreditation agency, this paper argues that teaching ethics within a computer science department can provide students and faculty members with numerous benefits. The paper lists topics that can be covered in a computer ethics course and offers some practical suggestions for making the course successful.

  20. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  1. [Conservation Units.

    Science.gov (United States)

    Texas Education Agency, Austin.

    Each of the six instructional units deals with one aspect of conservation: forests, water, rangeland, minerals (petroleum), and soil. The area of the elementary school curriculum with which each correlates is indicated. Lists of general and specific objectives are followed by suggested teaching procedures, including ideas for introducing the…

  2. Instruction Set Architectures for Quantum Processing Units

    OpenAIRE

    Britt, Keith A.; Humble, Travis S.

    2017-01-01

    Progress in quantum computing hardware raises questions about how these devices can be controlled, programmed, and integrated with existing computational workflows. We briefly describe several prominent quantum computational models, their associated quantum processing units (QPUs), and the adoption of these devices as accelerators within high-performance computing systems. Emphasizing the interface to the QPU, we analyze instruction set architectures based on reduced and complex instruction s...

  3. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  4. Computational biomechanics

    International Nuclear Information System (INIS)

    Ethier, C.R.

    2004-01-01

    Computational biomechanics is a fast-growing field that integrates modern biological techniques and computer modelling to solve problems of medical and biological interest. Modelling of blood flow in the large arteries is the best-known application of computational biomechanics, but there are many others. Described here is work being carried out in the laboratory on the modelling of blood flow in the coronary arteries and on the transport of viral particles in the eye. (author)

  5. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC, or Xeon Phi) processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Arrays (FPGAs) may be a better solution for energy efficiency when their computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  6. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC, or Xeon Phi) processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Arrays (FPGAs) may be a better solution for energy efficiency when their computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
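
    The matchmaking between algorithm behavior and hardware that this abstract (and its twin above) describes can be caricatured as a dispatch rule. The task attributes and routing choices below are hypothetical illustrations of the paper's argument, not part of the authors' system.

        # Hypothetical dispatcher sketch: route a spatiotemporal task to the class
        # of accelerator the abstract suggests suits it; rules are illustrative only.
        def pick_backend(task):
            if task.get("energy_budget") == "strict":
                return "FPGA"                   # better energy efficiency at similar speed
            if task.get("data_parallel") and not task.get("branch_heavy"):
                return "GPU cluster"            # regular, massively parallel workloads
            if task.get("branch_heavy"):
                return "MIC/Xeon Phi cluster"   # many general-purpose cores
            return "Hadoop/Spark"               # bulk big-data processing fallback

        print(pick_backend({"data_parallel": True}))      # -> GPU cluster
        print(pick_backend({"energy_budget": "strict"}))  # -> FPGA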

  7. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what...... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...

  8. Numerical computations with GPUs

    CERN Document Server

    Kindratenko, Volodymyr

    2014-01-01

    This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including solution of linear equations and FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to

  9. AGRIS: Description of computer programs

    International Nuclear Information System (INIS)

    Schmid, H.; Schallaboeck, G.

    1976-01-01

    The set of computer programs used at the AGRIS (Agricultural Information System) Input Unit at the IAEA, Vienna, Austria to process the AGRIS computer-readable data is described. The processing flow is illustrated. The configuration of the IAEA's computer, a list of error messages generated by the computer, the EBCDIC code table extended for AGRIS and INIS, the AGRIS-6 bit code, the work sheet format, and job control listings are included as appendixes. The programs are written for an IBM 370, model 145, operating system OS or VS, and require a 130K partition. The programming languages are PL/1 (F-compiler) and Assembler

  10. Enhanced Master Controller Unit Tester

    Science.gov (United States)

    Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony

    2007-01-01

    The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.

  11. Contamination analysis unit

    International Nuclear Information System (INIS)

    Gregg, H.R.; Meltzer, M.P.

    1996-01-01

    The portable Contamination Analysis Unit (CAU) measures trace quantities of surface contamination in real time. The detector head of the portable contamination analysis unit has an opening with an O-ring seal, one or more vacuum valves and a small mass spectrometer. With the valve closed, the mass spectrometer is evacuated with one or more pumps. The O-ring seal is placed against a surface to be tested and the vacuum valve is opened. Data is collected from the mass spectrometer and a portable computer provides contamination analysis. The CAU can be used to decontaminate and decommission hazardous and radioactive surfaces by measuring residual hazardous surface contamination, such as tritium and trace organics. It provides surface contamination data for research and development applications as well as real-time process control feedback for industrial cleaning operations and can be used to determine the readiness of a surface to accept bonding or coatings. 1 fig

  12. Laser color recording unit

    Science.gov (United States)

    Jung, E.

    1984-05-01

    A color recording unit was designed for output and control of digitized picture data within computer-controlled reproduction and picture processing systems. In order to obtain a color proof picture of high quality, similar to a color print, with reduced time and material consumption, a photographic color film material was exposed pixelwise by modulated laser beams of three wavelengths for red, green, and blue light. Components from different manufacturers (lasers, acousto-optic modulators, and polygon mirrors) were tested, as were different recording methods (continuous-tone or screened mode, and a drum or flatbed recording principle). Besides the application for the graphic arts - the proof recorder CPR 403 with continuous-tone color recording on a drum scanner - such a color hardcopy peripheral unit with large picture formats and high resolution can be used in medicine, communication, and satellite picture processing.

  13. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general-purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (Stream), and the new framework, OpenCL, that tries to unify the GPGPU computing models.
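
    Of the two parallelism styles named above, data-based parallelism is the easier to convey without GPU hardware: one function is applied independently to every element, which is the work each GPU thread does for one index. The sketch below is a CPU-side stand-in using Python's standard library, not CUDA itself; the saxpy example and its constants are illustrative.

        # CPU stand-in for a data-parallel GPU kernel: each element is processed
        # independently, exactly the work one CUDA thread would do for one index.
        from multiprocessing import Pool

        def saxpy(args):
            a, x, y = args
            return a * x + y  # one "thread's" worth of work

        if __name__ == "__main__":
            a, xs, ys = 2.0, range(8), range(8, 16)
            with Pool() as pool:
                print(pool.map(saxpy, [(a, x, y) for x, y in zip(xs, ys)]))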

  14. Quantum Computing

    Indian Academy of Sciences (India)

    Quantum Computing – Building Blocks of a Quantum Computer. C S Vijay and Vishal Gupta. General Article. Resonance – Journal of Science Education, Volume 5, Issue 9, September 2000, pp. 69-81.

  15. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  16. Quantum Computing

    Indian Academy of Sciences (India)

    In the first part of this article, we had looked at how quantum physics can be harnessed to make the building blocks of a quantum computer. In this concluding part, we look at algorithms which can exploit the power of this computational device, and some practical difficulties in building such a device. Quantum Algorithms.

  17. Quantum computing

    OpenAIRE

    Burba, M.; Lapitskaya, T.

    2017-01-01

    This article gives an elementary introduction to quantum computing. It is a draft for a book chapter of the "Handbook of Nature-Inspired and Innovative Computing", Eds. A. Zomaya, G.J. Milburn, J. Dongarra, D. Bader, R. Brent, M. Eshaghian-Wilner, F. Seredynski (Springer, Berlin Heidelberg New York, 2006).

  18. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  19. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  20. Computability theory

    CERN Document Server

    Weber, Rebecca

    2012-01-01

    What can we compute--even with unlimited resources? Is everything within reach? Or are computations necessarily drastically limited, not just in practice, but theoretically? These questions are at the heart of computability theory. The goal of this book is to give the reader a firm grounding in the fundamentals of computability theory and an overview of currently active areas of research, such as reverse mathematics and algorithmic randomness. Turing machines and partial recursive functions are explored in detail, and vital tools and concepts including coding, uniformity, and diagonalization are described explicitly. From there the material continues with universal machines, the halting problem, parametrization and the recursion theorem, and thence to computability for sets, enumerability, and Turing reduction and degrees. A few more advanced topics round out the book before the chapter on areas of research. The text is designed to be self-contained, with an entire chapter of preliminary material including re...
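
    Diagonalization, one of the vital tools the book describes explicitly, has a finite caricature that actually runs: given any finite list of total functions, construct one that differs from the i-th function at input i, so it cannot appear anywhere in the list. The helper name diagonal below is mine, not the book's notation.

        # Finite caricature of the diagonalization argument: the constructed
        # function disagrees with fs[i] at input i, so it appears nowhere in fs.
        def diagonal(fs):
            return lambda i: fs[i](i) + 1

        fs = [lambda n: 0, lambda n: n, lambda n: n * n]
        d = diagonal(fs)
        for i, f in enumerate(fs):
            assert d(i) != f(i)            # differs from every listed function
        print([d(i) for i in range(3)])    # [1, 2, 5]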

  1. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Full Text Available Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber
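
    Of the computational machinery cataloged above, path planning condenses most readily into code. Below is a minimal breadth-first search over a grid streetscape; the grid encoding (0 = walkable, 1 = blocked) and the function name are my own illustration, not a method taken from the reviewed literature.

        from collections import deque

        def shortest_path(grid, start, goal):
            # Breadth-first search on a 4-connected grid; returns a list of cells.
            rows, cols = len(grid), len(grid[0])
            parents, frontier = {start: None}, deque([start])
            while frontier:
                cell = frontier.popleft()
                if cell == goal:                 # walk parents back to the start
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = parents[cell]
                    return path[::-1]
                r, c = cell
                for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    nr, nc = step
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and step not in parents):
                        parents[step] = cell
                        frontier.append(step)
            return None                          # goal unreachable

        grid = [[0, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        print(shortest_path(grid, (0, 0), (2, 0)))  # routes around the wall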

  2. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    Full Text Available Abstract. The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed over the last decade in the foreign scientific and educational literature, and to substantiate its importance, practical utility, and right to affirmation in Russian education. Methods. The research is based on the analysis of foreign studies of the phenomenon of computational thinking and the ways of its formation in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept «computational thinking» is analyzed from the point of view of intuitive understanding and of scientific and applied aspects. It is shown how computational thinking has evolved with the development of computer hardware and software. The practice-oriented interpretation of computational thinking which is dominant among educators is described, along with some ways of its formation. It is shown that computational thinking is a metasubject result of general education as well as one of its tools. From the point of view of the author, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described; this process is connected with the evolution of computer and information technologies and the increasing number of tasks whose effective solution requires computational thinking. The author substantiates the affirmation that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. New metasubject result of education associated with

  3. Microcontroller Unit

    International Nuclear Information System (INIS)

    Tulaev, A.B.

    1994-01-01

    The general-purpose microcontroller unit based on an 8-bit single-chip microcomputer of the MCS-51 family is described. The controller has data and program memories, a serial interface, and an external bus for functional I/O extensions. The controller consists of a microcomputer chip, up to 4 ROM-RAM chips, and 10 SSI and MSI chips, and it measures 160x120 mm. Both hardware and software microsystem debugging tools are described. (author). 8 refs., 1 fig., 1 tab

  4. Computer interfacing

    CERN Document Server

    Dixey, Graham

    1994-01-01

    This book explains how computers interact with the world around them and therefore how to make them a useful tool. Topics covered include descriptions of all the components that make up a computer, principles of data exchange, interaction with peripherals, serial communication, input devices, recording methods, computer-controlled motors, and printers.In an informative and straightforward manner, Graham Dixey describes how to turn what might seem an incomprehensible 'black box' PC into a powerful and enjoyable tool that can help you in all areas of your work and leisure. With plenty of handy

  5. Computational physics

    CERN Document Server

    Newman, Mark

    2013-01-01

    A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.
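
    Numerical quadrature, one of the techniques the book names, fits in a few lines of the book's language, Python; this particular snippet is mine, not taken from the text. The trapezoidal rule below integrates sin(x) over [0, π], whose exact value is 2.

        from math import sin, pi

        def trapezoid(f, a, b, n=1000):
            # Trapezoidal-rule estimate of the integral of f over [a, b], n slices.
            h = (b - a) / n
            total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
            return h * total

        print(trapezoid(sin, 0.0, pi))  # ~1.9999984 with n=1000; exact value is 2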

  6. Computational physics

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-01-15

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October.

  7. Cloud Computing

    CERN Document Server

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  8. Computational Viscoelasticity

    CERN Document Server

    Marques, Severino P C

    2012-01-01

    This text is a guide to solving problems in which viscoelasticity is present using existing commercial computational codes. The book gives information on the codes' structure and use, data preparation, and output interpretation and verification. The first part of the book introduces the reader to the subject and provides the models, equations, and notation to be used in the computational applications. The second part shows the most important computational techniques: finite element formulation and boundary element formulation, and presents the solutions of viscoelastic problems with Abaqus.

  9. Optical computing.

    Science.gov (United States)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  10. Computational physics

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October

  11. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot encompass human concepts of subjective experience and intersubjective meaningful communication, which prevents it from being genuinely transdisciplinary. (3) Philosophically, it does not sufficiently accept the deep ontological differences between various paradigms such as von Foerster’s second-order

  12. Computational method for thermoviscoelasticity with application to rock mechanics. [Ph. D. Thesis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.C.

    1984-01-01

    Large-scale numerical computations associated with rock mechanics problems have required efficient and economical models for predicting temperature, stress, failure, and deformed structural configuration under various loading conditions. To meet this requirement, the complex dependence of the properties of geological materials on time and temperature is modified to yield a reduced time scale as a function of time and temperature under the thermorheologically simple material (TSM) postulate. The thermorheologically linear concept is adopted in the finite element formulation by uncoupling thermal and mechanical responses. The thermal responses, based on transient heat conduction or convective diffusion, are formulated using the two-point recurrence scheme and the upwinding scheme, respectively. An incremental solution procedure with an implicit time stepping scheme is proposed for the solution of the thermoviscoelastic response. The proposed thermoviscoelastic solution algorithm is based on uniaxial creep experimental data and the corresponding temperature shift functions, and is intended to minimize computational effort by allowing large time step sizes with stable solutions. A thermoelastic fracture formulation is also presented by introducing the degenerate quadratic isoparametric singular element for thermally induced line crack problems. The stress intensity factors are computed by use of the displacement method. The efficiency of the presented formulation and solution algorithm is initially demonstrated by comparison with other available solutions for a variety of problems. Subsequent field applications are made to simulate the post-burn and post-repose phases of an underground coal conversion (UCC) experiment and in-situ nuclear waste disposal management problems. 137 references, 48 figures, 6 tables.
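
    The abstract does not spell out the reduced time scale, but under the TSM postulate it is conventionally defined through a temperature shift function; the display below is the standard textbook form (in LaTeX), not necessarily the exact notation of this thesis.

        \xi(t) \;=\; \int_0^{t} \frac{dt'}{a_T\left(T(t')\right)}

    Here a_T is the temperature shift function obtained from the uniaxial creep data, so the viscoelastic response under a varying temperature history T(t') reduces to the reference-temperature response evaluated at the reduced time \xi.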

  13. 1982 UCC-ND/GAT environmental protection seminar: proceedings

    International Nuclear Information System (INIS)

    1983-04-01

    This environmental protection seminar was divided into seven sessions: (1) general environmental protection, (2) air and water pollution control, (3) spill control and countermeasures, (4) toxic materials control, (5) hazardous materials control, (6) environmental protection projects, and (7) cost benefit analysis. Separate abstracts have been prepared for the 41 papers presented therein

  14. 1982 UCC-ND/GAT environmental protection seminar: proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1983-04-01

    This environmental protection seminar was divided into seven sessions: (1) general environmental protection, (2) air and water pollution control, (3) spill control and countermeasures, (4) toxic materials control, (5) hazardous materials control, (6) environmental protection projects, and (7) cost benefit analysis. Separate abstracts have been prepared for the 41 papers presented therein. (ACR)

  15. A governmentality analysis of Professionshøjskolen UCC

    DEFF Research Database (Denmark)

    Berendsen, Ida Theodora Wolf

    2009-01-01

    ...pedagogue training colleges. Here I concluded, on the basis of Bush's typology of "Educational leadership and management" (Bush 2003), that the following three forms were characteristic of the management of teacher and pedagogue training colleges before the college concept was written out of the law by the "Lov om professionshøjskoler" (Act on University Colleges; LOV nr. 562 af 06... disasters, but which also produced results. I have also seen some who used persuasion and seduced people into things that one more or less should not do, but that did not have to be the father role. The father role is rather where one shifts responsibility away and says, "You look after yourselves; if you have a problem...

  16. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword. Preface. Computing Paradigms: Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading. Cloud Computing Fundamentals: Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  17. Personal Computers.

    Science.gov (United States)

    Toong, Hoo-min D.; Gupta, Amar

    1982-01-01

    Describes the hardware, software, applications, and current proliferation of personal computers (microcomputers). Includes discussions of microprocessors, memory, output (including printers), application programs, the microcomputer industry, and major microcomputer manufacturers (Apple, Radio Shack, Commodore, and IBM). (JN)

  18. Computational Literacy

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Robering, Klaus

    2016-01-01

    In recent years, there has been a growing interest in and recognition of the importance of Computational Literacy, a skill generally considered to be necessary for success in the 21st century. While much research has concentrated on requirements, tools, and teaching methodologies for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered

  19. Computing Religion

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Braxton, Donald M.; Upal, Afzal

    2012-01-01

    The computational approach has become an invaluable tool in many fields that are directly relevant to research in religious phenomena. Yet the use of computational tools is almost absent in the study of religion. Given that religion is a cluster of interrelated phenomena and that research concerning these phenomena should strive for multilevel analysis, this article argues that the computational approach offers new methodological and theoretical opportunities to the study of religion. We argue that the computational approach offers 1.) an intermediary step between any theoretical construct and its targeted empirical space and 2.) a new kind of data which allows the researcher to observe abstract constructs, estimate likely outcomes, and optimize empirical designs. Because sophisticated multilevel research is a collaborative project, we also seek to introduce to scholars of religion some

  20. Computational Controversy

    NARCIS (Netherlands)

    Timmermans, Benjamin; Kuhn, Tobias; Beelen, Kaspar; Aroyo, Lora

    2017-01-01

    Climate change, vaccination, abortion, Trump: Many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have

  1. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    emergence of supercomputers led to the use of computer simulation as an .... Scientific and engineering applications (e.g., TeraGrid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical Layer.

  2. Computer tomographs

    International Nuclear Information System (INIS)

    Niedzwiedzki, M.

    1982-01-01

    The physical foundations of and developments in transmission and emission computer tomography are presented. On the basis of the available literature and private communications, a comparison is made of the various transmission tomographs. A new technique of computer emission tomography (ECT), unknown in Poland, is described. An evaluation of two methods of ECT, namely positron and single-photon emission tomography, is made. (author)

  3. Computational sustainability

    CERN Document Server

    Kersting, Kristian; Morik, Katharina

    2016-01-01

    The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

  4. Computing farms

    International Nuclear Information System (INIS)

    Yeh, G.P.

    2000-01-01

    High-energy physics, nuclear physics, space sciences, and many other fields have large challenges in computing. In recent years, PCs have achieved performance comparable to the high-end UNIX workstations, at a small fraction of the price. We review the development and broad applications of commodity PCs as the solution to CPU needs, and look forward to the important and exciting future of large-scale PC computing

  5. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  6. Distributed GPU Computing in GIScience

    Science.gov (United States)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01


  7. Solar unit

    Energy Technology Data Exchange (ETDEWEB)

    Sukhanov, A M; Trushevskiy, S N; Tveryanovich, E V

    1982-01-01

    A solar unit is proposed which contains an inclined solar collector with supply and outlet pipelines, the first of which is connected to the source of a heat carrier, while the second is connected through the valve to the tank for collecting heated heat carrier equipped with a device for recovery. In order to improve the effectiveness of heating the heat carrier, it additionally contains a concentrator of solar radiation and a device for maintaining a level of the heat carrier in the collector in the zone of the focal spot of the concentrator, while the heat pipeline is connected to the source of the heat carrier with the help of a device for maintaining the level of the heat carrier.

  8. Cloud Computing with iPlant Atmosphere.

    Science.gov (United States)

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  9. Computational creativity

    Directory of Open Access Journals (Sweden)

    López de Mántaras Badia, Ramon

    2013-12-01

    Full Text Available New technologies, and in particular artificial intelligence, are drastically changing the nature of creative processes. Computers are playing very significant roles in creative activities such as music, architecture, fine arts, and science. Indeed, the computer is already a canvas, a brush, a musical instrument, and so on. However, we believe that we must aim at more ambitious relations between computers and creativity. Rather than just seeing the computer as a tool to help human creators, we could see it as a creative entity in its own right. This view has triggered a new subfield of Artificial Intelligence called Computational Creativity. This article addresses the question of the possibility of achieving computational creativity through some examples of computer programs capable of replicating some aspects of creative behavior in the fields of music and science.

  10. Quantum computing

    International Nuclear Information System (INIS)

    Steane, Andrew

    1998-01-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from
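
    The EPR-Bell correlations mentioned at the truncation point can be made concrete with a few lines of linear algebra: for the entangled state (|00> + |11>)/sqrt(2), spin measurements along angles a and b correlate as cos(a - b), which violates the CHSH bound of 2 that any local hidden-variable theory must obey. A small numpy check, with angles chosen to maximize the violation:

      import numpy as np

      Z = np.array([[1, 0], [0, -1]])              # Pauli matrices
      X = np.array([[0, 1], [1, 0]])
      bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

      def E(a, b):
          """Correlation of spin measurements at angles a and b (x-z plane)."""
          A = np.cos(a) * Z + np.sin(a) * X
          B = np.cos(b) * Z + np.sin(b) * X
          return bell @ np.kron(A, B) @ bell

      a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
      S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
      print(S)  # 2*sqrt(2) ~ 2.83 > 2: no local hidden-variable model fits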

  11. Quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Steane, Andrew [Department of Atomic and Laser Physics, University of Oxford, Clarendon Laboratory, Oxford (United Kingdom)

    1998-02-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from

  12. Multiparty Computations

    DEFF Research Database (Denmark)

    Dziembowski, Stefan

    In this thesis we study a problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has unbounded computing power. The thesis is based on two ... to a polynomial time black-box reduction, the complexity of adaptively secure VSS is the same as that of ordinary secret sharing (SS), where security is only required against a passive, static adversary. Previously, such a connection was only known for linear secret sharing and VSS schemes. We then show ... here and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1] Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority ...
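
    The thesis protocols are not reproduced in this record; as background only, here is a minimal sketch of plain (non-verifiable) Shamir secret sharing, the primitive that adaptively secure VSS is related to above. The prime and the threshold parameters are arbitrary illustrative choices.

      import random

      P = 2**31 - 1  # a Mersenne prime; the field for all arithmetic

      def share(secret, n, k):
          """Split `secret` into n shares, any k of which reconstruct it."""
          coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
          # Share i is the degree-(k-1) polynomial evaluated at x = i (never x = 0).
          return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = share(secret=42, n=5, k=3)
      assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice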

  13. Scientific computing

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...
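
    The record does not include the book's examples; as a flavor of the "maximum attainable accuracy" theme, here is a classic first exercise: probing double-precision machine epsilon, the smallest relative spacing of floating-point numbers near 1.

      # Find machine epsilon: the smallest eps with 1.0 + eps distinguishable from 1.0.
      eps = 1.0
      while 1.0 + eps / 2 != 1.0:
          eps /= 2
      print(eps)  # 2.220446049250313e-16 for IEEE 754 double precision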

  14. Computational Psychiatry

    Science.gov (United States)

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically-realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  15. GPU Computing Gems Emerald Edition

    CERN Document Server

    Hwu, Wen-mei W

    2011-01-01

    ".the perfect companion to Programming Massively Parallel Processors by Hwu & Kirk." -Nicolas Pinto, Research Scientist at Harvard & MIT, NVIDIA Fellow 2009-2010 Graphics processing units (GPUs) can do much more than render graphics. Scientists and researchers increasingly look to GPUs to improve the efficiency and performance of computationally-intensive experiments across a range of disciplines. GPU Computing Gems: Emerald Edition brings their techniques to you, showcasing GPU-based solutions including: Black hole simulations with CUDA GPU-accelerated computation and interactive display of

  16. Computational artifacts

    DEFF Research Database (Denmark)

    Schmidt, Kjeld; Bansler, Jørgen P.

    2016-01-01

    The key concern of CSCW research is that of understanding computing technologies in the social context of their use, that is, as integral features of our practices and our lives, and to think of their design and implementation under that perspective. However, the question of the nature of that which is actually integrated in our practices is often discussed in confusing ways, if at all. The article aims to clarify the issue and in doing so revisits and reconsiders the notion of 'computational artifact'.

  17. Computer security

    CERN Document Server

    Gollmann, Dieter

    2011-01-01

    A completely up-to-date resource on computer security Assuming no previous experience in the field of computer security, this must-have book walks you through the many essential aspects of this vast topic, from the newest advances in software and technology to the most recent information on Web applications security. This new edition includes sections on Windows NT, CORBA, and Java and discusses cross-site scripting and JavaScript hacking as well as SQL injection. Serving as a helpful introduction, this self-study guide is a wonderful starting point for examining the variety of competing sec
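
    The record only names SQL injection in passing; a minimal illustration of the defect and the standard fix, using Python's built-in sqlite3 module (table and payload are invented for the demo):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

      user_input = "' OR '1'='1"  # a classic injection payload

      # Vulnerable: string concatenation lets the payload rewrite the query.
      rows = conn.execute(
          "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()
      print(rows)  # [('admin',)] -- the attacker matched every row

      # Safe: a parameterized query treats the payload as literal data.
      rows = conn.execute(
          "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
      print(rows)  # [] -- no user is actually named "' OR '1'='1"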

  18. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope is hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  19. Computational Logistics

    DEFF Research Database (Denmark)

    Pacino, Dario; Voss, Stefan; Jensen, Rune Møller

    2013-01-01

    This book constitutes the refereed proceedings of the 4th International Conference on Computational Logistics, ICCL 2013, held in Copenhagen, Denmark, in September 2013. The 19 papers presented in this volume were carefully reviewed and selected for inclusion in the book. They are organized in topical sections named: maritime shipping, road transport, vehicle routing problems, aviation applications, and logistics and supply chain management.

  1. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  2. Computer busses

    CERN Document Server

    Buchanan, William

    2000-01-01

    As more and more equipment is interface- or 'bus'-driven, either by the use of controllers or directly from PCs, the question of which bus to use is becoming increasingly important, both in industry and in the office. 'Computer Busses' has been designed to help choose the best type of bus for the particular application. There are several books which cover individual busses, but none which provide a complete guide to computer busses. The author provides a basic theory of busses and draws examples and applications from real bus case studies. Busses are analysed using a top-down approach, helping ...

  3. Reconfigurable Computing

    CERN Document Server

    Cardoso, Joao MP

    2011-01-01

    As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a comp

  4. Termination unit

    Science.gov (United States)

    Traeholt, Chresten [Frederiksberg, DK; Willen, Dag [Klagshamn, SE; Roden, Mark [Newnan, GA; Tolbert, Jerry C [Carrollton, GA; Lindsay, David [Carrollton, GA; Fisher, Paul W [Heiskell, TN; Nielsen, Carsten Thidemann [Jaegerspris, DK

    2014-01-07

    This invention relates to a termination unit comprising an end-section of a cable. The end-section of the cable defines a central longitudinal axis and comprises end-parts of N electrical phases, an end-part of a neutral conductor, and a surrounding thermally insulating envelope adapted to contain a cooling fluid. The end-parts of the N electrical phases and the end-part of the neutral conductor each comprise at least one electrical conductor and are arranged in the cable concentrically around a core former, with phase 1 located relatively innermost and phase N relatively outermost in the cable, phase N being surrounded by the neutral conductor, and electrical insulation being arranged between neighboring electrical phases and between phase N and the neutral conductor. The end-parts of the neutral conductor and the electrical phases each comprise a contacting surface electrically connected to at least one branch current lead to provide an electrical connection. The contacting surfaces each have a longitudinal extension and are located sequentially along the longitudinal extension of the end-section of the cable. The branch current leads are individually insulated from said thermally insulating envelope by individual electrical insulators.

  5. Termination unit

    Energy Technology Data Exchange (ETDEWEB)

    Traeholt, Chresten; Willen, Dag; Roden, Mark; Tolbert, Jerry C.; Lindsay, David; Fisher, Paul W.; Nielsen, Carsten Thidemann

    2016-05-03

    The cable end section comprises end-parts of N electrical phases/neutral, and a thermally insulating envelope comprising cooling fluid. The end-parts each comprise a conductor and are arranged with phase 1 innermost and phase N outermost, surrounded by the neutral, with electrical insulation between phases and between phase N and the neutral. The end-parts comprise contacting surfaces located sequentially along the longitudinal extension of the end-section. A termination unit has an insulating envelope connected to a cryostat, with special parts at both ends comprising an adapter piece at the cable interface and a closing end-piece terminating the envelope in the end-section. The special parts house an inlet and/or outlet for cooling fluid. The space between an inner wall of the envelope and a central opening of the cable is filled with cooling fluid. The special part at the end connecting to the cryostat houses an inlet or outlet, splitting the cooling flow into a cable annular flow and a termination annular flow.

  6. From Sticks and Stones to Zeros and Ones: The Development of Computer Network Operations as an Element of Warfare. A Study of the Palestinian-Israeli Cyberconflict and What the United States Can Learn from the Interfada

    Science.gov (United States)

    2005-09-01

    Israeli teens who sabotaged a website of Hezbollah, the militantly anti-Israel guerrilla movement in Lebanon, with a defacement which placed an Israeli...ftp with out a password. FUCK HIZZBALLA!!! Sincerely digibrain & haboshanik (we are the domain masters).” • Hackers of Israel Unite: Hackers of...other defacement. 2. b1n4ry c0d3 defaced only one site on December 3, 2000. This defacement was the only one to refer to the WFD as the “World’s Fuck

  7. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). The book illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics, with emphasis on algorithmic advances that will allow re-application in other ...
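
    As a small taste of such computations, the sketch below evaluates the affine-invariant Riemannian distance between two symmetric positive-definite matrices, d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F, a metric commonly used for diffusion tensors. The matrices here are arbitrary examples, not from the book.

      import numpy as np
      from scipy.linalg import sqrtm, logm

      def spd_distance(A, B):
          """Affine-invariant Riemannian distance between SPD matrices A and B."""
          A_inv_sqrt = np.linalg.inv(sqrtm(A))    # A^(-1/2)
          M = A_inv_sqrt @ B @ A_inv_sqrt         # congruence-transformed B
          return np.linalg.norm(logm(M), 'fro')   # Frobenius norm of the matrix log

      A = np.array([[2.0, 0.5], [0.5, 1.0]])      # two example SPD matrices
      B = np.array([[1.0, 0.2], [0.2, 3.0]])
      print(spd_distance(A, B))                   # 0.0 if and only if A == B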

  8. Statistical Computing

    Indian Academy of Sciences (India)

    Sudhakar Kunte. Elements of statistical computing are discussed in this series ... inference and finite population sampling ... which captain gets an option to decide whether to field first or bat first ... may of course not be fair, in the sense that the team which wins ... describe two methods of drawing a random number between 0 ...
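
    The record breaks off before the two methods are described; as a standard illustration, and not necessarily one of the article's methods, here is the classic linear congruential way of drawing a pseudo-random number between 0 and 1, with Park and Miller's "minimal standard" constants.

      # Linear congruential generator: x' = (a * x) % m, with u = x / m in (0, 1).
      M, A = 2**31 - 1, 16807  # Park-Miller "minimal standard" parameters

      def lcg(seed, count):
          x = seed
          for _ in range(count):
              x = (A * x) % M
              yield x / M  # a float approximately uniform in (0, 1)

      print(list(lcg(seed=42, count=3)))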

  9. Computational biology

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Computation via biological devices has been the subject of close scrutiny since von Neumann’s early work some 60 years ago. In spite of the many relevant works in this field, the notion of programming biological devices seems to be, at best, ill-defined. While many devices are claimed or proved t...

  10. Quantum Computation

    Indian Academy of Sciences (India)

    Quantum Computation - Particle and Wave Aspects of Algorithms. Apoorva Patel. General Article, Resonance – Journal of Science Education, Volume 16, Issue 9, September 2011, pp 821-835.

  11. Cloud computing.

    Science.gov (United States)

    Wink, Diane M

    2012-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes how cloud computing can be used in nursing education.

  12. Computer Recreations.

    Science.gov (United States)

    Dewdney, A. K.

    1988-01-01

    Describes the creation of the computer program "BOUNCE," designed to simulate a weighted piston coming into equilibrium with a cloud of bouncing balls. The model follows the ideal gas law. Utilizes the critical event technique to create the model. Discusses another program, "BOOM," which simulates a chain reaction. (CW)

  13. [Grid computing

    CERN Multimedia

    Wolinsky, H

    2003-01-01

    "Turn on a water spigot, and it's like tapping a bottomless barrel of water. Ditto for electricity: Flip the switch, and the supply is endless. But computing is another matter. Even with the Internet revolution enabling us to connect in new ways, we are still limited to self-contained systems running locally stored software, limited by corporate, institutional and geographic boundaries" (1 page).

  14. Computational Finance

    DEFF Research Database (Denmark)

    Rasmussen, Lykke

    One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these derivatives have been determined using bump-and-revalue, but due to the increasing magnitude of these computations ...

  15. Optical Computing

    Indian Academy of Sciences (India)

    Optical computing technology is, in general, developing in two directions. One approach is ... current support in many places, with private companies as well as governments in several countries encouraging such research work. For example, much ... which enables more information to be carried and data to be processed.

  16. Brain-computer interface

    DEFF Research Database (Denmark)

    2014-01-01

    A computer-implemented method of providing an interface between a user and a processing unit, the method comprising: presenting one or more stimuli to a user, each stimulus varying at a respective stimulation frequency, each stimulation frequency being associated with a respective user-selectable input; receiving at least one signal indicative of brain activity of the user; and determining, from the received signal, which of the one or more stimuli the user attends to, and selecting the user-selectable input associated with the stimulation frequency of the determined stimulus as being a user
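
    The record describes a steady-state visually evoked potential (SSVEP) style of interface: each on-screen option flickers at its own frequency, and the attended option is inferred from the frequency content of the brain signal. A toy sketch of that last step, with a synthetic signal standing in for a real EEG recording and illustrative frequencies:

      import numpy as np

      FS = 250.0                    # sampling rate in Hz (typical for EEG)
      FREQS = [8.0, 10.0, 12.0]     # stimulation frequency per selectable input

      # Synthetic "EEG": the user attends the 10 Hz stimulus, buried in noise.
      rng = np.random.default_rng(0)
      t = np.arange(0, 4.0, 1.0 / FS)
      signal = np.sin(2 * np.pi * 10.0 * t) + 2.0 * rng.standard_normal(t.size)

      # Spectral power at each candidate stimulation frequency.
      spectrum = np.abs(np.fft.rfft(signal)) ** 2
      freqs = np.fft.rfftfreq(t.size, 1.0 / FS)
      power = [spectrum[np.argmin(np.abs(freqs - f))] for f in FREQS]

      # The selected input is the one whose frequency dominates the signal.
      print("user selected input", int(np.argmax(power)))  # 1 (the 10 Hz option)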

  17. Computer Data Punch Cards

    CERN Multimedia

    These cards are printed with minimal layout aids for the formatting of FORTRAN programs, plus extra guidelines every ten columns suggesting a generic tabular data layout. A punch card is a piece of stiff paper that can be used to contain digital information represented by the presence or absence of holes in predefined positions. Punched cards were used by specialized unit record machines, organized into semiautomatic data processing systems, for data input, output, and storage. Many early digital computers also used punched cards.

  18. Dosimetry in computed tomography

    International Nuclear Information System (INIS)

    Andisco, D.; Blanco, S.; Buzzia, A.E.

    2014-01-01

    Objective: The number of computed tomography (CT) studies performed each year in the world is growing exponentially, mainly due to the incorporation of multislice CT, which allows studies in a few seconds. But, despite the diagnostic benefit received by patients, radiation dose is a concern in the professional community, and it has to be reduced as much as is reasonably possible. This article describes the main dosimetric units used in CT, to make working with them in this practice easier, using the values provided by modern equipment and internationally known reference levels. (authors) [es]

  19. Exercises in molecular computing.

    Science.gov (United States)

    Stojanovic, Milan N; Stefanovic, Darko; Rudchenko, Sergei

    2014-06-17

    CONSPECTUS: The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word "computer" now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem-loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is
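
    Gates of this kind compose into small circuits; the deoxyribozyme half-adder this group reported is the canonical example, with an AND gate for the carry and two ANDNOT gates cleaving a common substrate to give XOR for the sum. A toy Boolean rendering of that composition (pure software, no chemistry; names are illustrative):

      # Each gate maps presence/absence of input oligonucleotides to cleavage
      # activity (1 = substrate cleaved, i.e. a fluorescent signal).
      def AND(a, b):     return int(a and b)
      def ANDNOT(a, b):  return int(a and not b)   # second stem-loop is inhibitory

      def half_adder(a, b):
          # Two ANDNOT gates sharing one substrate act as an implicit OR (XOR).
          return AND(a, b), (ANDNOT(a, b) or ANDNOT(b, a))  # carry, sum

      for a in (0, 1):
          for b in (0, 1):
              carry, s = half_adder(a, b)
              print(f"inputs {a},{b} -> sum {s} carry {carry}")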

  20. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper will discuss five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code will be presented, a sample given, followed by a discussion of the present status and future development plans

  1. Using the Computer in Special Vocational Programs. Inservice Activities.

    Science.gov (United States)

    Lane, Kenneth; Ward, Raymond

    This inservice manual is intended to assist vocational education teachers in using the techniques of computer-assisted instruction in special vocational education programs. Addressed in the individual units are the following topics: the basic principles of computer-assisted instruction (TRS-80 computers and typing on a computer keyboard); money…

  2. Computable Frames in Computable Banach Spaces

    Directory of Open Access Journals (Sweden)

    S.K. Kaushik

    2016-06-01

    Full Text Available We develop some parts of the frame theory in Banach spaces from the point of view of Computable Analysis. We define computable M-basis and use it to construct a computable Banach space of scalar valued sequences. Computable Xd frames and computable Banach frames are also defined and computable versions of sufficient conditions for their existence are obtained.

  3. Algebraic computing

    International Nuclear Information System (INIS)

    MacCallum, M.A.H.

    1990-01-01

    The implementation of a new computer algebra system is time consuming: designers of general purpose algebra systems usually say it takes about 50 man-years to create a mature and fully functional system. Hence the range of available systems and their capabilities changes little between one general relativity meeting and the next, despite which there have been significant changes in the period since the last report. The introductory remarks aim to give a brief survey of the capabilities of the principal available systems and to highlight one or two trends. A reference to the most recent full survey of computer algebra in relativity is given, together with brief descriptions of Maple, REDUCE, SHEEP and other applications. (author)

  4. Computational Controversy

    OpenAIRE

    Timmermans, Benjamin; Kuhn, Tobias; Beelen, Kaspar; Aroyo, Lora

    2017-01-01

    Climate change, vaccination, abortion, Trump: Many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have appeared, using new data sources such as Wikipedia, which help us now better understand these phenomena. However, compared to what social sciences have discovered about such debates, the existing computati...

  5. Computed tomography

    International Nuclear Information System (INIS)

    Andre, M.; Resnick, D.

    1988-01-01

    Computed tomography (CT) has matured into a reliable and prominent tool for study of the musculoskeletal system. When it was introduced in 1973, it was unique in many ways and posed a challenge to interpretation. It is in these unique features, however, that its advantages lie in comparison with conventional techniques. These advantages will be described in a spectrum of important applications in orthopedics and rheumatology

  6. Computed radiography

    International Nuclear Information System (INIS)

    Pupchek, G.

    2004-01-01

    Computed radiography (CR) is an image acquisition process that is used to create digital, 2-dimensional radiographs. CR employs a photostimulable phosphor-based imaging plate, replacing the standard x-ray film and intensifying screen combination. Conventional radiographic exposure equipment is used with no modification required to the existing system. CR can transform an analog x-ray department into a digital one and eliminates the need for chemicals, water, darkrooms and film processor headaches. (author)

  7. Computational universes

    International Nuclear Information System (INIS)

    Svozil, Karl

    2005-01-01

    Suspicions that the world might be some sort of a machine or algorithm existing 'in the mind' of some symbolic number cruncher have lingered from antiquity. Although popular at times, the most radical forms of this idea never reached mainstream. Modern developments in physics and computer science have lent support to the thesis, but empirical evidence is needed before it can begin to replace our contemporary world view

  8. Intelligent multi-unit disk controller

    International Nuclear Information System (INIS)

    Poirot, Lucien

    1982-01-01

    This controller has been designed as a link between a 16-bit minicomputer and two types of disk unit interfaces: the SMD interface and an equivalent to the DRI unit interface. Four units of each type can be handled by the controller. A bit-slice microprocessor controls the interface with the disk units. The maximum exchange rate is 8 megabits per second. A CRC feature has been provided for error detection. A 16-bit microprocessor implements the interface to the computer, handling head positioning, the management of bad tracks, and the supervision of each transfer. An internal buffer memory allows an asynchronous dialogue with the computer. The implementation of the controller makes adaptation to disk units of various types easy, and though it was initially intended for a minicomputer of the MITRA type, its microprocessor-based design makes it suited to any minicomputer. (author) [fr]

  9. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
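
    The idea is that chaotic maps amplify tiny numerical discrepancies exponentially, so a healthy unit and a faulty unit that start from the same seed diverge quickly and visibly. A minimal single-machine sketch of that principle using the logistic map; the patent's actual thread and interconnect machinery is not reproduced here, and the fault magnitude is illustrative.

      # Logistic map x' = r*x*(1-x) in the chaotic regime (r = 4).
      def trajectory(x, steps, fault=0.0):
          out = []
          for _ in range(steps):
              x = 4.0 * x * (1.0 - x) + fault  # `fault` models a tiny hardware error
              out.append(x)
          return out

      healthy = trajectory(0.123456789, steps=60)
      faulty = trajectory(0.123456789, steps=60, fault=1e-12)

      # Comparing the two signal trajectories: chaos blows the 1e-12
      # perturbation up to order 1 within ~40 steps, so the fault is easy to flag.
      divergence = max(abs(a - b) for a, b in zip(healthy, faulty))
      print("fault detected:", divergence > 1e-3)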

  10. Management of planned unit outages

    International Nuclear Information System (INIS)

    Brune, W.

    1984-01-01

    Management of planned unit outages at the Bruno Leuschner Nuclear Power Plant is based on the experience gained with Soviet PWR units of the WWER type over a period of more than 50 reactor-years. For PWR units, planned outages concentrate almost exclusively on annual refuellings and major maintenance of the power plant facilities involved. Planning of such major maintenance work is based on a standardized basic network plan and a catalogue of standardized maintenance and inspection measures. From these, an overall maintenance schedule of the unit and partial process plans of the individual main components are derived (manually or by computer) and, in the temporal integration of major maintenance at every unit, fixed starting times and durations are determined. More than 75% of the maintenance work at the Bruno Leuschner Nuclear Power Plant is carried out by the plant's own maintenance personnel. Large-scale maintenance of every unit is controlled by a special project head. He is assisted by commissioners, each of whom is responsible for his own respective item. A daily control report is made. The organizational centre is a central office which works in shifts around the clock. All maintenance orders and reports of completion pass through this office; thus, the overall maintenance schedule can be corrected daily. To enforce the proposed operational strategy, suitable accompanying technical measures are required with respect to effective facility monitoring and technical diagnosis, purposeful improvement of particularly sensitive components and an increase in the effectiveness of maintenance work by special technologies and devices. (author)

  11. Computer Modeling of Platinum Reforming Reactors | Momoh ...

    African Journals Online (AJOL)

    This paper, instead of using a theoretical approach, has considered a computer model as a means of assessing the reformate composition for three-stage fixed-bed reactors in a platforming unit. This is done by identifying the many possible hydrocarbon transformation reactions that are peculiar to the process unit, identifying the ...

  12. Computer-Aided Instruction in Automated Instrumentation.

    Science.gov (United States)

    Stephenson, David T.

    1986-01-01

    Discusses functions of automated instrumentation systems, i.e., systems which combine electrical measuring instruments and a controlling computer to measure responses of a unit under test. The computer-assisted tutorial then described is programmed for use on such a system--a modern microwave spectrum analyzer--to introduce engineering students to…

  13. Online evaluation of a commercial video image analysis system (Computer Vision System) to predict beef carcass red meat yield and for augmenting the assignment of USDA yield grades. United States Department of Agriculture.

    Science.gov (United States)

    Cannell, R C; Belk, K E; Tatum, J D; Wise, J W; Chapman, P L; Scanga, J A; Smith, G C

    2002-05-01

    Objective quantification of differences in wholesale cut yields of beef carcasses at plant chain speeds is important for the application of value-based marketing. This study was conducted to evaluate the ability of a commercial video image analysis system, the Computer Vision System (CVS) to 1) predict commercially fabricated beef subprimal yield and 2) augment USDA yield grading, in order to improve accuracy of grade assessment. The CVS was evaluated as a fully installed production system, operating on a full-time basis at chain speeds. Steer and heifer carcasses (n = 296) were evaluated using CVS, as well as by USDA expert and online graders, before the fabrication of carcasses into industry-standard subprimal cuts. Expert yield grade (YG), online YG, CVS estimated carcass yield, and CVS measured ribeye area in conjunction with expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, hot carcass weight) accounted for 67, 39, 64, and 65% of the observed variation in fabricated yields of closely trimmed subprimals. The dual component CVS predicted wholesale cut yields more accurately than current online yield grading, and, in an augmentation system, CVS ribeye measurement replaced estimated ribeye area in determination of USDA yield grade, and the accuracy of cutability prediction was improved, under packing plant conditions and speeds, to a level close to that of expert graders applying grades at a comfortable rate of speed offline.

  14. Customizable computing

    CERN Document Server

    Chen, Yu-Ting; Gill, Michael; Reinman, Glenn; Xiao, Bingjun

    2015-01-01

    Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction of the recent developments on energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory

  15. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened u
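
    As a reference point for what gets parallelized, here is a deliberately small CPU-only evolutionary algorithm minimizing a standard test function; on a GPU, the fitness evaluations and variation operators would run across thousands of threads. All parameters are illustrative, not taken from the book.

      import numpy as np

      rng = np.random.default_rng(0)
      POP, DIM, GENS = 64, 8, 200

      def fitness(x):                  # sphere function: minimum 0 at the origin
          return np.sum(x * x, axis=1)

      pop = rng.uniform(-5, 5, size=(POP, DIM))
      for _ in range(GENS):
          f = fitness(pop)
          parents = pop[np.argsort(f)[:POP // 2]]                 # truncation selection
          children = parents + rng.normal(0, 0.1, parents.shape)  # Gaussian mutation
          pop = np.vstack([parents, children])

      print("best fitness:", fitness(pop).min())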

  16. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  17. Retrofitting of NPP Computer systems

    International Nuclear Information System (INIS)

    Pettersen, G.

    1994-01-01

    Retrofitting of nuclear power plant control rooms is a continuing process for most utilities. This involves introducing and/or extending computer-based solutions for surveillance and control, as well as improving the human-computer interface. The paper describes typical requirements when retrofitting NPP process computer systems, and focuses on the activities of the Institutt for energiteknikk, OECD Halden Reactor Project, with respect to such retrofitting, using examples from actual delivery projects. In particular, a project carried out for Forsmarksverket in Sweden comprising an upgrade of the operator system in the control rooms of units 1 and 2 is described. As many of the problems of retrofitting NPP process computer systems are similar to those in other kinds of process industries, an example from a non-nuclear application area is also given

  18. Computer-aided cleanup

    International Nuclear Information System (INIS)

    Williams, J.; Jones, B.

    1994-01-01

    In late 1992, the remedial investigation of operable unit 2 at the Department of Energy (DOE) Superfund site in Fernald, Ohio was in trouble. Despite years of effort--including an EPA-approved field-investigation work plan, 123 soil borings, 51 ground-water-monitoring wells, analysis of more than 650 soil and ground-water samples, and preparation of a draft remedial-investigation (RI) report--it was not possible to conclude if contaminated material in the unit was related to ground-water contamination previously detected beneath and beyond the site boundary. Compounding the problem, the schedule for the RI, feasibility study and record of decision for operable unit 2 was governed by a DOE-EPA consent agreement stipulating penalties of up to $10,000 per week for not meeting scheduled milestones--and time was running out. An advanced three-dimensional computer model confirmed that radioactive wastes dumped at the Fernald, Ohio Superfund site had contaminated ground water, after years of previous testing has been inconclusive. The system is now being used to aid feasibility and design work on the more-than-$1 billion remediation project

  19. Computed tomography

    International Nuclear Information System (INIS)

    Wells, P.; Davis, J.; Morgan, M.

    1994-01-01

    X-ray or gamma-ray transmission computed tomography (CT) is a powerful non-destructive evaluation (NDE) technique that produces two-dimensional cross-sectional images of an object without the need to physically section it. CT is also known by the acronym CAT, for computerised axial tomography. This review article presents a brief historical perspective on CT, its current status and the underlying physics. The mathematical fundamentals of computed tomography are developed for the simplest transmission CT modality. A description of CT scanner instrumentation is provided with an emphasis on radiation sources and systems. Examples of CT images are shown indicating the range of materials that can be scanned and the spatial and contrast resolutions that may be achieved. Attention is also given to the occurrence, interpretation and minimisation of various image artefacts that may arise. A final brief section is devoted to the principles and potential of a range of more recently developed tomographic modalities including diffraction CT, positron emission CT and seismic tomography. 57 refs., 2 tabs., 14 figs

  20. 24 CFR 990.155 - Addition and deletion of units.

    Science.gov (United States)

    2010-04-01

    Title 24, Housing and Urban Development, Vol. 4, revised as of 2010-04-01. Regulations Relating to Housing and Urban Development; Computation of Eligible Unit Months. § 990.155 Addition and deletion of units. (a) Changes in public housing ...

  1. 24 CFR 990.145 - Dwelling units with approved vacancies.

    Science.gov (United States)

    2010-04-01

    Title 24, Housing and Urban Development, Vol. 4, revised as of 2010-04-01. Regulations Relating to Housing and Urban Development; Computation of Eligible Unit Months. § 990.145 Dwelling units with approved vacancies. (a) A PHA is eligible to ...

  2. 6th July 2010 - United Kingdom Science and Technology Facilities Council W. Whitehorn signing the guest book with Head of International relations F. Pauss, visiting the Computing Centre with Information Technology Department Head Deputy D. Foster, the LHC superconducting magnet test hall with Technology Department P. Strubin,the Centre Control Centre with Operation Group Leader M. Lamont and the CLIC/CTF3 facility with Project Leader J.-P. Delahaye.

    CERN Multimedia

    Teams : M. Brice, JC Gadmer

    2010-01-01

    6th July 2010 - United Kingdom Science and Technology Facilities Council W. Whitehorn signing the guest book with Head of International relations F. Pauss, visiting the Computing Centre with Information Technology Department Head Deputy D. Foster, the LHC superconducting magnet test hall with Technology Department P. Strubin,the Centre Control Centre with Operation Group Leader M. Lamont and the CLIC/CTF3 facility with Project Leader J.-P. Delahaye.

  3. Computing Services and Assured Computing

    Science.gov (United States)

    2006-05-01

    fighters’ ability to execute the mission.” We run IT systems that: provide medical care • pay the warfighters • manage maintenance • ... users • 1,400 applications • 18 facilities • 180 software vendors • 18,000+ copies of executive software products • virtually every type of mainframe and ...

  4. Reversible arithmetic logic unit for quantum arithmetic

    DEFF Research Database (Denmark)

    Thomsen, Michael Kirkedal; Glück, Robert; Axelsen, Holger Bock

    2010-01-01

    This communication presents the complete design of a reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The presented ALU is garbage-free and uses reversible updates to combine the standard reversible arithmetic and logical operations in one unit. Combined with a suitable control unit, the ALU permits the construction of an r-Turing complete computing device. The garbage-free ALU developed in this communication requires only 6n elementary reversible gates for five basic arithmetic-logical operations on two n-bit operands and does not use ancillae. This remarkably low resource consumption was achieved by generalizing the V-shape design first introduced for quantum ripple-carry adders and nesting multiple V-shapes in a novel integrated design. This communication shows that the realization of an efficient reversible ...
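
    The gate-level V-shape construction is not reproduced in the record; the sketch below only illustrates the underlying notion of a reversible update, where every operation on a register pair has an exact inverse, so no garbage information is produced. The operation set and the 8-bit width are illustrative.

      MASK = 0xFF  # model n = 8 bit registers

      # Reversible updates: (a, b) -> (a, f(a, b)), each with an exact inverse.
      def add(a, b):  return a, (b + a) & MASK   # b += a
      def sub(a, b):  return a, (b - a) & MASK   # inverse of add
      def xor(a, b):  return a, b ^ a            # its own inverse

      a, b = 0x2A, 0x07
      a, b = add(a, b)         # run forward...
      a, b = xor(a, b)
      a, b = xor(a, b)         # ...and exactly undo, step by step
      a, b = sub(a, b)
      assert (a, b) == (0x2A, 0x07)  # no information was lost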

  5. Computational neuroscience

    CERN Document Server

    Blackwell, Kim L

    2014-01-01

    Progress in Molecular Biology and Translational Science provides a forum for discussion of new discoveries, approaches, and ideas in molecular biology. It contains contributions from leaders in their fields and abundant references. This volume brings together different aspects of, and approaches to, molecular and multi-scale modeling, with applications to a diverse range of neurological diseases. Mathematical and computational modeling offers a powerful approach for examining the interaction between molecular pathways and ionic channels in producing neuron electrical activity. It is well accepted that non-linear interactions among diverse ionic channels can produce unexpected neuron behavior and hinder a deep understanding of how ion channel mutations bring about abnormal behavior and disease. Interactions with the diverse signaling pathways activated by G protein coupled receptors or calcium influx adds an additional level of complexity. Modeling is an approach to integrate myriad data sources into a cohesiv...

  6. Social Computing

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The past decade has witnessed a momentous transformation in the way people interact with each other. Content is now co-produced, shared, classified, and rated by millions of people, while attention has become the ephemeral and valuable resource that everyone seeks to acquire. This talk will describe how social attention determines the production and consumption of content within both the scientific community and social media, how its dynamics can be used to predict the future and the role that social media plays in setting the public agenda. About the speaker Bernardo Huberman is a Senior HP Fellow and Director of the Social Computing Lab at Hewlett Packard Laboratories. He received his Ph.D. in Physics from the University of Pennsylvania, and is currently a Consulting Professor in the Department of Applied Physics at Stanford University. He originally worked in condensed matter physics, ranging from superionic conductors to two-dimensional superfluids, and made contributions to the theory of critical p...

  7. computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. Also we propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
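
    The paper's dynamic-systems model is not reproduced in the record; below is the standard token bucket policing rule it builds on, in a minimal discrete-event form. The rate and capacity values are arbitrary.

      class TokenBucket:
          """Conforming packets consume tokens; tokens refill at a fixed rate."""
          def __init__(self, rate, capacity):
              self.rate = rate          # tokens added per second
              self.capacity = capacity  # bucket depth (burst allowance)
              self.tokens = capacity
              self.last = 0.0

          def conforms(self, t, packet_size):
              # Refill for the elapsed time, capped at the bucket depth.
              self.tokens = min(self.capacity,
                                self.tokens + (t - self.last) * self.rate)
              self.last = t
              if packet_size <= self.tokens:
                  self.tokens -= packet_size
                  return True           # forward the packet
              return False              # police it: drop or mark

      tb = TokenBucket(rate=1000.0, capacity=1500.0)  # bytes/s, bytes
      for t, size in [(0.0, 1200), (0.1, 1200), (2.0, 1200)]:
          print(t, tb.conforms(t, size))  # True, False, True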

  8. Computer Tree

    Directory of Open Access Journals (Sweden)

    Onur AĞAOĞLU

    2014-12-01

    Full Text Available It is crucial that gifted and talented students be supported by educational methods that match their interests and skills. The science and arts centres (gifted centres) provide the Supportive Education Program for these students with an interdisciplinary perspective. In line with the program, an ICT lesson entitled “Computer Tree” serves to identify learner readiness levels and define the basic conceptual framework. A language teacher also contributes to the process, since it caters for the creative function of the basic linguistic skills. The teaching technique is applied at the 9-11 age level. The lesson introduces an evaluation process covering the basic information, skills, and interests of the target group. Furthermore, it includes an observation process by way of peer assessment. The lesson is considered a good sample of planning for any subject, for the unpredicted convergence of visual and technical abilities with linguistic abilities.

  9. Computed tomography

    International Nuclear Information System (INIS)

    Boyd, D.P.

    1989-01-01

    This paper reports on computed tomographic (CT) scanning, which has improved computer-assisted imaging modalities for radiologic diagnosis. The advantage of this modality is its ability to image thin cross-sectional planes of the body, thus uncovering density information in three dimensions without tissue superposition problems. Because this enables vastly superior imaging of soft tissues in the brain and body, CT scanning was immediately successful and continues to grow in importance as improvements are made in speed, resolution, and cost efficiency. CT scanners are used for general purposes, and the more advanced machines are generally preferred in large hospitals, where volume and variety of usage justify the cost. For imaging in the abdomen, a scanner with a rapid speed is preferred because peristalsis, involuntary motion of the diaphragm, and even cardiac motion are present and can significantly degrade image quality. When contrast media is used in imaging to demonstrate ... scanner, immediate review of images, and multiformat hardcopy production. A second console is reserved for the radiologist to read images and perform the several types of image analysis that are available. Since CT images contain quantitative information in terms of density values and contours of organs, quantitation of volumes, areas, and masses is possible. This is accomplished with region-of-interest methods, which involve the electronic outlining of the selected region on the television display monitor with a trackball-controlled cursor. In addition, various image-processing options, such as edge enhancement (for viewing fine details of edges) or smoothing filters (for enhancing the detectability of low-contrast lesions), are useful tools
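
    Region-of-interest statistics of this kind reduce to a masked average over CT numbers; a minimal sketch on a synthetic image, where the array size, the circular ROI, and the pixel spacing are all arbitrary stand-ins:

      import numpy as np

      # Synthetic 128x128 slice of CT numbers (Hounsfield units).
      img = np.random.default_rng(0).normal(40, 10, size=(128, 128))

      # Circular region of interest: the electronic outline around a structure.
      yy, xx = np.mgrid[:128, :128]
      roi = (yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2

      pixel_area_mm2 = 0.5 * 0.5           # from the scanner's pixel spacing
      print("mean HU :", img[roi].mean())  # average density inside the ROI
      print("std HU  :", img[roi].std())
      print("area mm2:", roi.sum() * pixel_area_mm2)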

  10. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to have computation carried out on a great number of distributed computers, rather than on local computers ...

  11. Design Anthropology, Emerging Technologies and Alternative Computational Futures

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte

    Emerging technologies are providing a new field for design anthropological inquiry that unites experiences, imaginaries and materialities in complex ways and demands new approaches to developing sustainable computational futures.

  12. NASA work unit system file maintenance manual

    Science.gov (United States)

    1972-01-01

    The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles on research efforts and statistics on fund distribution. The file maintenance operator can add, delete and change records at a remote terminal or can submit punched cards to the computer room for batch update. The system is designed for file maintenance by a person with little or no knowledge of data processing techniques.

  13. Computer Refurbishment

    International Nuclear Information System (INIS)

    Ichiyen, Norman; Chan, Dominic; Thompson, Paul

    2004-01-01

    The major activity for the 18-month refurbishment outage at the Point Lepreau Generating Station is the replacement of all 380 fuel channel and calandria tube assemblies and the lower portion of connecting feeder pipes. New Brunswick Power would also take advantage of this outage to conduct a number of repairs, replacements, inspections and upgrades (such as rewinding or replacing the generator, replacement of shutdown-system trip computers, replacement of certain valves and expansion joints, inspection of systems not normally accessible, etc.). This would allow for an additional 25 to 30 years of operation. Among the systems to be replaced are the PDCs for both shutdown systems. Assessments have been completed for both the SDS1 and SDS2 PDCs, and it has been decided to replace the SDS2 PDCs with the same hardware and software approach that has been used successfully for the Wolsong 2, 3, and 4 and the Qinshan 1 and 2 SDS2 PDCs. For SDS1, it has been decided to use the same software development methodology that was used successfully for Wolsong and Qinshan, called the I A, and to use a new hardware platform in order to ensure successful operation over the 25-30 year station operating life. The selected supplier is Triconex, which uses a triple modular redundant architecture that will enhance the robustness and fault tolerance of the design with respect to equipment failures.
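
    As a hedged aside (not from the record), the triple modular redundant architecture mentioned above can be illustrated with a two-out-of-three majority voter: three independent channels compute the same trip decision, and the voted output tolerates any single faulty channel. A minimal Python sketch; the channel logic and threshold are invented for illustration:

      def majority_vote(a, b, c):
          """Two-out-of-three majority of three boolean channel outputs."""
          return (a and b) or (a and c) or (b and c)

      def trip_decision(reading, threshold=11.5):
          """Hypothetical trip test performed by one redundant channel."""
          return reading > threshold

      # Three channels sample the same process variable; one has failed low,
      # but the voted result still calls for a trip.
      readings = [12.1, 12.0, 0.0]
      votes = [trip_decision(r) for r in readings]
      print(majority_vote(*votes))  # True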

  14. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Computed tomography (CT) of the sinuses ... What is CT (Computed Tomography) of the Sinuses? Computed tomography, more commonly known ...

  15. Illustrated computer tomography

    International Nuclear Information System (INIS)

    Takahashi, S.

    1983-01-01

    This book provides the following information: basic aspects of computed tomography; an atlas of computed tomography of the normal adult; clinical applications of computed tomography; and radiotherapy planning and computed tomography.

  16. Computer-assisted optimization of chest fluoroscopy

    International Nuclear Information System (INIS)

    Korolyuk, I.P.; Filippova, N.V.; Kirillov, L.P.; Momsenko, S.F.

    1987-01-01

    The main trends in the use of computers for the optimization of chest fluorography among employees and workers of a large industrial enterprise are considered. The following directions were determined: automated sorting of fluorograms, formalization of X-ray signs in describing fluorograms, and organization of a special system of fluorographic data management. Four levels of algorithms to solve the problems of fluorography were considered: 1) shops, personnel department, etc.; 2) an automated center for mass screening and a medical unit; 3) a computer center; and 4) a planning and management service. The results of computer use over a 3-year period were analyzed, and the efficacy of computer use was shown.

  17. Cloud Computing Fundamentals

    Science.gov (United States)

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  18. Unconventional Quantum Computing Devices

    OpenAIRE

    Lloyd, Seth

    2000-01-01

    This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than `conventional' quantum computers.

  19. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr); Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala ...

  20. Hispanic Women Overcoming Deterrents to Computer Science: A Phenomenological Study

    Science.gov (United States)

    Herling, Lourdes

    2011-01-01

    The products of computer science are important to all aspects of society and are tools in the solution of the world's problems. It is, therefore, troubling that the United States faces a shortage in qualified graduates in computer science. The number of women and minorities in computer science is significantly lower than the percentage of the…

  1. SI units in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Iyer, P S [Bhabha Atomic Research Centre, Bombay (India). Div. of Radiation Protection

    1978-11-01

    The proposal of the International Commission on Radiation Units and Measurements that the special units of radiation and radioactivity (roentgen, rad, rem and curie) be replaced by the International System (SI) of units has been accepted by international bodies. This paper reviews the reasons for introducing the new units and their features. The relation between the special units and the corresponding SI units is discussed with examples. In spite of anticipated difficulties, the commission recommends a smooth and efficient changeover to the SI units within ten years.
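
    For orientation alongside this record (standard factors, not taken from the paper): the special units map onto SI units by fixed constants, e.g. 1 rad = 0.01 Gy, 1 rem = 0.01 Sv, 1 Ci = 3.7e10 Bq, and 1 R = 2.58e-4 C/kg. A small Python sketch:

      # Fixed conversion factors from the special radiation units to SI units.
      RAD_TO_GRAY = 0.01               # absorbed dose: 1 rad = 0.01 Gy
      REM_TO_SIEVERT = 0.01            # dose equivalent: 1 rem = 0.01 Sv
      CURIE_TO_BECQUEREL = 3.7e10      # activity: 1 Ci = 3.7e10 Bq
      ROENTGEN_TO_C_PER_KG = 2.58e-4   # exposure: 1 R = 2.58e-4 C/kg

      def rad_to_gray(rad):
          return rad * RAD_TO_GRAY

      def curie_to_becquerel(ci):
          return ci * CURIE_TO_BECQUEREL

      print(rad_to_gray(350.0))        # 3.5 Gy
      print(curie_to_becquerel(1e-3))  # 3.7e7 Bq (1 mCi)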

  2. Computer finds ore

    Science.gov (United States)

    Bell, Peter M.

    Artificial intelligence techniques are being used for the first time to evaluate geophysical, geochemical, and geologic data and theory in order to locate ore deposits. After several years of development, an intelligent computer code has been formulated and applied to the Mount Tolman area in Washington state. In a project funded by the United States Geological Survey and the National Science Foundation a set of computer programs, under the general title Prospector, was used successfully to locate a previously unknown ore-grade porphyry molybdenum deposit in the vicinity of Mount Tolman (Science, Sept. 3, 1982).The general area of the deposit had been known to contain exposures of porphyry mineralization. Between 1964 and 1978, exploration surveys had been run by the Bear Creek Mining Company, and later exploration was done in the area by the Amax Corporation. Some of the geophysical data and geochemical and other prospecting surveys were incorporated into the programs, and mine exploration specialists contributed to a set of rules for Prospector. The rules were encoded as ‘inference networks’ to form the ‘expert system’ on which the artificial intelligence codes were based. The molybdenum ore deposit discovered by the test is large, located subsurface, and has an areal extent of more than 18 km2.

  3. Optical programmable Boolean logic unit.

    Science.gov (United States)

    Chattopadhyay, Tanay

    2011-11-10

    Logic units are the building blocks of many important computational operations like arithmetic, multiplexer-demultiplexer, radix conversion, parity checker cum generator, etc. Multifunctional logic operation is therefore very much essential. Here a programmable Boolean logic unit is proposed that can perform 16 Boolean logical operations from a single optical input according to the programming input, without changing the circuit design. This circuit has two outputs, one complementary to the other; hence no loss of data can occur. The circuit is basically designed around a 2×2 polarization-independent optical crossbar switch. The performance of the proposed circuit has been verified through numerical simulations. The binary logical states (0,1) are represented by the absence of light (null) and the presence of light, respectively.
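
    As an illustrative software analogue (not the optical implementation proposed in the paper), a programmable unit realizing all 16 two-input Boolean functions can treat a 4-bit program word as a truth table; the complementary output described above is then simply the negation:

      def pblu(a, b, program):
          """Programmable Boolean logic unit (software analogue).

          a, b    : input bits (0 or 1)
          program : 4-bit integer whose bit (2*a + b) is the output, so each
                    value 0..15 selects one of the 16 two-input functions.
          Returns (output, complementary_output).
          """
          out = (program >> (2 * a + b)) & 1
          return out, out ^ 1

      AND, OR, XOR = 0b1000, 0b1110, 0b0110  # program words for familiar gates
      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "->", pblu(a, b, XOR))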

  4. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog; Jørgensen, John Bagterp; Dammann, Bernd

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires re... so as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards.

  5. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  6. Unit 037 - Fundamentals of Data Storage

    OpenAIRE

    037, CC in GIScience; Jacobson, Carol R.

    2000-01-01

    This unit introduces the concepts and terms needed to understand storage of GIS data in a computer system, including the weaknesses of a discrete data model for representing the real world; an overview of data storage types and terminology; and a description of data storage issues.

  7. Fishing for meaningful units in connected speech

    DEFF Research Database (Denmark)

    Henrichsen, Peter Juel; Christiansen, Thomas Ulrich

    2009-01-01

    In many branches of spoken language analysis including ASR, the set of smallest meaningful units of speech is taken to coincide with the set of phones or phonemes. However, fishing for phones is difficult, error-prone, and computationally expensive. We present an experiment, based on machine...

  8. SWITCHING POWER FAN CONTROL OF COMPUTER

    Directory of Open Access Journals (Sweden)

    Oleksandr I. Popovskyi

    2010-10-01

    The relevance of the material presented in the article is due to the extensive use of high-performance computers to create modern information systems, including those of the NAPS of Ukraine. Most computers in the NAPS of Ukraine run on Intel Pentium processors at speeds from 600 MHz to 3 GHz and release a lot of heat, which requires the installation of 2-3 additional fans in the system unit. The fans always work at full power, which leads to rapid wear and a high noise level (up to 50 dB). In order to meet ergonomic requirements, it is proposed to install in the computer system unit an additional fan control unit that allows independent control of each fan. The solution has been applied in the creation of information systems for planning research in the National Academy of Pedagogical Sciences of Ukraine on an Internet basis.
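
    To illustrate the idea of independent fan control in software (a sketch under invented parameters, not the article's actual control unit), each fan can be driven by its own temperature-to-duty-cycle curve so it runs only as fast as its zone requires:

      def fan_duty(temp_c, t_low=35.0, t_high=70.0, min_duty=20, max_duty=100):
          """Map a zone temperature (deg C) to a PWM duty cycle in percent.

          Below t_low the fan idles at min_duty; above t_high it runs flat
          out; in between the duty rises linearly. Numbers are illustrative.
          """
          if temp_c <= t_low:
              return min_duty
          if temp_c >= t_high:
              return max_duty
          frac = (temp_c - t_low) / (t_high - t_low)
          return round(min_duty + frac * (max_duty - min_duty))

      # Two independently controlled fans over different zones of the case.
      zones = {"cpu": 58.0, "psu": 41.0}
      for name, temp in zones.items():
          print(name, fan_duty(temp), "% duty")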

  9. Amorphous computing in the presence of stochastic disturbances.

    Science.gov (United States)

    Chu, Dominique; Barnes, David J; Perkins, Samuel

    2014-11-01

    Amorphous computing is a non-standard computing paradigm that relies on massively parallel execution of computer code by a large number of small, spatially distributed, weakly interacting processing units. Over the last decade or so, amorphous computing has attracted a great deal of interest both as an alternative model of computing and as an inspiration to understand developmental biology. A number of algorithms have been developed that can take advantage of the massive parallelism of this computing paradigm to solve specific problems. One of the interesting properties of amorphous computers is that they are robust with respect to the loss of individual processing units, in the sense that a removal of some of them should not impact on the computation as a whole. However, much less understood is to what extent amorphous computers are robust with respect to minor disturbances to the individual processing units, such as random motion or occasional faulty computation short of total component failure. In this article we address this question. As an example problem we choose an algorithm to calculate a straight line between two points. Using this example, we find that amorphous computers are not in general robust with respect to Brownian motion and noise, but we find strategies that restore reliable computation even in their presence. We will argue that these strategies are generally applicable and not specific to the particular AC we consider, or even specific to electronic computers. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
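
    To make the example problem concrete (a generic sketch of the classic gradient approach, not the authors' exact algorithm), each endpoint floods a hop-count gradient through the network of processing units, and a unit joins the line if its two hop counts sum to roughly the endpoint-to-endpoint distance; noise and Brownian motion are omitted for brevity:

      import math
      import random
      from collections import deque

      random.seed(1)
      N, RADIUS = 2000, 0.05
      pts = [(random.random(), random.random()) for _ in range(N)]

      # Units communicate only with neighbors within a small radius.
      nbrs = [[] for _ in range(N)]
      for i in range(N):
          for j in range(i + 1, N):
              if math.dist(pts[i], pts[j]) < RADIUS:
                  nbrs[i].append(j)
                  nbrs[j].append(i)

      def gradient(src):
          """Hop-count gradient flooded outward from unit src."""
          d = {src: 0}
          q = deque([src])
          while q:
              u = q.popleft()
              for v in nbrs[u]:
                  if v not in d:
                      d[v] = d[u] + 1
                      q.append(v)
          return d

      a = min(range(N), key=lambda i: math.dist(pts[i], (0.1, 0.1)))
      b = min(range(N), key=lambda i: math.dist(pts[i], (0.9, 0.9)))
      da, db = gradient(a), gradient(b)
      # Units on near-shortest hop paths between the endpoints form the line.
      line = [i for i in da if i in db and da[i] + db[i] <= da[b] + 1]
      print(len(line), "of", N, "units joined the line")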

  10. Interface control scheme for computer high-speed interface unit

    Science.gov (United States)

    Ballard, B. K.

    1975-01-01

    Control scheme is general and performs for multiplexed and dedicated channels as well as for data-bus interfaces. Control comprises two 64-pin, dual in-line packages, each of which holds custom large-scale integrated array built with silicon-on-sapphire complementary metal-oxide semiconductor technology.

  11. INFORM: European survey of computers in intensive care units.

    Science.gov (United States)

    Ambroso, C; Bowes, C; Chambrin, M C; Gilhooly, K; Green, C; Kari, A; Logie, R; Marraro, G; Mereu, M; Rembold, P

    1992-01-01

    The aims of this study were (a) to survey and evaluate the impact of information technology applications in High Dependency Environments (HDEs) on organizational, psychological and cost-effectiveness factors, (b) to contribute information and design requirements to the other workpackages in the INFORM Project, and (c) to develop useful evaluation methodologies. The evaluation methodologies used were: questionnaires, case studies, objective findings (keystroke) and literature search and review. Six questionnaires were devised covering organizational impact, cost-benefit impact and perceived advantages and disadvantages of computerized systems in HDE (psychological impact). The general conclusion was that while existing systems have been generally well received, they are not yet designed in such a developed and integrated way as to yield their full potential. Greater user involvement in design and implementation and more emphasis on training emerged as strong requirements. Lack of reliability leading to parallel charting was a major problem with the existing systems. It proved difficult to assess cost effectiveness due to a lack of detailed accounting costs; however, it appeared that in the short term, computerisation in HDEs tended to increase costs. It is felt that through a better stock control and better decision making, costs may be reduced in the longer run and effectiveness increased; more detailed longitudinal studies appear to be needed on this subject.

  12. New trends in computational collective intelligence

    CERN Document Server

    Kim, Sang-Wook; Trawiński, Bogdan

    2015-01-01

    This book consists of 20 chapters in which the authors deal with different theoretical and practical aspects of new trends in Collective Computational Intelligence techniques. Computational Collective Intelligence methods and algorithms are among the current trending research topics in areas related to Artificial Intelligence, Soft Computing and Data Mining, among others. Computational Collective Intelligence is a rapidly growing field that is most often understood as an AI sub-field dealing with soft computing methods which enable group decisions to be made and knowledge to be processed among autonomous units acting in distributed environments. Web-based Systems, Social Networks, and Multi-Agent Systems very often need these tools for working out consistent knowledge states, resolving conflicts and making decisions. The chapters included in this volume cover a selection of topics and new trends in several domains related to Collective Computational Intelligence: Language and Knowledge Processing, Data Mining Methods an...

  13. Innovation of the computer system for the WWER-440 simulator

    International Nuclear Information System (INIS)

    Schrumpf, L.

    1988-01-01

    The configuration of the WWER-440 simulator computer system consists of four SMEP computers. The basic data processing unit consists of two interlinked SM 52/11.M1 computers with 1 MB of main memory. This part of the simulator's computer system controls the operation of the entire simulator, processes the programs for technology behavior simulation, the unit information system and other special systems, and guarantees program support and the operation of the instructor's console. An SM 52/11 computer with 256 kB of main memory is connected to each unit. It is used as a communication unit for data transmission using the DASIO 600 interface. Semigraphic color displays are based on the microprocessor modules of the SM 50/40 and SM 53/10 kit supplemented with a modified TESLA COLOR 110 ST TV receiver. (J.B.). 1 fig

  14. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  15. NPP Mochovce units 1 and 2 diagnostic systems

    International Nuclear Information System (INIS)

    Heidenreich, S.

    1997-01-01

    In this paper the diagnostic systems (leak detection monitoring, vibration monitoring, loose parts monitoring, fatigue monitoring) of NPP Mochovce units 1 and 2 are presented. All of the designed diagnostic systems are personal-computer-based systems.

  16. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation is a comprehensive coverage of the principles and aspects of computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, the development of processors and storage systems, and the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. It then covers the basic central processor functions, data storage and the organization of data by classification of computer files ...

  17. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the very important operations in many applications, so its efficient implementation is essential for overall application performance. This paper represents an effort to analyze and evaluate implementations of representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. The algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
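
    As a side note (a serial sketch, not the paper's CUDA code), radix sort is the one of the three algorithms that maps most naturally onto GPUs, because each pass is a stable counting sort whose histogram, prefix-sum and scatter steps are data-parallel:

      def lsd_radix_sort(keys, bits_per_pass=8, key_bits=32):
          """Least-significant-digit radix sort for non-negative integers."""
          buckets = 1 << bits_per_pass
          mask = buckets - 1
          for shift in range(0, key_bits, bits_per_pass):
              count = [0] * buckets
              for k in keys:                 # histogram (parallel on a GPU)
                  count[(k >> shift) & mask] += 1
              offsets, total = [0] * buckets, 0
              for d in range(buckets):       # exclusive prefix sum (scan)
                  offsets[d], total = total, total + count[d]
              out = [0] * len(keys)
              for k in keys:                 # stable scatter
                  d = (k >> shift) & mask
                  out[offsets[d]] = k
                  offsets[d] += 1
              keys = out
          return keys

      print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))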

  18. Optical reversible programmable Boolean logic unit.

    Science.gov (United States)

    Chattopadhyay, Tanay

    2012-07-20

    Computing with reversibility is the only way to avoid the dissipation of energy associated with bit erasure, so a reversible microprocessor is required for future computing. In this paper, a design for a simple all-optical reversible programmable processor is proposed using a polarizing beam splitter, liquid crystal phase spatial light modulators, a half-wave plate, and plane mirrors. This circuit can perform 16 logical operations according to three programming inputs, and the inputs can be easily recovered from the outputs. It is named the "reversible programmable Boolean logic unit (RPBLU)." The logic unit is the basic building block of many complex computational operations, hence the design is important in this sense. Two orthogonally polarized lights are defined here as the two logical states, respectively.
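
    To make the reversibility point concrete (a generic illustration, not the optical RPBLU itself), a reversible gate such as the Fredkin (controlled-swap) gate is a bijection on its inputs, so the inputs can always be recovered from the outputs; it is even its own inverse:

      def fredkin(c, x, y):
          """Fredkin (controlled-swap) gate: swap x and y when c is 1.
          Reversible and self-inverse, so no input information is erased."""
          return (c, y, x) if c else (c, x, y)

      triples = [(c, x, y) for c in (0, 1) for x in (0, 1) for y in (0, 1)]
      outputs = [fredkin(*t) for t in triples]
      assert len(set(outputs)) == len(triples)                 # bijection
      assert all(fredkin(*fredkin(*t)) == t for t in triples)  # self-inverse
      print("Fredkin gate is reversible:", sorted(outputs) == sorted(triples))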

  19. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  20. Democratizing Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Ryoo, Jean J.

    2015-01-01

    Computer science programs are too often identified with a narrow stratum of the student population, often white or Asian boys who have access to computers at home. But because computers play such a huge role in our world today, all students can benefit from the study of computer science and the opportunity to build skills related to computing. The…

  1. Computing at Stanford.

    Science.gov (United States)

    Feigenbaum, Edward A.; Nielsen, Norman R.

    1969-01-01

    This article provides a current status report on the computing and computer science activities at Stanford University, focusing on the Computer Science Department, the Stanford Computation Center, the recently established regional computing network, and the Institute for Mathematical Studies in the Social Sciences. Also considered are such topics…

  2. Dynamic computing random access memory

    International Nuclear Information System (INIS)

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-01-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. (paper)

  3. Developments in Remote Collaboration and Computation

    International Nuclear Information System (INIS)

    Burruss, J.R.; Abla, G.; Flanagan, S.; Keahey, K.; Leggett, T.; Ludesche, C.; McCune, D.; Papka, M.E.; Peng, Q.; Randerson, L.; Schissel, D.P.

    2005-01-01

    The National Fusion Collaboratory (NFC) is creating and deploying collaborative software tools to unite magnetic fusion research in the United States. In particular, the NFC is developing and deploying a national FES 'Grid' (FusionGrid) for secure sharing of computation, visualization, and data resources over the Internet. The goal of FusionGrid is to allow scientists at remote sites to participate as fully in experiments, machine design, and computational activities as if they were working on site thereby creating a unified virtual organization of the geographically dispersed U.S. fusion community

  4. Simulation of motor unit recruitment and microvascular unit perfusion: spatial considerations.

    Science.gov (United States)

    Fuglevand, A J; Segal, S S

    1997-10-01

    Muscle fiber activity is the principal stimulus for increasing capillary perfusion during exercise. The control elements of perfusion, i.e., microvascular units (MVUs), supply clusters of muscle fibers, whereas the control elements of contraction, i.e., motor units, are composed of fibers widely scattered throughout muscle. The purpose of this study was to examine how the discordant spatial domains of MVUs and motor units could influence the proportion of open capillaries (designated as perfusion) throughout a muscle cross section. A computer model simulated the locations of perfused MVUs in response to the activation of up to 100 motor units in a muscle with 40,000 fibers and a cross-sectional area of 100 mm2. The simulation increased contraction intensity by progressive recruitment of motor units. For each step of motor unit recruitment, the percentage of active fibers and the number of perfused MVUs were determined for several conditions: 1) motor unit fibers widely dispersed and motor unit territories randomly located (which approximates healthy human muscle), 2) regionalized motor unit territories, 3) reversed recruitment order of motor units, 4) densely clustered motor unit fibers, and 5) increased size but decreased number of motor units. The simulations indicated that the widespread dispersion of motor unit fibers facilitates complete capillary (MVU) perfusion of muscle at low levels of activity. The efficacy by which muscle fiber activity induced perfusion was reduced 7- to 14-fold under conditions that decreased the dispersion of active fibers, increased the size of motor units, or reversed the sequence of motor unit recruitment. Such conditions are similar to those that arise in neuromuscular disorders, with aging, or during electrical stimulation of muscle, respectively.

  5. United States housing, 2012

    Science.gov (United States)

    Delton Alderman

    2013-01-01

    Provides current and historical information on housing market in the United States. Information includes trends for housing permits and starts, housing completions for single and multifamily units, and sales and construction. This report will be updated annually.

  6. United Cerebral Palsy

    Science.gov (United States)

    United Cerebral Palsy (UCP) is a trusted resource for individuals with Cerebral Palsy and other disabilities and their networks. Individuals with ...

  7. Malaria Treatment (United States)

    Science.gov (United States)

    Treatment of Malaria: Guidelines for Clinicians (United States) ...

  8. Computer aided design for the nuclear industry

    International Nuclear Information System (INIS)

    Basson, Keith

    1986-01-01

    The paper concerns the new computer aided design (CAD) centre for the United Kingdom nuclear industry, and its applications. A description of the CAD system is given, including the current projects at the CAD centre. Typical applications of the 3D CAD plant based models, stress analysis studies, and the extraction of data from CAD drawings to produce associated documentation, are all described. Future developments using computer aided design systems are also considered. (U.K.)

  9. Small Computer Applications for Base Supply.

    Science.gov (United States)

    1984-03-01

    Research on small computer utilization at base-level organizations. This research effort studies whether small computers and commercial software can assist ...

  10. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  11. Learning about the Unit Cell and Crystal Lattice with Computerized Simulations and Games: A Pilot Study

    Science.gov (United States)

    Luealamai, Sutha; Panijpan, Bhinyo

    2012-01-01

    The authors have developed a computer-based learning module on the unit cell of various types of crystal. The module has two components: the virtual unit cell (VUC) part and the subsequent unit cell hunter part. The VUC is a virtual reality simulation for students to actively arrive at the unit cell from exploring, from a broad view, the crystal…

  12. Soft computing in computer and information science

    CERN Document Server

    Fray, Imed; Pejaś, Jerzy

    2015-01-01

    This book presents a carefully selected and reviewed collection of papers presented during the 19th Advanced Computer Systems conference ACS-2014. The Advanced Computer Systems conference concentrated from its beginning on methods and algorithms of artificial intelligence. Subsequent years brought new areas of interest concerning technical informatics related to soft computing and some more technological aspects of computer science, such as multimedia and computer graphics, software engineering, web systems, information security and safety, and project management. These topics are represented in the present book under the categories Artificial Intelligence, Design of Information and Multimedia Systems, Information Technology Security and Software Technologies.

  13. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  14. Computed Tomography (CT) -- Head

    Medline Plus

    When the image slices are reassembled by computer software, the result is a very detailed multidimensional view ...

  15. Computers: Instruments of Change.

    Science.gov (United States)

    Barkume, Megan

    1993-01-01

    Discusses the impact of computers in the home, the school, and the workplace. Looks at changes in computer use by occupations and by industry. Provides information on new job titles in computer occupations. (JOW)

  16. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  17. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale

  18. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    Cell walls are the source of biofuels and biomaterials; our modeling investigates their properties. Quantum Mechanical Models: NREL studies chemical and electronic properties and processes to reduce barriers. Computational Modeling: NREL uses computational modeling to increase the ...

  19. Computer Viruses: An Overview.

    Science.gov (United States)

    Marmion, Dan

    1990-01-01

    Discusses the early history and current proliferation of computer viruses that occur on Macintosh and DOS personal computers, mentions virus detection programs, and offers suggestions for how libraries can protect themselves and their users from damage by computer viruses. (LRW)

  20. On techniques of ATR lattice computation

    International Nuclear Information System (INIS)

    1997-08-01

    Lattice computation is the computation of the average nuclear constants of a unit fuel lattice, which are required for computing core nuclear characteristics such as core power distribution and reactivity characteristics. The main nuclear constants are the infinite multiplication factor, the neutron migration area, cross sections for diffusion computation, local power distribution and isotope composition. For the lattice computation code, WIMS-ATR is used, which is based on the WIMS-D code developed in the U.K. and, for the purpose of improving the accuracy of analysis, was improved by adding a heavy water scattering cross section that accounts for temperature dependence via the Honeck model. For the computation of neutron absorption by control rods, the LOIEL BLUE code is used. The extrapolation distance of neutron flux on control rod surfaces is computed using the THERMOS and DTF codes, and the lattice constants of adjoining lattices are computed using the WIMS-ATR code. For the WIMS-ATR code, the computation flow and nuclear data library are explained, and for the LOIEL BLUE code, the computation flow. The local power distribution in fuel assemblies determined by the WIMS-ATR code was verified against measured data, and the results are reported. (K.I.)

  1. Computer Virus and Trends

    OpenAIRE

    Tutut Handayani; Soenarto Usna,Drs.MMSI

    2004-01-01

    Since its first appearance in the mid-1980s, the computer virus has invited various controversies that last to this day. Along with the development of computer systems technology, computer viruses find new ways to spread themselves through a variety of existing communications media. This paper discusses several topics related to computer viruses, namely: the definition and history of computer viruses; the basics of computer viruses; the state of computer viruses at this time; and ...

  2. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  3. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Cloud computing was, and will be, a new way of providing Internet services and computing. This approach builds on many existing services, such as the Internet, grid computing and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in price and infrastructure. It is exactly the transition from the computer to a service offered to consumers as a product delivered online. This paper is meant to describe the quality of cloud computing services, analyzing the advantages and characteristics it offers. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  4. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
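
    A hedged sketch of the administration idea in this record: once a link in the first network is identified as defective, traffic between its endpoints can be routed over the second, independent network. The topology and node names below are invented for illustration:

      from collections import deque

      def bfs_path(adj, src, dst, bad_links=frozenset()):
          """Shortest path over adjacency dict adj, skipping defective links."""
          prev = {src: None}
          q = deque([src])
          while q:
              u = q.popleft()
              if u == dst:
                  path = []
                  while u is not None:
                      path.append(u)
                      u = prev[u]
                  return path[::-1]
              for v in adj[u]:
                  if v not in prev and frozenset((u, v)) not in bad_links:
                      prev[v] = u
                      q.append(v)
          return None

      # Two independent networks connecting the same four compute nodes.
      net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a simple chain
      net_b = {0: [2, 3], 1: [3], 2: [0], 3: [0, 1]}  # different wiring
      defective = {frozenset((1, 2))}                 # fault found in net A
      route = bfs_path(net_a, 0, 3, defective) or bfs_path(net_b, 0, 3)
      print("route:", route)  # falls back to net B: [0, 3]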

  5. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly; it is very hard even for professionals to keep updated. Computer people do not

  6. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  7. Computers in nuclear medicine

    International Nuclear Information System (INIS)

    Giannone, Carlos A.

    1999-01-01

    This chapter covers the capture and observation of images in computers and the hardware and software used: personal computers, networks and workstations. The use of special filters determines image quality.

  8. Unit-time scheduling problems with time dependent resources

    NARCIS (Netherlands)

    Tautenhahn, T.; Woeginger, G.

    1997-01-01

    We investigate the computational complexity of scheduling problems, where the operations consume certain amounts of renewable resources which are available in time-dependent quantities. In particular, we consider unit-time open shop problems and unit-time scheduling problems with identical parallel

  9. Advances in unconventional computing

    CERN Document Server

    2017-01-01

    Unconventional computing is a niche for interdisciplinary science, a cross-breeding of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology. The aims of this book are to uncover and exploit principles and mechanisms of information processing in, and the functional properties of, physical, chemical and living systems, in order to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices. This first volume presents theoretical foundations of the future and emergent computing paradigms and architectures. The topics covered are computability, (non-)universality and complexity of computation; physics of computation, analog and quantum computing; reversible and asynchronous devices; cellular automata and other mathematical machines; P-systems and cellular computing; infinity and spatial computation; chemical and reservoir computing. The book is an encyclopedia, the first ever complete autho...

  10. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI) was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple LaserWriter Select 360, and a notebook computer from Zenith, was also purchased. A reading room was converted into a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  11. Use of computers at nuclear power plants

    International Nuclear Information System (INIS)

    Sen'kin, V.I.; Ozhigano, Yu.V.

    1974-01-01

    Applications of information and control computers in reactor control systems in Great Britain, the Federal Republic of Germany, France, Canada, and the USA are surveyed. For the purpose of increasing the reliability of the computers, effective means were designed for emergency operation and automatic computerized control, and highly reliable micromodule modifications were developed. Numerical data units were handled along with the development of methods and diagrams for converting analog values to numerical values, in accordance with modern requirements. Some data are presented on computer reliability in operating nuclear power plants, both proposed and under construction. It is concluded that in foreign nuclear power stations informational and computational computers are finding increasingly wide distribution. Rapid operation, the possibility of controlling a large number of parameters, and increasing reliability are speeding up the process of introducing computers in atomic energy and broadening their functions. (V.P.)

  12. GPU Computing For Particle Tracking

    International Nuclear Information System (INIS)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

    2011-01-01

    This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing the GPU are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MP) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ (2) is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.

  13. Quantum computing with defects.

    Science.gov (United States)

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

    Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness--its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.

  14. Reliability of voxel gray values in cone beam computed tomography for preoperative implant planning assessment

    NARCIS (Netherlands)

    Parsa, A.; Ibrahim, N.; Hassan, B.; Motroni, A.; van der Stelt, P.; Wismeijer, D.

    2012-01-01

    Purpose: To assess the reliability of cone beam computed tomography (CBCT) voxel gray value measurements using Hounsfield units (HU) derived from multislice computed tomography (MSCT) as a clinical reference (gold standard). Materials and Methods: Ten partially edentulous human mandibular cadavers

  15. Computability and unsolvability

    CERN Document Server

    Davis, Martin

    1985-01-01

    ""A clearly written, well-presented survey of an intriguing subject."" - Scientific American. Classic text considers general theory of computability, computable functions, operations on computable functions, Turing machines self-applied, unsolvable decision problems, applications of general theory, mathematical logic, Kleene hierarchy, computable functionals, classification of unsolvable decision problems and more.

  16. Mathematics for computer graphics

    CERN Document Server

    Vince, John

    2006-01-01

    Helps you understand the mathematical ideas used in computer animation, virtual reality, CAD, and other areas of computer graphics. This work also helps you to rediscover the mathematical techniques required to solve problems and design computer programs for computer graphic applications

  17. Computations and interaction

    NARCIS (Netherlands)

    Baeten, J.C.M.; Luttik, S.P.; Tilburg, van P.J.A.; Natarajan, R.; Ojo, A.

    2011-01-01

    We enhance the notion of a computation of the classical theory of computing with the notion of interaction. In this way, we enhance a Turing machine as a model of computation to a Reactive Turing Machine that is an abstract model of a computer as it is used nowadays, always interacting with the user

  18. Symbiotic Cognitive Computing

    OpenAIRE

    Farrell, Robert G.; Lenchner, Jonathan; Kephjart, Jeffrey O.; Webb, Alan M.; Muller, MIchael J.; Erikson, Thomas D.; Melville, David O.; Bellamy, Rachel K.E.; Gruen, Daniel M.; Connell, Jonathan H.; Soroker, Danny; Aaron, Andy; Trewin, Shari M.; Ashoori, Maryam; Ellis, Jason B.

    2016-01-01

    IBM Research is engaged in a research program in symbiotic cognitive computing to investigate how to embed cognitive computing in physical spaces. This article proposes 5 key principles of symbiotic cognitive computing.  We describe how these principles are applied in a particular symbiotic cognitive computing environment and in an illustrative application.  

  19. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well known class of problems which almost certainly have no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored
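
    As an aside grounded in standard reliability theory (not the paper itself), the decomposition techniques alluded to exploit the fact that independent series and parallel subsystems collapse to single equivalent components, shrinking the effective system before any expensive general computation:

      def series(*rs):
          """Reliability of independent components in series: all must work."""
          out = 1.0
          for r in rs:
              out *= r
          return out

      def parallel(*rs):
          """Reliability of redundant independent components: one suffices."""
          fail = 1.0
          for r in rs:
              fail *= 1.0 - r
          return 1.0 - fail

      # Two redundant pumps feeding one valve in series (invented numbers).
      pumps = parallel(0.90, 0.90)  # 0.99
      system = series(pumps, 0.95)  # 0.9405
      print(round(system, 4))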

  20. Roadmap to greener computing

    CERN Document Server

    Nguemaleu, Raoul-Abelin Choumin

    2014-01-01

    A concise and accessible introduction to green computing and green IT, this book addresses how computer science and the computer infrastructure affect the environment and presents the main challenges in making computing more environmentally friendly. The authors review the methodologies, designs, frameworks, and software development tools that can be used in computer science to reduce energy consumption and still compute efficiently. They also focus on Computer Aided Design (CAD) and describe what design engineers and CAD software applications can do to support new streamlined business directi

  1. Brief: Managing computing technology

    International Nuclear Information System (INIS)

    Startzman, R.A.

    1994-01-01

    While computing is applied widely in the production segment of the petroleum industry, its effective application is the primary goal of computing management. Computing technology has changed significantly since the 1950's, when computers first began to influence petroleum technology. The ability to accomplish traditional tasks faster and more economically probably is the most important effect that computing has had on the industry. While speed and lower cost are important, are they enough? Can computing change the basic functions of the industry? When new computing technology is introduced improperly, it can clash with traditional petroleum technology. This paper examines the role of management in merging these technologies

  2. Computer mathematics for programmers

    CERN Document Server

    Abney, Darrell H; Sibrel, Donald W

    1985-01-01

    Computer Mathematics for Programmers presents the Mathematics that is essential to the computer programmer.The book is comprised of 10 chapters. The first chapter introduces several computer number systems. Chapter 2 shows how to perform arithmetic operations using the number systems introduced in Chapter 1. The third chapter covers the way numbers are stored in computers, how the computer performs arithmetic on real numbers and integers, and how round-off errors are generated in computer programs. Chapter 4 details the use of algorithms and flowcharting as problem-solving tools for computer p
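
    As a quick modern illustration of the round-off behavior the third chapter describes (in Python rather than the book's own examples), binary floating point cannot represent most decimal fractions exactly, so small errors accumulate:

      import math

      # 0.1 has no exact binary representation, so repeated addition drifts.
      total = 0.0
      for _ in range(10):
          total += 0.1
      print(total)                     # 0.9999999999999999, not 1.0
      print(total == 1.0)              # False
      print(math.isclose(total, 1.0))  # True: compare with a tolerance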

  3. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  4. Emergent computation a festschrift for Selim G. Akl

    CERN Document Server

    2017-01-01

    This book is dedicated to Professor Selim G. Akl to honour his groundbreaking research achievements in computer science over four decades. The book is an intellectually stimulating excursion into emergent computing paradigms, architectures and implementations. World top experts in computer science, engineering and mathematics overview exciting and intriguing topics of musical rhythms generation algorithms, analyse the computational power of random walks, dispelling a myth of computational universality, computability and complexity at the microscopic level of synchronous computation, descriptional complexity of error detection, quantum cryptography, context-free parallel communicating grammar systems, fault tolerance of hypercubes, finite automata theory of bulk-synchronous parallel computing, dealing with silent data corruptions in high-performance computing, parallel sorting on graphics processing units, mining for functional dependencies in relational databases, cellular automata optimisation of wireless se...

  5. Data Acquisition Controllers and Computers that can Endure, Operate and Survive Cryogenic Temperatures, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Current and future NASA exploration flight missions require Avionics systems, Computers, Controllers and Data processing units that are capable of enduring extreme...

  6. The digital computer

    CERN Document Server

    Parton, K C

    2014-01-01

    The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on a transformer design synthesis program, a machine design analysis program, the solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer.
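
    One of the listed worked examples, the solution of standard quadratic equations, translates naturally into a few lines of modern code. Here is a minimal Python sketch (the book itself predates Python, so this is only an analogue of its autocode examples, not material from the book):

```python
import cmath

def solve_quadratic(a: float, b: float, c: float):
    """Roots of a*x**2 + b*x + c = 0; complex when the discriminant is negative."""
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -3, 2))  # ((2+0j), (1+0j))
```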

  7. Toward Cloud Computing Evolution

    OpenAIRE

    Susanto, Heru; Almunawar, Mohammad Nabil; Kang, Chen Chin

    2012-01-01

    Information Technology (IT) has shaped the success of organizations, giving them a solid foundation that increases both their level of efficiency and their productivity. The computing industry is witnessing a paradigm shift in the way computing is performed worldwide. There is a growing awareness among consumers and enterprises of the possibility of accessing their IT resources extensively through a "utility" model known as "cloud computing." Cloud computing was initially rooted in distributed grid-based computing. ...

  8. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include a model for analyzing generalized inter-processor communication, a pipelined architecture for search tree maintenance, and a specialized computer organization for raster...

  9. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet, and to use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to run advanced hydrological models and simulations. Because the system is web based, users can start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in pieces of small spatial and computational size. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
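
    The queue-management idea mentioned above, a relational database handing out work units to volunteer nodes, can be sketched compactly. The following Python/sqlite3 sketch is illustrative only; the table and function names (tasks, claim_task, finish_task) are hypothetical and not taken from the framework described in the abstract:

```python
import sqlite3

# Illustrative relational task queue for volunteer computing nodes.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tasks (
    id      INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    status  TEXT NOT NULL DEFAULT 'pending',  -- pending / claimed / done
    result  TEXT)""")

def add_task(payload: str) -> None:
    conn.execute("INSERT INTO tasks (payload) VALUES (?)", (payload,))
    conn.commit()

def claim_task():
    """Hand one pending task to a volunteer node (single-process sketch;
    a real deployment would need an atomic claim across connections)."""
    row = conn.execute(
        "SELECT id, payload FROM tasks WHERE status = 'pending' LIMIT 1"
    ).fetchone()
    if row is not None:
        conn.execute("UPDATE tasks SET status = 'claimed' WHERE id = ?",
                     (row[0],))
        conn.commit()
    return row  # None when the queue is empty

def finish_task(task_id: int, result: str) -> None:
    conn.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
                 (result, task_id))
    conn.commit()

add_task('{"basin": 7, "timestep": 0}')  # hypothetical model work unit
print(claim_task())                      # (1, '{"basin": 7, "timestep": 0}')
```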

  10. A multiplicity logic unit

    International Nuclear Information System (INIS)

    Bialkowski, J.; Moszynski, M.; Zagorski, A.

    1981-01-01

    The logic diagram, principle of operation, and some details of the design of the multiplicity logic unit are presented. This unit was specially designed to fulfil the requirements of a multidetector arrangement for gamma-ray multiplicity measurements. The unit is equipped with 16 inputs controlled by a common coincidence gate. It delivers a linear output pulse whose height is proportional to the multiplicity of coincidences, and logic pulses corresponding to 0, 1, ... up to ≥ 5-fold coincidences. These last outputs are used to steer the routing unit working with the multichannel analyser. (orig.)
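
    As a software analogue of what the unit does in hardware, the Python sketch below counts how many detector channels fire inside the common coincidence gate; the gate width and hit times are invented for illustration and are not taken from the paper:

```python
# Software analogue of the 16-input multiplicity logic: count the
# channels whose hit time falls inside the common coincidence gate.

GATE_WIDTH_NS = 50  # hypothetical gate width in nanoseconds

def multiplicity(hit_times_ns, gate_open_ns):
    """Number of channels firing within [gate_open, gate_open + width)."""
    return sum(1 for t in hit_times_ns
               if gate_open_ns <= t < gate_open_ns + GATE_WIDTH_NS)

hits = [12, 15, 300, 41, 47]   # one hit time per firing channel
print(multiplicity(hits, 10))  # 4 -> steers the '4-fold' routing output
```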

  11. ENERGY STAR Unit Reports

    Data.gov (United States)

    Department of Housing and Urban Development — These quarterly Federal Fiscal Year performance reports track the ENERGY STAR qualified HOME units that Participating Jurisdictions record in HUD's Integrated...

  12. Antibiotic Policies in the Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    Nese Saltoglu

    2003-08-01

    The antimicrobial management of patients in the Intensive Care Unit is complex, and antimicrobial resistance is an increasing problem. Effective strategies for the prevention of antimicrobial resistance in ICUs have focused on limiting the unnecessary use of antibiotics and increasing compliance with infection control practices. Antibiotic policies have been implemented to modify antibiotic use, including national or regional formulary manipulations, antibiotic restriction forms, care plans, antibiotic cycling and computer-assisted antimicrobial therapy. Moreover, infectious diseases consultation is a simple way to limit antibiotic use in ICUs. A multidisciplinary approach is suggested to improve rational antimicrobial use. [Archives Medical Review Journal 2003; 12(4): 299-309]

  13. Three-Dimensional Computer Visualization of Forensic Pathology Data

    OpenAIRE

    March, Jack; Schofield, Damian; Evison, Martin; Woodford, Noel

    2004-01-01

    Despite a decade of use in US courtrooms, it is only recently that forensic computer animations have become an increasingly important form of communication in legal spheres within the United Kingdom. Research at the University of Nottingham has been influential in the critical investigation of forensic computer graphics reconstruction methodologies and techniques, and in raising the profile of this novel form of data visualization within the United Kingdom. The case study presented demonstrates...

  14. Synthetic Computation: Chaos Computing, Logical Stochastic Resonance, and Adaptive Computing

    Science.gov (United States)

    Kia, Behnam; Murali, K.; Jahed Motlagh, Mohammad-Reza; Sinha, Sudeshna; Ditto, William L.

    Nonlinear and chaotic systems can exhibit numerous behaviors and patterns, and one can select different patterns from this rich library. In this paper we focus on synthetic computing, a field that engineers and synthesizes nonlinear systems to obtain computation. We explain the importance of nonlinearity and describe how nonlinear systems can be engineered to perform computation. More specifically, we provide an overview of chaos computing, a field that manually programs chaotic systems to build different types of digital functions. We also briefly describe logical stochastic resonance (LSR), and then extend the approach of LSR to realize combinational digital logic systems via suitable concatenation of existing logical stochastic resonance blocks. Finally, we demonstrate how a chaotic system can be engineered and mated with different machine learning techniques, such as artificial neural networks, random searching, and genetic algorithms, to design autonomous systems that can adapt and respond to environmental conditions.
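
    The chaos-computing idea, that a single chaotic element can be programmed to act as different logic gates, can be illustrated with the logistic map. The Python sketch below follows the threshold scheme of Sinha and Ditto in spirit only; the initial states are chosen here so the truth tables come out right, and the code is a toy demonstration rather than anything taken from this paper:

```python
# Toy chaos computing: one logistic map realizes different logic gates
# purely by changing its initial state. DELTA encodes a logical 1;
# the output is 1 when a single iterate exceeds the threshold.

DELTA, THRESHOLD = 0.25, 0.75

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

def chaotic_gate(x0: float, a: int, b: int) -> int:
    x = x0 + a * DELTA + b * DELTA  # superpose the two logical inputs
    return 1 if logistic(x) > THRESHOLD else 0

GATES = {"AND": 0.0, "OR": 0.125, "XOR": 0.25, "NOR": 0.5}
for name, x0 in GATES.items():
    print(name, [chaotic_gate(x0, a, b)
                 for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# AND [0, 0, 0, 1] / OR [0, 1, 1, 1] / XOR [0, 1, 1, 0] / NOR [1, 0, 0, 0]
```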

  15. 31 CFR 515.321 - United States; continental United States.

    Science.gov (United States)

    2010-07-01

    31 CFR Part 515 (Money and Finance: Treasury), General Definitions, § 515.321: The term United States means the United States and all areas under the jurisdiction or authority thereof, including the Trust Territory of...

  16. 31 CFR 500.321 - United States; continental United States.

    Science.gov (United States)

    2010-07-01

    31 CFR Part 500 (Money and Finance: Treasury), General Definitions, § 500.321: The term United States means the United States and all areas under the jurisdiction or authority thereof, including U.S. trust territories...

  17. 31 CFR 535.321 - United States; continental United States.

    Science.gov (United States)

    2010-07-01

    31 CFR Part 535 (Money and Finance: Treasury), General Definitions, § 535.321: The term United States means the United States and all areas under the jurisdiction or authority thereof, including the Trust Territory of...

  18. Future Computer Requirements for Computational Aerodynamics

    Science.gov (United States)

    1978-01-01

    Recent advances in computational aerodynamics are discussed, as well as the motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than is presently possible with general-purpose computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  19. Computers and Computation. Readings from Scientific American.

    Science.gov (United States)

    Fenichel, Robert R.; Weizenbaum, Joseph

    A collection of articles from "Scientific American" magazine has been put together at this time because the current period in computer science is one of consolidation rather than innovation. A few years ago, computer science was moving so swiftly that even the professional journals were more archival than informative; but today it is…

  20. Know Your Personal Computer Introduction to Computers

    Indian Academy of Sciences (India)

    Know Your Personal Computer – Introduction to Computers. Siddhartha Kumar Ghoshal. Series Article, Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 48-55.