WorldWideScience

Sample records for platform-independent multi-threaded general-purpose

  1. Large Scale Document Inversion using a Multi-threaded Computing System

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. Because the GPU consists of multiple cores, it can be used as a massively parallel coprocessor, and it is an affordable, attractive, and user-programmable commodity. Enormous amounts of information now flow into the digital domain around the world: digital libraries, social networking services, e-commerce product data, reviews, and similar collections are produced or gathered continuously and grow dramatically in size. Although the inverted index is a useful data structure for full-text search and document retrieval, a large number of documents requires a tremendous amount of time to index. The performance of document inversion can be improved with a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, exploiting the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system is 2-3 times faster than a sequential system on two test datasets drawn from PubMed abstracts and e-commerce product reviews. CCS Concepts: • Information systems ➝ Information retrieval • Computing methodologies ➝ Massively parallel and high-performance simulations.
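
    As a concrete illustration of the hash-based, data-parallel inversion idea summarized above (and not the authors' CUDA implementation), the following sketch builds per-thread postings lists with standard C++ threads and merges them afterwards; the corpus layout and function names are illustrative assumptions only.

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <sstream>
        #include <string>
        #include <thread>
        #include <unordered_map>
        #include <vector>

        // term -> list of document ids that contain it
        using Postings = std::unordered_map<std::string, std::vector<int>>;

        // Each worker inverts its own slice of the corpus into a private hash table.
        static void invert_slice(const std::vector<std::string>& docs,
                                 std::size_t begin, std::size_t end, Postings& local) {
            for (std::size_t id = begin; id < end; ++id) {
                std::istringstream words(docs[id]);
                std::string term;
                while (words >> term)
                    local[term].push_back(static_cast<int>(id));
            }
        }

        Postings build_inverted_index(const std::vector<std::string>& docs, unsigned n_threads) {
            std::vector<Postings> partial(n_threads);
            std::vector<std::thread> workers;
            const std::size_t chunk = (docs.size() + n_threads - 1) / n_threads;
            for (unsigned t = 0; t < n_threads; ++t) {
                const std::size_t b = std::min(docs.size(), std::size_t(t) * chunk);
                const std::size_t e = std::min(docs.size(), b + chunk);
                workers.emplace_back(invert_slice, std::cref(docs), b, e, std::ref(partial[t]));
            }
            for (auto& w : workers) w.join();
            Postings index;                       // sequential merge of the per-thread postings
            for (auto& p : partial)
                for (auto& [term, ids] : p) {
                    auto& dst = index[term];
                    dst.insert(dst.end(), ids.begin(), ids.end());
                }
            return index;
        }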

  2. Large Scale Document Inversion using a Multi-threaded Computing System.

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. Because the GPU consists of multiple cores, it can be used as a massively parallel coprocessor, and it is an affordable, attractive, and user-programmable commodity. Enormous amounts of information now flow into the digital domain around the world: digital libraries, social networking services, e-commerce product data, reviews, and similar collections are produced or gathered continuously and grow dramatically in size. Although the inverted index is a useful data structure for full-text search and document retrieval, a large number of documents requires a tremendous amount of time to index. The performance of document inversion can be improved with a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, exploiting the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system is 2-3 times faster than a sequential system on two test datasets drawn from PubMed abstracts and e-commerce product reviews. • Information systems ➝ Information retrieval • Computing methodologies ➝ Massively parallel and high-performance simulations.

  3. Effective verification of confidentiality for multi-threaded programs

    NARCIS (Netherlands)

    Ngo, Minh Tri; Stoelinga, Mariëlle Ida Antoinette; Huisman, Marieke

    2014-01-01

    This paper studies how confidentiality properties of multi-threaded programs can be verified efficiently by a combination of newly developed and existing model checking algorithms. In particular, we study the verification of scheduler-specific observational determinism (SSOD), a property that

  4. A multi-threading approach to secure VERIFYPIN

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-10-01

    Full Text Available alongside a pin-acceptance program in a multi-threaded environment. These threads are inserted randomly on each execution of the program to create confusion for the attacker. Moreover, the research proposes an improved version of the pin...

  5. Creating and improving multi-threaded Geant4

    CERN Document Server

    Dong, Xin; Apostolakis, John; Jarp, Sverre; Nowak, Andrzej; Asai, Makoto; Brandt, Daniel

    2012-01-01

    We document the methods used to create the multi-threaded prototype Geant4MT from a sequential version of Geant4. We cover the Source-to-Source transformations applied, and discuss the process of verifying the correctness of the Geant4MT toolkit and applications based on it. Tools to ensure that the results of a transformed multi-threaded application are exactly equal to the original sequential version are under development. Stand-alone or simple applications can be adapted within 1-2 working days. Geant4MT is shown to scale linearly on an 80-core computer. In the special case of a single worker thread on one core, 30% overhead has been observed. We explain the reasons for this and the improvements introduced to reduce this overhead.

  6. Multi-thread Parallel Speech Recognition for Mobile Applications

    Directory of Open Access Journals (Sweden)

    LOJKA Martin

    2014-05-01

    Full Text Available In this paper, a server-based solution for a multi-thread large vocabulary automatic speech recognition engine is described, along with practical Android OS and HTML5 application examples. The basic idea was to make speech recognition available for a full variety of applications for computers and especially for mobile devices. The speech recognition engine should be independent of commercial products and services (where the dictionary could not be modified). Use of third-party services can also be a security and privacy problem in specific applications, where unsecured audio data must not be sent to uncontrolled environments (voice data transferred to servers around the globe). Using our experience with speech recognition applications, we have constructed a multi-thread, server-based speech recognition solution with a simple application programming interface (API) to a speech recognition engine that can be modified to the specific needs of a particular application.
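
    One common way to structure such a server is a pool of recognizer threads pulling requests from a shared queue; the sketch below illustrates only that generic pattern, with recognize() as a hypothetical stand-in for the engine call described in the paper.

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>
        #include <utility>
        #include <vector>

        struct Request { int client_id; std::vector<short> audio; };

        std::queue<Request> pending;          // requests accepted from the network layer
        std::mutex mtx;
        std::condition_variable cv;
        bool shutting_down = false;

        // Placeholder for the actual recognition engine call.
        std::string recognize(const std::vector<short>&) { return "<transcript>"; }

        // Each worker owns one decoding context and serves queued requests.
        void worker() {
            for (;;) {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [] { return shutting_down || !pending.empty(); });
                if (pending.empty()) return;                  // shutdown and nothing left to do
                Request req = std::move(pending.front());
                pending.pop();
                lock.unlock();
                const std::string text = recognize(req.audio);  // decode outside the lock
                (void)text;   // placeholder: the result would be returned to client req.client_id via the API
            }
        }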

  7. A PREDICTABLE MULTI-THREADED MAIN-MEMORY STORAGE MANAGER

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper introduces the design and implementation of a predictable multi-threaded main-memory storage manager (CS20), and emphasizes the database service mediator (DSM), an operation prediction model using exponential averaging. The memory manager, indexing, and lock manager in CS20 are also presented briefly. CS20 has been embedded in a mobile telecommunication service system. In practice, DSM effectively controls system load and hence improves the real-time characteristics of data access.
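
    The abstract does not give the exact DSM formula, but a generic exponential-averaging predictor updates its estimate of the next operation time as

        \hat{t}_{\mathrm{new}} = \alpha\, t_{\mathrm{observed}} + (1 - \alpha)\, \hat{t}_{\mathrm{old}}, \qquad 0 < \alpha \le 1

    For example, with \alpha = 0.5, a previous estimate of 10 ms and an observed operation time of 14 ms give a new estimate of 12 ms; larger \alpha weights recent observations more heavily. The value of \alpha used by CS20 is not stated in the abstract.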

  8. Multi-threaded software framework development for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; Baines, John; Bold, Tomasz; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant co...

  9. Multi-threaded Software Framework Development for the ATLAS Experiment

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration; Baines, John; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant c...

  10. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    Science.gov (United States)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  11. A Multi-threaded Version of Field II

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2014-01-01

    A multi-threaded version of Field II has been developed, which can automatically use the multi-core capabilities of modern CPUs. The memory allocation routines were rewritten to minimize the number of dynamic allocations and to make pre-allocations possible for each thread. This ensures... that the simulation job can be automatically partitioned and the interdependence between threads minimized. The new code has been compared to Field II version 3.22, October 27, 2013 (latest free-ware version). A 64 element 5 MHz focused array transducer was simulated. One million point scatterers randomly distributed... in a plane of 20 x 50 mm (width x depth) with random Gaussian amplitudes were simulated using the command calc scat. Dual Intel Xeon CPU E5-2630 2.60 GHz CPUs were used under Ubuntu Linux 10.02 and Matlab version 2013b. Each CPU holds 6 cores with hyper-threading, corresponding to a total of 24 hyper...
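
    The parallelization strategy described (pre-allocated per-thread buffers, independent scatterer slices, coherent summation of partial responses) can be sketched as below; this is not the Field II code, and simulate_slice() is only a placeholder for the actual field calculation.

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <thread>
        #include <vector>

        // Placeholder kernel: the real code would add each scatterer's echo into `out`.
        void simulate_slice(std::size_t begin, std::size_t end, std::vector<double>& out) {
            (void)begin; (void)end; (void)out;
        }

        std::vector<double> run(std::size_t n_scatterers, std::size_t n_samples, unsigned n_threads) {
            // One pre-allocated response buffer per thread, so workers never share writes.
            std::vector<std::vector<double>> partial(n_threads, std::vector<double>(n_samples, 0.0));
            std::vector<std::thread> pool;
            const std::size_t chunk = (n_scatterers + n_threads - 1) / n_threads;
            for (unsigned t = 0; t < n_threads; ++t)
                pool.emplace_back(simulate_slice, std::size_t(t) * chunk,
                                  std::min(n_scatterers, std::size_t(t + 1) * chunk),
                                  std::ref(partial[t]));
            for (auto& th : pool) th.join();
            std::vector<double> rf(n_samples, 0.0);           // coherent sum of per-thread responses
            for (const auto& p : partial)
                for (std::size_t i = 0; i < n_samples; ++i) rf[i] += p[i];
            return rf;
        }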

  12. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00014247; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  13. Multi-threaded ATLAS Simulation on Intel Knights Landing Processors

    CERN Document Server

    Farrell, Steven; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles

    2016-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), will be delivered to its users in two phases with the first phase online now and the second phase expected in mid-2016. Cori Phase 2 will be based on the KNL architecture and will contain over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a great use-case for the KNL architecture and supercomputers like Cori. Simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this presentation we will give an overview of the ATLAS simulation application with details on its multi-thr...

  14. Matlab enhanced multi-threaded tomography optimization sequence (MEMTOS)

    International Nuclear Information System (INIS)

    Lum, Edward S.; Pope, Chad L.

    2016-01-01

    Highlights: • Monte Carlo simulation of spent nuclear fuel assembly neutron computed tomography. • Optimized parallel calculations conducted from within the MATLAB environment. • Projection difference technique used to identify anomalies in spent nuclear fuel assemblies. - Abstract: One challenge associated with spent nuclear fuel assemblies is the lack of non-destructive analysis techniques to determine if fuel pins have been removed or replaced or if there are significant defects associated with fuel pins deep within a fuel assembly. Neutron computed tomography is a promising technique for addressing these qualitative issues. Monte Carlo simulation of spent nuclear fuel neutron computed tomography allows inexpensive process investigation and optimization. The main purpose of this work is to provide a fully automated advanced simulation framework for the analysis of spent nuclear fuel inspection using neutron computed tomography. The simulation framework, called Matlab Enhanced Multi-Threaded Tomography Optimization Sequence (MEMTOS), not only automates the simulation process, but also generates superior tomography image results. MEMTOS is written in the MATLAB scripting language and addresses file management, parallel Monte Carlo execution, results extraction, and tomography image generation. This paper describes the mathematical basis for neutron computed tomography, the Monte Carlo technique used to simulate neutron computed tomography, and the overall tomography simulation optimization algorithm. Sequence results presented include overall simulation speed enhancement, and tomography and image results obtained for Experimental Breeder Reactor II spent fuel assemblies and light water reactor fuel assemblies. Optimization using a projection difference technique is also described.

  15. Designing platform independent mobile apps and services

    CERN Document Server

    Heckman, Rocky

    2016-01-01

    This book explains how to help create an innovative and future-proof architecture for mobile apps by introducing practical approaches to increase the value and flexibility of their service layers and reduce their delivery time. Designing Platform Independent Mobile Apps and Services begins by describing the mobile computing landscape and previous attempts at cross platform development. Platform independent mobile technologies and development strategies are described in chapters two and three. Communication protocols, details of a recommended five layer architecture, service layers, and the data abstraction layer are also introduced in these chapters. Cross platform languages and multi-client development tools for the User Interface (UI) layer, as well as message processing patterns and message routing of the Service Interface (SI) layer, are explained in chapters four and five. Ways to design the service layer for mobile computing, using Command Query Responsibility Segregation (CQRS) and the Data Abstraction La...

  16. Performance improvement of developed program by using multi-thread technique

    Directory of Open Access Journals (Sweden)

    Surasak Jabal

    2015-03-01

    Full Text Available This research presented how to use a multi-thread programming technique to improve the performance of a program written with Windows Presentation Foundation (WPF). The Computer Assisted Instruction (CAI) software named GAME24 was selected as a case study. This study was composed of two main parts. The first part was about the design and modification of the program structure based on the Object Oriented Programming (OOP) approach. The second part was about coding the program using the multi-thread technique, in which the number of threads was based on the calculated Catalan number. The result showed that the multi-thread programming technique increased the performance of the program by 44%-88% compared to the single-thread technique. In addition, it was found that the number of cores in the CPU also increases the performance of the multi-threaded program proportionally.
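
    For reference, the Catalan numbers the authors refer to count expression shapes: n+1 operands can be combined by binary operators in C_n distinct parenthesizations,

        C_n = \frac{1}{n+1}\binom{2n}{n}, \qquad C_1 = 1,\; C_2 = 2,\; C_3 = 5,\; C_4 = 14

    so the four card values of GAME24 admit C_3 = 5 parenthesizations per operand/operator ordering. How exactly the paper maps this count onto its thread pool size is not stated in the abstract.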

  17. General Purpose (office) Network reorganisation

    CERN Multimedia

    IT Department

    2016-01-01

    On Saturday 27 August, the IT Department’s Communication Systems group will perform a major reorganisation of CERN’s General Purpose Network.   This reorganisation will cause network interruptions on Saturday 27 August (and possibly Sunday 28 August) and will be followed by a change to the IP addresses of connected systems that will come into effect on Monday 3 October. For further details and information about the actions you may need to take, please see: https://information-technology.web.cern.ch/news/general-purpose-office-network-reorganisation.

  18. Multi-Threaded DNA Tag/Anti-Tag Library Generator for Multi-Core Platforms

    Science.gov (United States)

    2009-05-01

    ...base pair) Watson-Crick strand pairs that bind perfectly within pairs, but poorly across pairs. A variety of DNA strand hybridization metrics... [Report metadata: AFRL-RI-RS-TR-2009-131, Final Technical Report, May 2009; report type: Final; dates covered: Jun 08 – Feb 09; title: Multi-Threaded DNA Tag/Anti-Tag Library Generator for Multi-Core Platforms.]

  19. General purpose programmable accelerator board

    Science.gov (United States)

    Robertson, Perry J.; Witzke, Edward L.

    2001-01-01

    A general purpose accelerator board and acceleration method comprising use of: one or more programmable logic devices; a plurality of memory blocks; bus interface for communicating data between the memory blocks and devices external to the board; and dynamic programming capabilities for providing logic to the programmable logic device to be executed on data in the memory blocks.

  20. Scheduler-specific Confidentiality for Multi-Threaded Programs and Its Logic-Based Verification

    NARCIS (Netherlands)

    Huisman, Marieke; Ngo, Minh Tri

    2011-01-01

    Observational determinism has been proposed in the literature as a way to ensure confidentiality for multi-threaded programs. Intuitively, a program is observationally deterministic if the behavior of the public variables is deterministic, i.e., independent of the private variables and the

  1. Scheduler-Specific Confidentiality for Multi-Threaded Programs and Its Logic-Based Verification

    NARCIS (Netherlands)

    Huisman, Marieke; Ngo, Minh Tri; Beckert, B.; Damiani, F.; Gurov, D.

    2012-01-01

    Observational determinism has been proposed in the literature as a way to ensure confidentiality for multi-threaded programs. Intuitively, a program is observationally deterministic if the behavior of the public variables is deterministic, i.e., independent of the private variables and the scheduling

  2. General Purpose Heat Source Simulator

    Science.gov (United States)

    Emrich, Bill

    2008-01-01

    The General Purpose Heat Source (GPHS) simulator project is designed to replicate, through the use of electrical heaters, the form, fit, and function of actual GPHS modules, which generate heat through the radioactive decay of Pu238. The use of electrically heated modules rather than modules containing Pu238 facilitates the testing of spacecraft subsystems and systems without sacrificing the quantity and quality of the test data gathered. Previous GPHS activities were centered on developing robust heater designs with sizes and weights that closely matched those of actual Pu238-fueled GPHS blocks. These efforts were successful, although their maximum temperature capabilities were limited to around 850 °C. New designs are being pursued which also replicate the sizes and weights of actual Pu238-fueled GPHS blocks but will allow operation up to 1100 °C.

  3. UTLEON3 Exploring Fine-Grain Multi-Threading in FPGAs

    CERN Document Server

    Daněk, Martin; Kohout, Lukáš; Sýkora, Jaroslav; Bartosinski, Roman

    2013-01-01

    This book describes a specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map in an efficient way the data-flow scheme on a classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs.  Describes and documents a working SPARC v8, with fine-grain multithreading and fast context switch; Provides VHDL sources for the described processor; Describes a latency-tolerant framework for coupling hardware accelerators to microthreaded processor pipelines; Includes programming by example in the micro-threaded assembly language.    

  4. Multi-Threaded Dense Linear Algebra Libraries for Low-Power Asymmetric Multicore Processors

    OpenAIRE

    Catalán, Sandra; Herrero, José R.; Igual, Francisco D.; Rodríguez-Sánchez, Rafael; Quintana-Ortí, Enrique S.

    2015-01-01

    Dense linear algebra libraries, such as BLAS and LAPACK, provide a relevant collection of numerical tools for many scientific and engineering applications. While there exist high performance implementations of the BLAS (and LAPACK) functionality for many current multi-threaded architectures, the adaptation of these libraries for asymmetric multicore processors (AMPs) is still pending. In this paper we address this challenge by developing an asymmetry-aware implementation of the BLAS, based on the...

  5. Platform-independent method for computer aided schematic drawings

    Science.gov (United States)

    Vell, Jeffrey L [Slingerlands, NY]; Siganporia, Darius M [Clifton Park, NY]; Levy, Arthur J [Fort Lauderdale, FL]

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  6. A platform independent communication library for distributed computing

    NARCIS (Netherlands)

    Groen, D.; Rieder, S.; Grosso, P.; de Laat, C.; Portegies Zwart, S.

    2010-01-01

    We present MPWide, a platform independent communication library for performing message passing between supercomputers. Our library couples several local MPI applications through a long distance network using, for example, optical links. The implementation is deliberately kept light-weight, platform

  7. Geant4-MT: bringing multi-threading into Geant4 production

    International Nuclear Information System (INIS)

    Ahn, S.; Apostolakis, J.; Cosmo, G.; Nowak, A.; Asai, M.; Brandt, D.; Dotti, A.; Coopermann, G.; Dong, X.; Jun, Soon Yung

    2013-01-01

    Geant4-MT is the multi-threaded version of the Geant4 particle transport code. The key goals for the design of Geant4-MT have been a) the need to reduce the memory footprint of the multi-threaded application compared to the use of separate jobs and processes; b) to create an easy migration of the existing applications; and c) to use many threads or cores efficiently, by scaling up to tens and potentially hundreds of workers. The first public release of a Geant4-MT prototype was made in 2011. We report on the revision of Geant4-MT for inclusion in the production-level release scheduled for the end of 2013. This has involved significant re-engineering of the prototype in order to incorporate it into the main Geant4 development line, and the porting of the Geant4-MT threading code to additional platforms. In order to make the porting of applications as simple as possible, refinements addressed the needs of standalone applications. Further adaptations were created to improve the fit with the frameworks of High Energy Physics experiments. We report on performance measurements on Intel Xeon™ and AMD Opteron™ processors, and the first trials of Geant4-MT on the Intel Many Integrated Core (MIC) architecture, in the form of the Xeon Phi™ co-processor. These indicate near-linear scaling through about 200 threads on 60 cores, when holding fixed the number of events per thread. (authors)
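
    The memory-footprint goal in (a) comes from sharing large read-only structures (geometry, physics tables) across workers while each thread keeps only its per-event state. A minimal sketch of that general pattern with standard C++ threads is shown below; it is not the Geant4-MT code, whose sharing and event dispatch are far more fine-grained.

        #include <atomic>
        #include <thread>
        #include <vector>

        struct Geometry { /* large, immutable detector description shared by all workers */ };

        // Placeholder for tracking one event through the shared geometry.
        void process_event(const Geometry&, long /*event_id*/) {}

        void run_workers(const Geometry& geom, long n_events, unsigned n_threads) {
            std::atomic<long> next{0};                        // dynamic event dispatch counter
            std::vector<std::thread> pool;
            for (unsigned t = 0; t < n_threads; ++t)
                pool.emplace_back([&geom, &next, n_events] {
                    for (long id; (id = next.fetch_add(1)) < n_events; )
                        process_event(geom, id);              // geometry shared, event state thread-local
                });
            for (auto& th : pool) th.join();
        }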

  8. 7 CFR 254.1 - General purpose.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose. 254.1 Section 254.1 Agriculture... GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION ADMINISTRATION OF THE FOOD DISTRIBUTION PROGRAM FOR INDIAN HOUSEHOLDS IN OKLAHOMA § 254.1 General purpose. This part sets the requirement under which...

  9. 22 CFR 309.1 - General purpose.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true General purpose. 309.1 Section 309.1 Foreign Relations PEACE CORPS DEBT COLLECTION General Provisions § 309.1 General purpose. This part prescribes the procedures to be used by the United States Peace Corps (Peace Corps) in the collection and/or disposal of non...

  10. 10 CFR 205.350 - General purpose.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false General purpose. 205.350 Section 205.350 Energy DEPARTMENT OF ENERGY OIL ADMINISTRATIVE PROCEDURES AND SANCTIONS Electric Power System Permits and Reports....350 General purpose. The purpose of this rule is to establish a procedure for the Office of...

  11. 12 CFR 1703.31 - General purposes.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false General purposes. 1703.31 Section 1703.31 Banks and Banking OFFICE OF FEDERAL HOUSING ENTERPRISE OVERSIGHT, DEPARTMENT OF HOUSING AND URBAN... Legal Proceedings in Which OFHEO Is Not a Named Party § 1703.31 General purposes. The purposes of this...

  12. MT-ADRES: multi-threading on coarse-grained reconfigurable architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan

    2008-01-01

    The coarse-grained reconfigurable architecture ADRES (architecture for dynamically reconfigurable embedded systems) and its compiler offer high instruction-level parallelism (ILP) to applications by means of a sparsely interconnected array of functional units and register files. As high-ILP architectures achieve only low parallelism when executing partially sequential code segments, which is also known as Amdahl's law, this article proposes to extend ADRES to MT-ADRES (multi-threaded ADRES) to also exploit thread-level parallelism. On MT-ADRES architectures, the array can be partitioned...

  13. Servicing a globally broadcast interrupt signal in a multi-threaded computer

    Science.gov (United States)

    Attinella, John E.; Davis, Kristan D.; Musselman, Roy G.; Satterfield, David L.

    2015-12-29

    Methods, apparatuses, and computer program products for servicing a globally broadcast interrupt signal in a multi-threaded computer comprising a plurality of processor threads. Embodiments include an interrupt controller indicating in a plurality of local interrupt status locations that a globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include a thread determining that a local interrupt status location corresponding to the thread indicates that the globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include the thread processing one or more entries in a global interrupt status bit queue based on whether global interrupt status bits associated with the globally broadcast interrupt signal are locked. Each entry in the global interrupt status bit queue corresponds to a queued global interrupt.
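
    A loose, much-simplified sketch of the servicing pattern described (a per-thread status flag plus a lock-guarded global interrupt queue) follows; the names and fixed thread count are assumptions, and the mechanism in the record itself operates at the hardware/firmware level rather than in user-space C++.

        #include <atomic>
        #include <mutex>
        #include <queue>
        #include <vector>

        std::vector<std::atomic<bool>> local_irq_status(64);  // one "interrupt received" flag per thread
        std::mutex global_status_lock;                        // stands in for the global status-bit lock
        std::queue<int> global_irq_queue;                     // queued global interrupt identifiers

        void handle_global_interrupt(int /*irq_id*/) {}       // placeholder handler

        void service_interrupts(unsigned my_thread) {
            if (!local_irq_status[my_thread].exchange(false)) // no broadcast pending for this thread
                return;
            std::unique_lock<std::mutex> lock(global_status_lock, std::try_to_lock);
            if (!lock.owns_lock())                            // status bits "locked": another thread services
                return;
            while (!global_irq_queue.empty()) {               // drain the queued global interrupts
                handle_global_interrupt(global_irq_queue.front());
                global_irq_queue.pop();
            }
        }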

  14. Development of a General Purpose Gamification Framework

    OpenAIRE

    Vea, Eivind

    2016-01-01

    This report describes the design and implementation of a general purpose gamification framework developed in JavaScript on the Meteor platform. Gamification is described as the use of game elements in non-game contexts. The purpose is to encourage and change user behaviour. Examples of existing gamification use cases and frameworks are described. A demo game shows how a general purpose framework can be used.

  15. General purpose code for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wilcke, W.W.

    1983-01-01

    A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general purpose computer, operating on a vector describing the time dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations.

  16. General purpose computers in real time

    International Nuclear Information System (INIS)

    Biel, J.R.

    1989-01-01

    I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work that needs to be done, I believe that all indications are that the general purpose computers needed online should be available for the SSC and LHC machines. 2 figs

  17. A Platform-Independent Plugin for Navigating Online Radiology Cases.

    Science.gov (United States)

    Balkman, Jason D; Awan, Omer A

    2016-06-01

    Software methods that enable navigation of radiology cases on various digital platforms differ between handheld devices and desktop computers. This has resulted in poor compatibility of online radiology teaching files across mobile smartphones, tablets, and desktop computers. A standardized, platform-independent, or "agnostic" approach for presenting online radiology content was produced in this work by leveraging modern hypertext markup language (HTML) and JavaScript web software technology. We describe the design and evaluation of this software, demonstrate its use across multiple viewing platforms, and make it publicly available as a model for future development efforts.

  18. Towards Fast Reverse Time Migration Kernels using Multi-threaded Wavefront Diamond Tiling

    KAUST Repository

    Malas, T.

    2015-09-13

    Today's high-end multicore systems are characterized by a deep memory hierarchy, i.e., several levels of local and shared caches, with limited size and bandwidth per core. The ever-increasing gap between processor and memory speed will further exacerbate the problem and has led the scientific community to revisit numerical software implementations to better suit the underlying memory subsystem for performance (data reuse) as well as energy efficiency (data locality). The authors propose a novel multi-threaded wavefront diamond blocking (MWD) implementation in the context of stencil computations, which represent the core operation for seismic imaging in the oil industry. The stencil diamond formulation introduces temporal blocking for high data reuse in the upper cache levels. The wavefront optimization technique ensures data locality by allowing multiple threads to share common adjacent stencil points. Therefore, MWD is able to take up the aforementioned challenges by alleviating the cache size limitation and releasing pressure from the memory bandwidth. Performance comparisons are shown against the optimized 25-point stencil standard seismic imaging scheme using spatial and temporal blocking and demonstrate the effectiveness of MWD.
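
    To make the baseline concrete, the sketch below shows plain spatial blocking of a 3-point stencil across C++ threads with double buffering between time steps; the MWD scheme proposed in the paper adds temporal (diamond) blocking and wavefront scheduling on top of this, which is not reproduced here.

        #include <algorithm>
        #include <cstddef>
        #include <thread>
        #include <utility>
        #include <vector>

        // One spatially-blocked sweep of a 3-point stencil, split across threads.
        void sweep(std::vector<double>& out, const std::vector<double>& in, unsigned n_threads) {
            const std::size_t n = in.size();
            if (n < 3) return;
            std::vector<std::thread> pool;
            const std::size_t chunk = (n - 2 + n_threads - 1) / n_threads;
            for (unsigned t = 0; t < n_threads; ++t)
                pool.emplace_back([&, t] {
                    const std::size_t lo = 1 + std::size_t(t) * chunk;
                    const std::size_t hi = std::min(n - 1, lo + chunk);
                    for (std::size_t i = lo; i < hi; ++i)
                        out[i] = 0.5 * in[i] + 0.25 * (in[i - 1] + in[i + 1]);
                });
            for (auto& th : pool) th.join();
        }

        void time_loop(std::vector<double>& a, std::vector<double>& b, int steps, unsigned n_threads) {
            for (int s = 0; s < steps; ++s) {   // no temporal blocking: every step sweeps all of memory
                sweep(b, a, n_threads);
                std::swap(a, b);                // double buffering between time steps
            }
        }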

  19. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.

  20. Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225867; The ATLAS collaboration

    2017-01-01

    We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent ...

  1. Design, Implementation and Testing of a Tiny Multi-Threaded DNS64 Server

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-03-01

    Full Text Available DNS64 is going to be an important service (together with NAT64) in the upcoming years of the IPv6 transition, enabling clients that have only IPv6 addresses to reach servers that have only IPv4 addresses (the majority of servers on the Internet today). This paper describes the design, implementation and functional testing of MTD64, a flexible, easy to use, multi-threaded DNS64 proxy published as free software under the GPLv2 license. All the theoretical background is introduced, including the DNS message format, the operation of the DNS64 plus NAT64 solution, and the construction of IPv4-embedded IPv6 addresses. Our design decisions are fully disclosed, from the high level ones to the details. The implementation is introduced at a high level only, as the details can be found in the developer documentation. The most important parts of a thorough functional testing are included, as well as the results of a basic performance comparison with BIND.
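
    The IPv4-embedded IPv6 construction mentioned above follows RFC 6052: for a /96 prefix such as the well-known 64:ff9b::/96, the 32-bit IPv4 address simply occupies the last four bytes of the IPv6 address. A small self-contained illustration (not MTD64 code):

        #include <array>
        #include <cstdint>
        #include <cstdio>

        std::array<std::uint8_t, 16> embed_ipv4(const std::array<std::uint8_t, 12>& prefix96,
                                                std::uint32_t ipv4) {
            std::array<std::uint8_t, 16> v6{};
            for (int i = 0; i < 12; ++i) v6[i] = prefix96[i];     // bits 0..95: the NAT64 prefix
            v6[12] = static_cast<std::uint8_t>(ipv4 >> 24);       // bits 96..127: the IPv4 address
            v6[13] = static_cast<std::uint8_t>(ipv4 >> 16);
            v6[14] = static_cast<std::uint8_t>(ipv4 >> 8);
            v6[15] = static_cast<std::uint8_t>(ipv4);
            return v6;
        }

        int main() {
            // Well-known prefix 64:ff9b::/96 and the documentation address 192.0.2.1
            std::array<std::uint8_t, 12> wkp{0x00, 0x64, 0xff, 0x9b, 0, 0, 0, 0, 0, 0, 0, 0};
            auto v6 = embed_ipv4(wkp, (192u << 24) | (0u << 16) | (2u << 8) | 1u);
            for (int i = 0; i < 16; i += 2)                       // prints 0064:ff9b:...:c000:0201
                std::printf("%02x%02x%s", static_cast<unsigned>(v6[i]),
                            static_cast<unsigned>(v6[i + 1]), i < 14 ? ":" : "\n");
            return 0;
        }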

  2. Summary of JENDL-2 general purpose file

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Tsuneo [ed.]

    1984-06-15

    The general purpose file of the second version of Japanese Evaluated Nuclear Data Library (JENDL-2) was released in December 1982. Recently, descriptive data were added to JENDL-2 and at the same time the first revision of numerical data was performed. JENDL-2 (Rev.1) consists of the data for 89 nuclides and about 211,000 records in the ENDF/B-IV format. In this report, full listings of presently added descriptive data are given to summarize the JENDL-2 general purpose file. The 2200-m/sec and 14-MeV cross sections, resonance integrals, Maxwellian and fission spectrum averaged cross sections are given in a table. Average cross sections were also calculated in suitable energy intervals.

  3. Summary of JENDL-2 general purpose file

    International Nuclear Information System (INIS)

    Nakagawa, Tsuneo

    1984-06-01

    The general purpose file of the second version of Japanese Evaluated Nuclear Data Library (JENDL-2) was released in December 1982. Recently, descriptive data were added to JENDL-2 and at the same time the first revision of numerical data was performed. JENDL-2 (Rev.1) consists of the data for 89 nuclides and about 211,000 records in the ENDF/B-IV format. In this report, full listings of presently added descriptive data are given to summarize the JENDL-2 general purpose file. The 2200-m/sec and 14-MeV cross sections, resonance integrals, Maxwellian and fission spectrum averaged cross sections are given in a table. Average cross sections were also calculated in suitable energy intervals. (author)

  4. General-purpose radiographic and fluoroscopic table

    International Nuclear Information System (INIS)

    Ishizaki, Noritaka

    1982-01-01

    A new series of diagnostic tables, Model DT-KEL, was developed for general-purpose radiographic and fluoroscopic systems. Through several investigations, the table was constructed so that the basic techniques are general radiography and GI examination, with other techniques optionally added. The diagnostic tables comprise a full series of types for various purposes and are systematized with the surrounding equipment. A retractable grid mechanism was adopted for the first time for general use. The fine grids with a density of 57 lines per cm, adopted in KEL-2, reduced the X-ray dose by 16 percent. (author)

  5. General-purpose RFQ design program

    International Nuclear Information System (INIS)

    Wadlinger, E.A.

    1984-01-01

    We have written a general-purpose, radio-frequency quadrupole (RFQ) design program that allows maximum flexibility in picking design algorithms. This program optimizes the RFQ on any combination of design parameters while simultaneously satisfying mutually compatible, physically required constraint equations. It can be very useful for deriving various scaling laws for RFQs. This program has a friendly user interface in addition to checking the consistency of the user-defined requirements and is written to minimize the effort needed to incorporate additional constraint equations. We describe the program and present some examples

  6. Report of the general purpose detector group

    International Nuclear Information System (INIS)

    Barbaro-Galtieri, A.; Bartel, W.; Bulos, F.; Cool, R.; Hanson, G.; Koetz, U.; Kottahaus, R.; Loken, S.; Luke, D.; Rothenberg, A.

    1975-01-01

    A general purpose detector for PEP is described. The main components of this detector are a 1 meter radius, 15 kilogauss superconducting solenoidal magnet with drift chambers to detect and measure the momentum of charged particles, a liquid argon neutral detector and hadron calorimeter, and a system of Cherenkov and time-of-flight counters for identification of charged hadrons. A major consideration in the design of this detector was that it be flexible: the magnet coil and drift chambers form a core around which various apparatus for specialized detection can be placed.

  7. General Purpose Crate (GPC) for control applications

    International Nuclear Information System (INIS)

    Singh, Kundan; Munda, Deepak K.; Jain, Mamta; Archunan, M.; Barua, P.; Ajith Kumar, B.P.

    2011-01-01

    A General Purpose Crate (GPC) capable of handling digital and analog Inputs/Outputs signals has been developed at Inter University Accelerator Centre (IUAC), New Delhi, for accelerator control system applications. The system includes back-plane bus with on board plugged-in single board computer with PC104 and Ethernet interface, running Linux operating system. The bus control logic is designed on the back-plane pcb itself, making the system more rugged. The various types of digital and analog input/output modules can be plugged into the back plane bus randomly with standard euro connectors, which provides highly reliable and dust free contacts. Maximum eight modules can be inserted into the crate. The total power consumption for various types of modules and back-plane controller is approximately 50 watts. The multi-output DC power supply from COSEL has been used in the crate. The general purpose crate is software compatible with the CAMAC crates used in the accelerator control system. (author)

  8. High Resolution Modelling of the Congo River's Multi-Threaded Main Stem Hydraulics

    Science.gov (United States)

    Carr, A. B.; Trigg, M.; Tshimanga, R.; Neal, J. C.; Borman, D.; Smith, M. W.; Bola, G.; Kabuya, P.; Mushie, C. A.; Tschumbu, C. L.

    2017-12-01

    We present the results of a summer 2017 field campaign by members of the Congo River users Hydraulics and Morphology (CRuHM) project, and a subsequent reach-scale hydraulic modelling study on the Congo's main stem. Sonar bathymetry, ADCP transects, and water surface elevation data have been collected along the Congo's heavily multi-threaded middle reach, which exhibits complex in-channel hydraulic processes that are not well understood. To model the entire basin's hydrodynamics, these in-channel hydraulic processes must be parameterised since it is not computationally feasible to represent them explicitly. Furthermore, recent research suggests that relative to other large global rivers, in-channel flows on the Congo represent a relatively large proportion of total flow through the river-floodplain system. We therefore regard sufficient representation of in-channel hydraulic processes as a Congo River hydrodynamic research priority. To enable explicit representation of in-channel hydraulics, we develop a reach-scale (70 km), high resolution hydraulic model. Simulation of flow through individual channel threads provides new information on flow depths and velocities, and will be used to inform the parameterisation of a broader basin-scale hydrodynamic model. The basin-scale model will ultimately be used to investigate floodplain fluxes, flood wave attenuation, and the impact of future hydrological change scenarios on basin hydrodynamics. This presentation will focus on the methodology we use to develop a reach-scale bathymetric DEM. The bathymetry of only a small proportion of channel threads can realistically be captured, necessitating some estimation of the bathymetry of channels not surveyed. We explore different approaches to this bathymetry estimation, and the extent to which it influences hydraulic model predictions. The CRuHM project is a consortium comprising the Universities of Kinshasa, Rhodes, Dar es Salaam, Bristol, and Leeds, and is funded by Royal

  9. General-Purpose Software For Computer Graphics

    Science.gov (United States)

    Rogers, Joseph E.

    1992-01-01

    NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which provides the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.

  10. SRAC95; general purpose neutronics code system

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Tsuchihashi, Keichiro; Kaneko, Kunio.

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), diffusion calculation modules (TUD, CITATION) and two optional codes for fuel assembly and core burn-up calculations (newly developed ASMBURN, revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author)

  11. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), diffusion calculation modules (TUD, CITATION) and two optional codes for fuel assembly and core burn-up calculations (newly developed ASMBURN, revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  12. Multi-threaded algorithms for GPGPU in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00212700; The ATLAS collaboration

    2017-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPU. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significa...

  13. Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger

    Science.gov (United States)

    Conde Muíño, P.; ATLAS Collaboration

    2017-10-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPU. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will increase significantly with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPU as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and serial code remaining on the CPU; the number of GPGPU required, and the relative financial cost of the selected GPGPU. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
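
    As a rough, back-of-envelope illustration of the farm sizing implied by these numbers (not a figure from the paper): at a 100 kHz input rate and a mean processing time of about 250 ms per event, Little's law gives

        N_{\text{in flight}} \approx R_{\mathrm{L1}} \times \langle t_{\mathrm{proc}} \rangle = 10^{5}\,\mathrm{s^{-1}} \times 0.25\,\mathrm{s} = 2.5 \times 10^{4}

    i.e. on the order of 25,000 events being processed concurrently, which is why the relative speed of the CPU and GPGPU implementations translates directly into farm size and cost.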

  14. 7 CFR 2902.48 - General purpose household cleaners.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false General purpose household cleaners. 2902.48 Section... PROCUREMENT Designated Items § 2902.48 General purpose household cleaners. (a) Definition. Products designed... procurement preference for qualifying biobased general purpose household cleaners. By that date, Federal...

  15. 7 CFR 2902.37 - General purpose de-icers.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false General purpose de-icers. 2902.37 Section 2902.37... Items § 2902.37 General purpose de-icers. (a) Definition. Chemical products (e.g., salt, fluids) that... preference for qualifying biobased general purpose de-icers. By that date, Federal agencies that have the...

  16. 47 CFR 32.6124 - General purpose computers expense.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers expense. 32.6124... General purpose computers expense. This account shall include the costs of personnel whose principal job is the physical operation of general purpose computers and the maintenance of operating systems. This...

  17. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00100895; The ATLAS collaboration; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; van Gemmeren, Peter

    2017-01-01

    ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying ha...

  18. AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading

    CERN Document Server

    Leggett, Charles; The ATLAS collaboration; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; van Gemmeren, Peter

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we will report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying...

  19. 7 CFR 225.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 225.1 Section 225.1... AGRICULTURE CHILD NUTRITION PROGRAMS SUMMER FOOD SERVICE PROGRAM General § 225.1 General purpose and scope... primary purpose of the Program is to provide food service to children from needy areas during periods when...

  20. 7 CFR 253.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 253.1 Section 253.1... AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION ADMINISTRATION OF THE FOOD DISTRIBUTION PROGRAM FOR HOUSEHOLDS ON INDIAN RESERVATIONS § 253.1 General purpose and scope. This part describes the terms...

  1. 7 CFR 277.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 277.1 Section 277.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF... AGENCIES § 277.1 General purpose and scope. (a) Purpose. This part establishes uniform requirements for the...

  2. 7 CFR 245.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 245.1 Section 245.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF... SCHOOLS § 245.1 General purpose and scope. (a) This part established the responsibilities of State...

  3. 7 CFR 285.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 285.1 Section 285.1... COMMONWEALTH OF PUERTO RICO § 285.1 General purpose and scope. This part describes the general terms and... government of the Commonwealth of Puerto Rico for the purpose of designing and conducting a nutrition...

  4. 7 CFR 220.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 220.1 Section 220.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS SCHOOL BREAKFAST PROGRAM § 220.1 General purpose and scope. This part...

  5. 7 CFR 1485.10 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false General purpose and scope. 1485.10 Section 1485.10... FOREIGN MARKETS FOR AGRICULTURAL COMMODITIES Market Access Program § 1485.10 General purpose and scope. (a.../Market Access Program (EIP/MAP). It also establishes the general terms and conditions applicable to MAP...

  6. 7 CFR 281.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 281.1 Section 281.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF... RESERVATIONS § 281.1 General purpose and scope. (a) These regulations govern the operation of the Food Stamp...

  7. 7 CFR 215.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 215.1 Section 215.1... AGRICULTURE CHILD NUTRITION PROGRAMS SPECIAL MILK PROGRAM FOR CHILDREN § 215.1 General purpose and scope. This part announces the policies and prescribes the general regulations with respect to the Special Milk...

  8. 7 CFR 248.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 248.1 Section 248.1... AGRICULTURE CHILD NUTRITION PROGRAMS WIC FARMERS' MARKET NUTRITION PROGRAM (FMNP) General § 248.1 General purpose and scope. This part announces regulations under which the Secretary of Agriculture shall carry...

  9. 46 CFR 7.1 - General purpose of boundary lines.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false General purpose of boundary lines. 7.1 Section 7.1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY PROCEDURES APPLICABLE TO THE PUBLIC BOUNDARY LINES General § 7.1 General purpose of boundary lines. The lines in this part delineate the application of the...

  10. 7 CFR 246.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 246.1 Section 246.1... General § 246.1 General purpose and scope. This part announces regulations under which the Secretary of... health by reason of inadequate nutrition or health care, or both. The purpose of the Program is to...

  11. 7 CFR 250.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 250.1 Section 250.1... AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION DONATION OF FOODS FOR USE IN THE UNITED STATES, ITS TERRITORIES AND POSSESSIONS AND AREAS UNDER ITS JURISDICTION General § 250.1 General purpose and...

  12. 7 CFR 1728.10 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false General purpose and scope. 1728.10 Section 1728.10 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE ELECTRIC STANDARDS AND SPECIFICATIONS FOR MATERIALS AND CONSTRUCTION § 1728.10 General purpose and...

  13. 7 CFR 251.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 251.1 Section 251.1... AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION THE EMERGENCY FOOD ASSISTANCE PROGRAM § 251.1 General purpose and scope. This part announces the policies and prescribes the regulations necessary to...

  14. 7 CFR 235.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 235.1 Section 235.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS STATE ADMINISTRATIVE EXPENSE FUNDS § 235.1 General purpose and scope...

  15. 7 CFR 226.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 226.1 Section 226.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS CHILD AND ADULT CARE FOOD PROGRAM General § 226.1 General purpose and...

  16. 21 CFR 864.4010 - General purpose reagent.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false General purpose reagent. 864.4010 Section 864.4010 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES HEMATOLOGY AND PATHOLOGY DEVICES Specimen Preparation Reagents § 864.4010 General purpose...

  17. 47 CFR 32.2124 - General purpose computers.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers. 32.2124 Section 32... General purpose computers. (a) This account shall include the original cost of computers and peripheral... financial, statistical, or other business analytical reports; preparation of payroll, customer bills, and...

  18. Standalone General Purpose Data Logger Design and Implementation

    African Journals Online (AJOL)

    This paper describes the design of a general purpose data logger that is compatible with a variety of transducers, potentially permitting the measurement and recording of a wide range of phenomena. The recorded data can be retrieved to a PC via an RS-232 serial port. The standalone general purpose data logger ...

  19. A general purpose code for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wilcke, W.W.; Rochester Univ., NY

    1984-01-01

    A general-purpose computer code MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general purpose computer, operating on a vector describing the time dependent state of the system under simulation. The instruction set of the 'computer' is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations. (orig.)

  20. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4

    Science.gov (United States)

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-01

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores directly the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations, three square fields and one asymmetric rectangular field, were chosen for dose calculation validation to test field size and symmetry effects. Good agreement in terms of the gamma-index formalism, for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high-dose-gradient regions, using the distance-to-agreement (DTA) concept, also showed satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in build-up and penumbra regions. As regards calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential
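
    The heart of a virtual source model of this kind is drawing primary-particle properties from compact stored spectra instead of replaying individual phase-space particles. The snippet below is a generic inverse-CDF sampling sketch of that step, not the authors' reconstruction algorithm; the binned spectrum values are invented for illustration.

        # Generic sketch: sample photon energies from a tabulated spectrum (inverse CDF).
        # The binned spectrum below is illustrative, not taken from the IAEA phase space.
        import random
        import bisect

        edges  = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]     # energy bin edges in MeV
        counts = [5.0, 20.0, 30.0, 25.0, 15.0, 5.0]      # relative weight per bin

        # Build the cumulative distribution over bins.
        total = sum(counts)
        cdf, running = [], 0.0
        for c in counts:
            running += c / total
            cdf.append(running)

        def sample_energy(rng=random):
            """Draw one energy: pick a bin from the CDF, then sample uniformly inside it."""
            u = rng.random()
            i = bisect.bisect_left(cdf, u)
            lo, hi = edges[i], edges[i + 1]
            return lo + rng.random() * (hi - lo)

        energies = [sample_energy() for _ in range(5)]
        print(energies)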

  1. 7 CFR 227.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... management training of school foodservice personnel, and (c) the conduct of nutrition education activities in... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS NUTRITION EDUCATION AND TRAINING PROGRAM General § 227.1 General purpose...

  2. On the System and Engineering Design of the General Purpose ...

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 13; Issue 5. On the System and Engineering Design of the General Purpose Electronic Digital Computer at TIFR. Rangaswamy Narasimhan. Classics Volume 13 Issue 5 May 2008 pp 490-501 ...

  3. Geographical parthenogenesis: General purpose genotypes and frozen niche variation

    DEFF Research Database (Denmark)

    Vrijenhoek, Robert C.; Parker, Dave

    2009-01-01

    hypotheses concerning the evolution of niche breadth in asexual species - the "general-purpose genotype" (GPG) and "frozen niche-variation" (FNV) models. The two models are often portrayed as mutually exclusive, respectively viewing clonal lineages as generalists versus specialists. Nonetheless...

  4. Efficient probabilistic model checking on general purpose graphic processors

    NARCIS (Netherlands)

    Bosnacki, D.; Edelkamp, S.; Sulewski, D.; Pasareanu, C.S.

    2009-01-01

    We present algorithms for parallel probabilistic model checking on general purpose graphic processing units (GPGPUs). For this purpose we exploit the fact that some of the basic algorithms for probabilistic model checking rely on matrix vector multiplication. Since this kind of linear algebraic
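
    The reduction of probabilistic model checking to repeated matrix-vector multiplication mentioned here is the kernel that maps naturally onto a GPGPU. The sketch below shows that kernel sequentially in Python on a CSR-like sparse representation; it is an illustration of the operation, not the authors' GPU implementation, and the small transition matrix is made up.

        # One step of probability propagation, x' = P x, with P stored sparsely.
        # The 3-state transition matrix is a made-up example.

        # CSR-style storage: for each row, a list of (column, probability) pairs.
        P = [
            [(0, 0.5), (1, 0.5)],          # from state 0
            [(1, 0.9), (2, 0.1)],          # from state 1
            [(2, 1.0)],                    # from state 2 (absorbing target)
        ]

        def spmv(matrix, x):
            """Sparse matrix-vector product, the core kernel parallelized on a GPGPU."""
            y = [0.0] * len(matrix)
            for row, entries in enumerate(matrix):
                acc = 0.0
                for col, p in entries:
                    acc += p * x[col]
                y[row] = acc
            return y

        # Probability of reaching state 2 within k steps, starting from state 0:
        x = [0.0, 0.0, 1.0]                # indicator of the target state
        for _ in range(10):                # k = 10 bounded-reachability iterations
            x = spmv(P, x)
        print(x[0])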

  5. Weldability of general purpose heat source new-process iridium

    International Nuclear Information System (INIS)

    Kanne, W.R.

    1987-01-01

    Weldability tests on General Purpose Heat Source (GPHS) iridium capsules showed that a new iridium fabrication process reduced susceptibility to underbead cracking. Seventeen capsules were welded (a total of 255 welds) in four categories and the number of cracks in each weld was measured

  6. General-purpose heat source development. Phase I: design requirements

    International Nuclear Information System (INIS)

    Snow, E.C.; Zocher, R.W.

    1978-09-01

    Studies have been performed to determine the necessary design requirements for a 238PuO2 General-Purpose Heat Source (GPHS). Systems and missions applications, as well as accident conditions, were considered. The results of these studies, along with the recommended GPHS design requirements, are given in this report

  7. General-Purpose Data Containers for Science and Engineering

    International Nuclear Information System (INIS)

    2015-01-01

    In 2012 the SG38 international committee was formed to develop a modern structure to replace the ENDF-6 format for storing evaluated nuclear reaction data on a computer system. This committee divided the project into seven tasks. One of these tasks, the design of General-Purpose Data Containers (GPDCs), is described in this article. What type of data does SG38 need to store and why is the task called General-Purpose Data Containers? The most common types of data in an evaluated nuclear reaction database are representations of physical functions in tabulated forms. There is also a need to store 1-dimensional functions using truncated Legendre or polynomial (or other) expansions. The phrase General-Purpose implies that the containers are to be designed to store generic forms of tabulated data rather than one for each physical function. Also, where possible, it would be beneficial to design containers that can store data forms not currently used in evaluated nuclear databases, or at least be easily extended. In addition to containers for storing physical functions as tabulated data, other types of containers are needed. There exists a desire within SG38 to support the storage of documentation at various levels within an evaluated file. Containers for storing non-functional data (e.g., a list of numbers) as well as units and labels for axes are also needed. Herein, containers for storing physical functions are called functional containers. One of the goals for the general-purpose data containers task is to design containers that will be useful to other scientific and engineering applications. To meet this goal, task members should think outside of the immediate needs of evaluated nuclear data to ensure that the containers are general-purpose rather than simply repackaged versions of existing containers. While the examples in this article may be specific to nuclear reaction data, it is hoped that the end product will be useful for other applications. To this end, some

  8. Speed Control of General Purpose Engine with Electronic Governor

    Science.gov (United States)

    Sawut, Umerujan; Tohti, Gheyret; Takigawa, Buso; Tsuji, Teruo

    This paper presents a general-purpose engine speed control system with an electronic governor, intended to improve on the current system with a mechanical governor, which shows unstable characteristics under changes of mechanical friction or A/F ratio (air/fuel ratio). The control system faces two problems: for cost reasons the only feedback signal is the crank angle, and the controlled object, a general-purpose engine, is strongly nonlinear. To overcome these problems, a system model is introduced for dynamic estimation of the amount of air flow, and a robust controller is designed. That is, the proposed system includes a robust sliding-mode controller using only the crank angle as feedback, where a genetic algorithm is applied for the controller design. Simulations and experiments with MATLAB/Simulink are performed to show the effectiveness of our proposal.

  9. A Small Acoustic Goniometer for General Purpose Research.

    Science.gov (United States)

    Pook, Michael L; Loo, Sin Ming

    2016-04-29

    Understanding acoustic events and monitoring their occurrence is a useful aspect of many research projects. In particular, acoustic goniometry allows researchers to determine the source of an event based solely on the sound it produces. The vast majority of acoustic goniometry research projects used custom hardware targeted to the specific application under test. Unfortunately, due to the wide range of sensing applications, a flexible general purpose hardware/firmware system does not exist for this purpose. This article focuses on the development of such a system which encourages the continued exploration of general purpose hardware/firmware and lowers barriers to research in projects requiring the use of acoustic goniometry. Simulations have been employed to verify system feasibility, and a complete hardware implementation of the acoustic goniometer has been designed and field tested. The results are reported, and suggested areas for improvement and further exploration are discussed.

  10. Space shuttle general purpose computers (GPCs) (current and future versions)

    Science.gov (United States)

    1988-01-01

    Current and future versions of general purpose computers (GPCs) for space shuttle orbiters are represented in this frame. The two boxes on the left (AP101B) represent the current GPC configuration, with the input-output processor at far left and the central processing unit (CPU) at its side. The upgraded version combines both elements in a single unit (far right, AP101S).

  11. New Generation General Purpose Computer (GPC) compact IBM unit

    Science.gov (United States)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  12. Survey of advanced general-purpose software for robot manipulators

    International Nuclear Information System (INIS)

    Latombe, J.C.

    1983-01-01

    Computer-controlled sensor-based robots will become more and more common in industry. This paper attempts to survey the main trends of the development of advanced general-purpose software for robot manipulators. It is intended to make clear that robots are not only mechanical devices. They are truly programmable machines, and their programming, which occurs in an imperfectly modelled world, is somewhat different from conventional computer programming. (orig.)

  13. Development of General Purpose Data Acquisition Shell (GPDAS)

    International Nuclear Information System (INIS)

    Chung, Y.; Kim, K.

    1995-01-01

    This note is intended as an abbreviated introduction to the concept and the structure of the General Purpose Data Acquisition Shell (GPDAS) and assumes the reader has a certain level of familiarity with programming in general. The structure of the following sections consists of brief explanations of the concepts and commands of GPDAS, followed by several examples. Some of these are tabulated in the appendices at the end of this note

  14. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

    This document describes a general-purpose multimedia data-set to be used in cross-media machine learning problems. In more detail, we describe the genre taxonomy applied at http://www.garageband.com, from where the data-set was collected, and how the taxonomy has been fused into a more human...... understandable taxonomy. Finally, a description of various features extracted from both the audio and the text is presented....

  15. ENDF/B-4 General Purpose File 1974

    International Nuclear Information System (INIS)

    Schwerer, O.

    1980-04-01

    This document summarizes contents and documentation of the 1974 version of the General Purpose File of the ENDF/B Library maintained by the National Nuclear Data Center (NNDC) at the Brookhaven National Laboratory, USA. The Library contains numerical neutron reaction data for 90 isotopes or elements. The entire Library or selective retrievals from it can be obtained on magnetic tape from the IAEA Nuclear Data Section. (author)

  16. The architecture of Newton, a general-purpose dynamics simulator

    Science.gov (United States)

    Cremer, James F.; Stewart, A. James

    1989-01-01

    The architecture for Newton, a general-purpose system for simulating the dynamics of complex physical objects, is described. The system automatically formulates and analyzes equations of motion, and automatically modifies this system of equations when necessitated by changes in the kinematic relationships between objects. Impact and temporary contact are handled, although only using simple models. User-directed influence of simulations is achieved using Newton's module, which can be used to experiment with the control of many-degree-of-freedom articulated objects.

  17. General-purpose heat source development. Phase II: conceptual designs

    International Nuclear Information System (INIS)

    Snow, E.C.; Zocher, R.W.; Grinberg, I.M.; Hulbert, L.E.

    1978-11-01

    Basic geometric module shapes and fuel arrays were studied to determine how well they could be expected to meet the General Purpose Heat Source (GPHS) design requirements. Seven conceptual designs were selected, detailed drawings produced, and these seven concepts analyzed. Three of these design concepts were selected as GPHS Trial Designs to be reanalyzed in more detail and tested. The geometric studies leading to the selection of the seven conceptual designs, the analyses of these designs, and the selection of the three trial designs are discussed

  18. Installation of new Generation General Purpose Computer (GPC) compact unit

    Science.gov (United States)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  19. Foam: A general purpose Monte Carlo cellular algorithm

    International Nuclear Information System (INIS)

    Jadach, S.

    2003-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is, small maximum weight or variance of the MC weight, is achieved by means of dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or Cartesian products of them. The next cell to be divided and the position/direction of the division hyperplane are chosen by the algorithm, which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution
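
    A minimal caricature of the cellular strategy described here: split the integration domain recursively into cells, always dividing the cell with the worst maximum-to-average weight ratio, then integrate by stratified sampling over the resulting grid. The 1-D sketch below only illustrates that idea and is not the Foam program; the integrand and all tuning numbers are arbitrary.

        # Simplified 1-D "foam": adaptively split cells of [0, 1] where the integrand
        # is most peaked, then integrate by stratified sampling over the cells.
        import random

        def f(x):                      # example integrand with a narrow peak (illustrative)
            return 1.0 / (1e-3 + (x - 0.3) ** 2)

        def cell_weight(a, b, n=200):
            """Estimate max and average of f on [a, b] from n uniform probes."""
            ys = [f(a + random.random() * (b - a)) for _ in range(n)]
            return max(ys), sum(ys) / n

        def build_foam(n_cells=32):
            cells = [(0.0, 1.0)]
            while len(cells) < n_cells:
                # Split the cell with the worst max/average ratio (largest weight spread).
                ratios = []
                for a, b in cells:
                    mx, av = cell_weight(a, b)
                    ratios.append(mx / av)
                i = ratios.index(max(ratios))
                a, b = cells.pop(i)
                m = 0.5 * (a + b)
                cells += [(a, m), (m, b)]
            return cells

        def integrate(cells, n_per_cell=500):
            total = 0.0
            for a, b in cells:
                s = sum(f(a + random.random() * (b - a)) for _ in range(n_per_cell))
                total += (b - a) * s / n_per_cell
            return total

        print(integrate(build_foam()))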

  20. Using general-purpose compression algorithms for music analysis

    DEFF Research Database (Denmark)

    Louboutin, Corentin; Meredith, David

    2016-01-01

    General-purpose compression algorithms encode files as dictionaries of substrings with the positions of these strings’ occurrences. We hypothesized that such algorithms could be used for pattern discovery in music. We compared LZ77, LZ78, Burrows–Wheeler and COSIATEC on classifying folk song...... in the input data, COSIATEC outperformed LZ77 with a mean F1 score of 0.123, compared with 0.053 for LZ77. However, when the music was processed a voice at a time, the F1 score for LZ77 more than doubled to 0.124. We also discovered a significant correlation between compression factor and F1 score for all...

  1. How General-Purpose can a GPU be?

    Directory of Open Access Journals (Sweden)

    Philip Machanick

    2015-12-01

    Full Text Available The use of graphics processing units (GPUs) in general-purpose computation (GPGPU) is a growing field. GPU instruction sets, while implementing a graphics pipeline, draw from a range of single instruction multiple datastream (SIMD) architectures characteristic of the heyday of supercomputers. Yet only one of these SIMD instruction sets has been of application on a wide enough range of problems to survive the era when the full range of supercomputer design variants was being explored: vector instructions. This paper proposes a reconceptualization of the GPU as a multicore design with minimal exotic modes of parallelism so as to make GPGPU truly general.

  2. High vacuum general purpose scattering chamber for nuclear reaction study

    International Nuclear Information System (INIS)

    Suresh Kumar; Ojha, S.C.

    2003-01-01

    To study nuclear reactions induced by beams from medium-energy accelerators, one of the most common facilities required is a scattering chamber. In the scattering chamber, the projectile collides with the target nucleus and the scattered reaction products are detected with various types of nuclear detectors at different angles with respect to the beam. The experiments are performed under high vacuum to minimize background reactions and the energy losses of the charged particles. To make the chamber general purpose, various experimental requirements are incorporated into it. Changing targets and changing the angles of the detectors while under vacuum are the most desired features. Other features, such as ascertaining the beam spot size and position on the target, minimizing background counts with a proper beam dump, and accurately positioning the detectors as planned, are some of the important requirements

  3. General-purpose event generators for LHC physics

    CERN Document Server

    Buckley, Andy; Gieseke, Stefan; Grellscheid, David; Hoche, Stefan; Hoeth, Hendrik; Krauss, Frank; Lonnblad, Leif; Nurse, Emily; Richardson, Peter; Schumann, Steffen; Seymour, Michael H.; Sjostrand, Torbjorn; Skands, Peter; Webber, Bryan

    2011-01-01

    We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard-scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond-Standard-Model processes. We describe the principal features of the ARIADNE, Herwig++, PYTHIA 8 and SHERPA generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators ...

  4. Incremental and developmental perspectives for general-purpose learning systems

    Directory of Open Access Journals (Sweden)

    Fernando Martínez-Plumed

    2017-02-01

    Full Text Available The stupefying success of Artificial Intelligence (AI) for specific problems, from recommender systems to self-driving cars, has not yet been matched with similar progress in general AI systems, coping with a variety of (different) problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. It presents a declarative general-purpose learning system and a developmental and lifelong approach for knowledge acquisition, consolidation and forgetting. It also analyses the use of more ability-oriented evaluation techniques for AI evaluation and provides further insight for the understanding of the concepts of development and incremental learning in AI systems.

  5. General purpose heat source task group. Final report

    International Nuclear Information System (INIS)

    1979-01-01

    The results of thermal analyses and impact tests on a modified design of a 238Pu-fueled general purpose heat source (GPHS) for spacecraft power supplies are presented. This work was performed to establish the safety of a heat source with pyrolytic graphite insulator shells located either inside or outside the graphite impact shell. This safety is dependent on the degree of aerodynamic heating of the heat source during reentry and on the ability of the heat source capsule to withstand impact after reentry. Analysis of wind tunnel and impact test data results in a recommended GPHS design which should meet all temperature and safety requirements. Further wind tunnel tests, drop tests, and impact tests are recommended to verify the safety of this design

  6. Using the general-purpose reactivity indicator: challenging examples.

    Science.gov (United States)

    Anderson, James S M; Melin, Junia; Ayers, Paul W

    2016-03-01

    We elucidate the regioselectivity of nucleophilic attack on substituted benzenesulfonates, quinolines, and pyridines using a general-purpose reactivity indicator (GPRI) for electrophiles. We observe that the GPRI is most accurate when the incoming nucleophile resembles a point charge. We further observe that the GPRI often chooses reactive "dead ends" as the most reactive sites as well as sterically hindered reactive sites. This means that care must be taken to remove sites that are inherently unreactive. Generally, among sites where reactions actually occur, the GPRI identifies the sites in the molecule that lead to the kinetically favored product(s). Furthermore, the GPRI can discern which sites react with hard reagents and which sites react with soft reagents. Because it is currently impossible to use the mathematical framework of conceptual DFT to identify sterically inaccessible sites and reactive dead ends, the GPRI is primarily useful as an interpretative, not a predictive, tool.

  7. General-purpose parallel simulator for quantum computing

    International Nuclear Information System (INIS)

    Niwa, Jumpei; Matsumoto, Keiji; Imai, Hiroshi

    2002-01-01

    With current technologies, it seems to be very difficult to implement quantum computers with many qubits. It is therefore of importance to simulate quantum algorithms and circuits on the existing computers. However, for a large-size problem, the simulation often requires more computational power than is available from sequential processing. Therefore, simulation methods for parallel processors are required. We have developed a general-purpose simulator for quantum algorithms/circuits on the parallel computer (Sun Enterprise4500). It can simulate algorithms/circuits with up to 30 qubits. In order to test efficiency of our proposed methods, we have simulated Shor's factorization algorithm and Grover's database search, and we have analyzed robustness of the corresponding quantum circuits in the presence of both decoherence and operational errors. The corresponding results, statistics, and analyses are presented in this paper
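
    The compute- and memory-intensive core of such a simulator is applying gates to an explicit state vector of 2^n amplitudes. The sketch below shows that core operation sequentially with NumPy; it is a small illustration, not the authors' parallel simulator.

        # Tiny state-vector simulator: apply a single-qubit gate to an n-qubit register.
        # Sequential illustration of the core operation parallelized in the paper.
        import numpy as np

        def apply_1q_gate(state, gate, target, n_qubits):
            """Apply a 2x2 unitary 'gate' to qubit 'target' of an n-qubit state vector."""
            psi = state.reshape([2] * n_qubits)            # one axis per qubit
            psi = np.moveaxis(psi, target, 0)              # bring target qubit to front
            psi = np.tensordot(gate, psi, axes=([1], [0])) # contract gate with that axis
            psi = np.moveaxis(psi, 0, target)
            return psi.reshape(-1)

        n = 3
        state = np.zeros(2 ** n, dtype=complex)
        state[0] = 1.0                                     # |000>
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

        for q in range(n):                                 # build a uniform superposition
            state = apply_1q_gate(state, H, q, n)

        print(np.round(np.abs(state) ** 2, 3))             # each basis state: probability 1/8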

  8. The ATLAS Trigger Algorithms for General Purpose Graphics Processor Units

    CERN Document Server

    Tavares Delgado, Ademar; The ATLAS collaboration

    2016-01-01

    We present the ATLAS Trigger algorithms developed to exploit General-Purpose Graphics Processor Units (GPGPUs). ATLAS is a particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system has two levels: hardware-based Level-1 and the High Level Trigger, implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will increase with future LHC upgrades. GPGPUs are being evaluated as a potential solution for accelerating the trigger algorithms. Key factors determining the potential benefit of this new technology are the relative execution speedup, the number of GPUs required and the relative financial cost of the selected GPU. We have developed a trigger demonstrator which includes algorithms for reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Cal...

  9. The RHIC general purpose multiplexed analog to digital converter system

    International Nuclear Information System (INIS)

    Michnoff, R.

    1995-01-01

    A general-purpose multiplexed analog-to-digital converter system is currently under development to support acquisition of analog signals for the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. The system consists of a custom intelligent VME-based controller module (V113) and a 14-bit, 64-channel multiplexed A/D converter module (V114). The design features two independent scan groups, where one scan group is capable of acquiring 64 channels at 60 Hz, concurrently with the second scan group acquiring data at an aggregate rate of up to 80 k samples/second. An interface to the RHIC serially encoded event line is used to synchronize acquisition. Data is stored in a circular static RAM buffer on the controller module, then transferred to a commercial VMEbus CPU board and higher-level workstations for plotting, report generation, analysis and storage

  10. Using a cognitive architecture for general purpose service robot control

    Science.gov (United States)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

    A humanoid service robot equipped with a set of simple action skills, including navigating, grasping, and recognising objects or people, among others, is considered in this paper. Using those skills, the robot should complete a voice command expressed in natural language encoding a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner has been used to decide which skills to activate, or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner, selecting which action the robot should complete and addressing it towards the goal. Our proposal allows new goals to be included for the robot just by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general-purpose service robot.

  11. Foam: A general purpose Monte Carlo cellular algorithm

    International Nuclear Information System (INIS)

    Jadach, S.

    2002-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is small maximum weight or variance of the MC weight is achieved by means of dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or a Cartesian product of them. The grid of cells, called 'foam', is produced in the process of the binary split of the cells. The choice of the next cell to be divided and the position/direction of the division hyperplane is driven by the algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution. (author)

  12. Foam A General purpose Monte Carlo Cellular Algorithm

    CERN Document Server

    Jadach, Stanislaw

    2002-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is, small maximum weight or variance of the MC weight, is achieved by means of dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or a Cartesian product of them. The grid of cells, "foam", is produced in the process of the binary split of the cells. The next cell to be divided and the position/direction of the division hyperplane are chosen by the algorithm, which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution.

  13. General-purpose software for science technology calculation

    International Nuclear Information System (INIS)

    Aikawa, Hiroshi

    1999-01-01

    We have developed several general-purpose software packages for parallel processing in science and technology calculations. This paper reports on six of them: the STA (Seamless Thinking Aid) base software, a parallel numerical computation library, grid-generation software for parallel computers, a real-time visualization system, a parallel benchmark test system, and an object-oriented parallel programming method. STA is user-interface software that provides a total environment for parallel programming, a network computing environment for various parallel computers, and a desktop computing environment via the Web. Some examples using the above software are explained. One is a simultaneous parallel calculation of both the flow and the structural analysis of a supersonic transport for its design. Another is various kinds of parallel computations for nuclear fusion, such as a molecular dynamics calculation and a calculation of reactor structure and fluid. The software is publicly available at the home page {http://guide.tokai.jaeri.go.jp/ccse/}. (S.Y.)

  14. The Efficiency of Linda for General Purpose Scientific Programming

    Directory of Open Access Journals (Sweden)

    Timothy G. Mattson

    1994-01-01

    Full Text Available Linda (Linda is a registered trademark of Scientific Computing Associates, Inc.) is a programming language for coordinating the execution and interaction of processes. When combined with a language for computation (such as C or Fortran), the resulting hybrid language can be used to write portable programs for parallel and distributed multiple instruction multiple data (MIMD) computers. The Linda programming model is based on operations that read, write, and erase a virtual shared memory. It is easy to use, and lets the programmer code in a very expressive, uncoupled programming style. These benefits, however, are of little value unless Linda programs execute efficiently. The goal of this article is to demonstrate that Linda programs are efficient, making Linda an effective general-purpose tool for programming MIMD parallel computers. Two arguments for Linda's efficiency are given; the first is based on Linda's implementation and the second on a range of case studies spanning a complete set of parallel algorithm classes.
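
    Linda's coordination model rests on a few tuple-space operations: out deposits a tuple, in withdraws a tuple matching a template, and rd reads one without removing it. The sketch below is a toy, thread-based tuple space implementing out and a blocking in; it only illustrates the model and is unrelated to the commercial Linda implementation discussed in the article.

        # Toy tuple space with Linda-style out/in, used by worker threads.
        import threading

        class TupleSpace:
            def __init__(self):
                self._tuples = []
                self._cond = threading.Condition()

            def out(self, tup):                       # deposit a tuple
                with self._cond:
                    self._tuples.append(tup)
                    self._cond.notify_all()

            def _match(self, pattern, tup):
                return len(pattern) == len(tup) and all(
                    p is None or p == v for p, v in zip(pattern, tup))

            def in_(self, pattern):                   # withdraw a matching tuple (blocking)
                with self._cond:
                    while True:
                        for t in self._tuples:
                            if self._match(pattern, t):
                                self._tuples.remove(t)
                                return t
                        self._cond.wait()

        def worker(space, wid):
            while True:
                tag, value = space.in_(("task", None))
                if value is None:                     # poison pill: stop this worker
                    break
                space.out(("result", wid, value * value))

        space = TupleSpace()
        threads = [threading.Thread(target=worker, args=(space, w)) for w in range(3)]
        for t in threads: t.start()
        for x in range(9):
            space.out(("task", x))
        for _ in threads:
            space.out(("task", None))                 # one stop marker per worker
        for t in threads: t.join()
        print(sorted(space.in_(("result", None, None))[2] for _ in range(9)))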

  15. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data, such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity and thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and which compression algorithm meets all requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package, one that would also be applicable to other software packages with similar data compression needs.
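
    The evaluation question posed here (how to compare candidate compression algorithms on representative byte streams) can be prototyped with the general-purpose compressors in the Python standard library. This is a generic benchmarking sketch, not the procedure actually used for CDF; the sample data is an arbitrary repetitive buffer.

        # Compare general-purpose compressors on a sample byte buffer:
        # compression ratio, (de)compression time, and losslessness check.
        import bz2, lzma, time, zlib

        def benchmark(name, compress, decompress, data):
            t0 = time.perf_counter()
            packed = compress(data)
            t1 = time.perf_counter()
            restored = decompress(packed)
            t2 = time.perf_counter()
            assert restored == data                      # must be fully reconstructible
            ratio = len(data) / len(packed)
            print(f"{name:6s} ratio={ratio:6.2f} "
                  f"compress={t1 - t0:.4f}s decompress={t2 - t1:.4f}s")

        # Sample data: a repetitive byte sequence standing in for array-like content.
        data = bytes(range(256)) * 4000

        benchmark("zlib", zlib.compress, zlib.decompress, data)
        benchmark("bz2",  bz2.compress,  bz2.decompress,  data)
        benchmark("lzma", lzma.compress, lzma.decompress, data)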

  16. A PLC platform-independent structural analysis on FBD programs for digital reactor protection systems

    International Nuclear Information System (INIS)

    Jung, Sejin; Yoo, Junbeom; Lee, Young-Jun

    2017-01-01

    Highlights: • FBD has been widely used to implement safety-critical software for PLC-based systems. • Such safety-critical software should be developed strictly in accordance with safety programming guidelines. • Existing tools offer no PLC-platform-independent rules with specific links to the higher-level guidelines of NUREG/CR-6463. • This paper proposes a set of rules on the structure of FBD programs that provides specific links to those guidelines. • This paper also provides the CASE tool 'FBD Checker' for analyzing the structure of FBD programs. - Abstract: FBD (function block diagram) has been widely used to implement safety-critical software for PLC (programmable logic controller)-based digital nuclear reactor protection systems. The software should be developed strictly in accordance with safety programming guidelines such as NUREG/CR-6463. The software engineering tools of PLC vendors can present structural analyses of FBD programs, but the specific rules pertaining to the guidelines are enclosed within the commercial tools, and their links to the guidelines are not clearly communicated. This paper proposes a set of rules on the structure of FBD programs in accordance with the guidelines, and we develop an automatic analysis tool for FBD programs written in the PLCopen TC6 format. With the proposed tool, any FBD program that is transformed into this open format can be analyzed independently of the PLC platform. We consider a case study on FBD programs obtained from a preliminary version of a Korean nuclear power plant, and we demonstrate the effectiveness and potential of the proposed rules and analysis tool.
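
    As an illustration of the kind of PLC-platform-independent structural check such a tool can perform, the sketch below parses a PLCopen TC6 XML export with the standard library and counts function-block instances per POU against a limit. The tag names (pou, block) follow the commonly used TC6 layout but are assumptions here, and both the rule and its threshold are invented examples rather than the rules proposed in the paper.

        # Sketch: count FBD block instances per POU in a PLCopen TC6 XML export and
        # flag POUs above a (hypothetical) complexity threshold. Tag names assume the
        # usual TC6 layout (pou/body/FBD/block); adjust for a specific vendor export.
        import sys
        import xml.etree.ElementTree as ET

        MAX_BLOCKS_PER_POU = 50          # invented threshold for illustration

        def local(tag):
            """Strip the XML namespace so the check tolerates schema-version prefixes."""
            return tag.split('}', 1)[-1]

        def check(path):
            root = ET.parse(path).getroot()
            for pou in (e for e in root.iter() if local(e.tag) == "pou"):
                name = pou.get("name", "<unnamed>")
                blocks = [e for e in pou.iter() if local(e.tag) == "block"]
                status = "OK" if len(blocks) <= MAX_BLOCKS_PER_POU else "TOO COMPLEX"
                print(f"{name}: {len(blocks)} function blocks -> {status}")

        if __name__ == "__main__":
            check(sys.argv[1])           # path to a PLCopen TC6 .xml file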

  17. FASTQSim: platform-independent data characterization and in silico read generation for NGS datasets.

    Science.gov (United States)

    Shcherbina, Anna

    2014-08-15

    High-throughput next generation sequencing technologies have enabled rapid characterization of clinical and environmental samples. Consequently, the largest bottleneck to actionable data has become sample processing and bioinformatics analysis, creating a need for accurate and rapid algorithms to process genetic data. Perfectly characterized in silico datasets are a useful tool for evaluating the performance of such algorithms. Background contaminating organisms are observed in sequenced mixtures of organisms. In silico samples provide exact truth. To create the best value for evaluating algorithms, in silico data should mimic actual sequencer data as closely as possible. FASTQSim is a tool that provides the dual functionality of NGS dataset characterization and metagenomic data generation. FASTQSim is sequencing platform-independent, and computes distributions of read length, quality scores, indel rates, single point mutation rates, indel size, and similar statistics for any sequencing platform. To create training or testing datasets, FASTQSim has the ability to convert target sequences into in silico reads with specific error profiles obtained in the characterization step. FASTQSim enables users to assess the quality of NGS datasets. The tool provides information about read length, read quality, repetitive and non-repetitive indel profiles, and single base pair substitutions. FASTQSim allows the user to simulate individual read datasets that can be used as standardized test scenarios for planning sequencing projects or for benchmarking metagenomic software. In this regard, in silico datasets generated with the FASTQsim tool hold several advantages over natural datasets: they are sequencing platform independent, extremely well characterized, and less expensive to generate. Such datasets are valuable in a number of applications, including the training of assemblers for multiple platforms, benchmarking bioinformatics algorithm performance, and creating challenge
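
    The dataset-characterization half of such a workflow amounts to scanning a FASTQ file and accumulating per-read statistics such as length and mean base quality. The sketch below shows that step for a plain, uncompressed FASTQ file with Phred+33 qualities; it is a generic illustration, not FASTQSim's own characterization code.

        # Sketch: accumulate read-length and mean-quality distributions from a FASTQ file
        # (plain text, Phred+33 quality encoding assumed).
        import sys
        from collections import Counter

        def characterize(path):
            lengths, mean_quals = Counter(), Counter()
            with open(path) as fh:
                while True:
                    header = fh.readline()
                    if not header:
                        break
                    seq = fh.readline().rstrip("\n")
                    fh.readline()                          # '+' separator line
                    qual = fh.readline().rstrip("\n")
                    if not qual:                           # skip malformed/empty records
                        continue
                    lengths[len(seq)] += 1
                    q = [ord(c) - 33 for c in qual]        # Phred+33 -> integer scores
                    mean_quals[round(sum(q) / len(q))] += 1
            return lengths, mean_quals

        if __name__ == "__main__":
            lengths, quals = characterize(sys.argv[1])
            print("read length histogram:", dict(sorted(lengths.items())))
            print("mean quality histogram:", dict(sorted(quals.items())))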

  18. General Purpose Technologies and their Implications for International Trade

    Directory of Open Access Journals (Sweden)

    Petsas Iordanis

    2015-09-01

    Full Text Available This paper develops a simple model of trade and “quality-ladders” growth without scale effects to study the implications of general purpose technologies (GPTs) for international trade. GPTs refer to a certain type of drastic innovations, such as electrification, the transistor, and the Internet, that are characterized by pervasiveness in use, innovational complementarities, and technological dynamism. The model presents a two-country (Home and Foreign) dynamic general equilibrium framework and incorporates GPT diffusion within Home that exhibits endogenous Schumpeterian growth. The model analyzes the long-run and transitional dynamic effects of a new GPT on the pattern of trade and relative wages. The main findings of the paper are: (1) when the GPT diffusion across industries is governed by S-curve dynamics, there are two steady-state equilibria: the initial steady state arises before the adoption of the new GPT and the final one is reached after the GPT diffusion process has been completed; (2) when all industries at Home have adopted the new GPT, Home enjoys comparative advantage in a greater range of industries compared to Foreign; (3) during the transitional dynamics, Foreign gains back its competitiveness in some of the industries that had lost their comparative advantage to Home.

  19. General-purpose event generators for LHC physics

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, Andy [PPE Group, School of Physics and Astronomy, University of Edinburgh, EH25 9PN (United Kingdom); Butterworth, Jonathan [Department of Physics and Astronomy, University College London, WC1E 6BT (United Kingdom); Gieseke, Stefan [Institute for Theoretical Physics, Karlsruhe Institute of Technology, D-76128 Karlsruhe (Germany); Grellscheid, David [Institute for Particle Physics Phenomenology, Durham University, DH1 3LE (United Kingdom); Hoeche, Stefan [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Hoeth, Hendrik; Krauss, Frank [Institute for Particle Physics Phenomenology, Durham University, DH1 3LE (United Kingdom); Loennblad, Leif [Department of Astronomy and Theoretical Physics, Lund University (Sweden); PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland); Nurse, Emily [Department of Physics and Astronomy, University College London, WC1E 6BT (United Kingdom); Richardson, Peter [Institute for Particle Physics Phenomenology, Durham University, DH1 3LE (United Kingdom); Schumann, Steffen [Institute for Theoretical Physics, University of Heidelberg, 69120 Heidelberg (Germany); Seymour, Michael H. [School of Physics and Astronomy, University of Manchester, M13 9PL (United Kingdom); Sjoestrand, Torbjoern [Department of Astronomy and Theoretical Physics, Lund University (Sweden); Skands, Peter [PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland); Webber, Bryan, E-mail: webber@hep.phy.cam.ac.uk [Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom)

    2011-07-15

    We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond Standard Model processes. We describe the principal features of the ARIADNE, Herwig++, PYTHIA 8 and SHERPA generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists seeking a deeper insight into the tools available for signal and background simulation at the LHC.

  20. General-purpose event generators for LHC physics

    International Nuclear Information System (INIS)

    Buckley, Andy; Butterworth, Jonathan; Gieseke, Stefan; Grellscheid, David; Hoeche, Stefan; Hoeth, Hendrik; Krauss, Frank; Loennblad, Leif; Nurse, Emily; Richardson, Peter; Schumann, Steffen; Seymour, Michael H.; Sjoestrand, Torbjoern; Skands, Peter; Webber, Bryan

    2011-01-01

    We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond Standard Model processes. We describe the principal features of the ARIADNE, Herwig++, PYTHIA 8 and SHERPA generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists seeking a deeper insight into the tools available for signal and background simulation at the LHC.

  1. A VMEbus general-purpose data acquisition system

    International Nuclear Information System (INIS)

    Ninane, A.; Nemry, M.; Martou, J.L.; Somers, F.

    1992-01-01

    We present a general-purpose, VMEbus-based, multiprocessor data acquisition and monitoring system. Events, handled by a master CPU, are kept at the disposal of data storage and monitoring processes which can run on distinct processors. They access either the complete set of data or a fraction of it, minimizing the acquisition dead-time. The system is built with the VxWorks 5.0 real-time kernel, to which we have added device drivers for data acquisition and monitoring. The acquisition is controlled and the data are displayed on a workstation. The user interface is written in C++ and re-uses the classes of the InterViews and NIH libraries. The communication between the control workstation and the VMEbus processors is made through SUN RPCs on an Ethernet link. The system will be used for CAMAC-based data acquisition for nuclear physics experiments, as well as for VXI data taking with the 4π configuration (100 neutron detectors) of the Brussels-Caen-Louvain-Strasbourg DEMON collaboration. (author)

  2. High-Speed General Purpose Genetic Algorithm Processor.

    Science.gov (United States)

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GA procedure. Hence, we designed a digital CMOS implementation of a GA in a [Formula: see text] process. The proposed processor is not bound to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as a pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, a dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator, the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through the bit-string length extension of individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GA procedure can be run on several connected processors simultaneously.
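
    For readers more used to software GAs than to the hardware pipeline described here, the following is a minimal steady-state GA loop in Python: one offspring is produced and inserted per iteration, and infeasible offspring are rejected, loosely mirroring the built-in discard operator. It only mirrors the general scheme; the population size, mutation rate, toy fitness function and toy constraint are arbitrary.

        # Minimal steady-state genetic algorithm on 32-bit strings (toy example).
        # One offspring is produced and inserted per iteration; infeasible offspring
        # are discarded, loosely mirroring the hardware's built-in discard operator.
        import random

        BITS, POP, ITERS, MUT = 32, 40, 2000, 0.02

        def fitness(x):                      # toy objective: number of set bits
            return bin(x).count("1")

        def feasible(x):                     # toy constraint: top byte must not be 0xFF
            return (x >> 24) != 0xFF

        def crossover(a, b):
            point = random.randint(1, BITS - 1)
            mask = (1 << point) - 1
            return (a & mask) | (b & ~mask)

        def mutate(x):
            for i in range(BITS):
                if random.random() < MUT:
                    x ^= 1 << i
            return x

        pop = [random.getrandbits(BITS) for _ in range(POP)]
        for _ in range(ITERS):
            a, b = random.sample(pop, 2)
            child = mutate(crossover(a, b))
            if not feasible(child):          # discard operator: reject constraint violations
                continue
            worst = min(range(POP), key=lambda i: fitness(pop[i]))
            if fitness(child) > fitness(pop[worst]):
                pop[worst] = child           # steady-state replacement of the worst individual

        best = max(pop, key=fitness)
        print(f"best fitness {fitness(best)} / {BITS}")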

  3. Interfacial Properties of EXXPRO(TM) and General Purpose Elastomers

    Science.gov (United States)

    Zhang, Y.; Rafailovich, M.; Sokolov, Jon; Qu, S.; Ge, S.; Ngyuen, D.; Li, Z.; Peiffer, D.; Song, L.; Dias, J. A.; McElrath, K. O.

    1998-03-01

    EXXPRO(Trademark) elastomers are used for tires and many other applications. This elastomer (denoted as BIMS) is a random copolymer of p-methylstyrene (MS) and polyisobutylene (I) with varying degrees of PMS content and bromination (B) on the p-methyl group. BIMS is impermeable to gases, and has good heat, ozone and flex resistance. Very often general purpose elastomers are blended with BIMS. The interfacial width between polybutadiene and BIMS is a sensitive function of the Br level and PMS content. By neutron reflectivity (NR), we studied the dynamics of interface formation as a function of time and temperature for BIMS with varying degrees of PMS and Br. We found that in addition to the bulk parameters, the total film thickness and the proximity of an interactive surface can affect the interfacial interaction rates. The interfacial properties can also be modified by inclusion of particles, such as carbon black (a filler component in tire rubbers). Results will be presented on the relation between the interfacial width as measured by NR and compatibilization studies via AFM and LFM.

  4. General-purpose event generators for LHC physics

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, Andy; /Edinburgh U.; Butterworth, Jonathan; /University Coll. London; Gieseke, Stefan; /Karlsruhe U., ITP; Grellscheid, David; /Durham U., IPPP; Hoche, Stefan; /SLAC; Hoeth, Hendrik; Krauss, Frank; /Durham U., IPPP; Lonnblad, Leif; /Lund U., Dept. Theor. Phys. /CERN; Nurse, Emily; /University Coll. London; Richardson, Peter; /Durham U., IPPP; Schumann, Steffen; /Heidelberg U.; Seymour, Michael H.; /Manchester U.; Sjostrand, Torbjorn; /Lund U., Dept. Theor. Phys.; Skands, Peter; /CERN; Webber, Bryan; /Cambridge U.

    2011-03-03

    We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard-scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond-Standard-Model processes. We describe the principal features of the Ariadne, Herwig++, Pythia 8 and Sherpa generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists wanting a deeper insight into the tools available for signal and background simulation at the LHC.

  5. Foam A General Purpose Cellular Monte Carlo Event Generator

    CERN Document Server

    Jadach, Stanislaw

    2003-01-01

    A general purpose, self-adapting, Monte Carlo (MC) event generator (simulator) is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or Cartesian products of them. The grid of cells, called 'foam', is produced in the process of the binary split of the cells. The choice of the next cell to be divided and the position/direction of the division hyperplane is driven by the algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution. As any MC generator, it can also be used for MC integration. With a typical personal computer CPU, the program is able to perform adaptive integration/simulation at a relatively small number of dimensions (≤ 16). With the continu...
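
    As a rough software illustration of the cellular idea (not the Foam code itself), the sketch below integrates a peaked two-dimensional function by repeatedly splitting the hyperrectangular cell with the largest estimated variance contribution and summing per-cell Monte Carlo estimates. The integrand, sample counts and the 64-cell budget are arbitrary choices for the example.

```python
import math
import random

def f(x, y):
    # Sharply peaked integrand on the unit square.
    return math.exp(-50.0 * ((x - 0.3) ** 2 + (y - 0.7) ** 2))

def cell_estimate(lo, hi, n=200):
    """Crude per-cell MC estimate: (integral contribution, variance of that estimate)."""
    vol = (hi[0] - lo[0]) * (hi[1] - lo[1])
    vals = [f(random.uniform(lo[0], hi[0]), random.uniform(lo[1], hi[1])) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return vol * mean, vol * vol * var / n

# Start from one cell covering the unit square; repeatedly split the cell whose
# estimate has the largest variance, along its longest edge (the "foam" build-up).
cells = [((0.0, 0.0), (1.0, 1.0))]
estimates = [cell_estimate(*cells[0])]
for _ in range(63):                               # grow a 64-cell foam
    worst = max(range(len(cells)), key=lambda i: estimates[i][1])
    lo, hi = cells.pop(worst)
    estimates.pop(worst)
    dim = 0 if (hi[0] - lo[0]) >= (hi[1] - lo[1]) else 1
    mid = 0.5 * (lo[dim] + hi[dim])
    if dim == 0:
        children = [((lo[0], lo[1]), (mid, hi[1])), ((mid, lo[1]), (hi[0], hi[1]))]
    else:
        children = [((lo[0], lo[1]), (hi[0], mid)), ((lo[0], mid), (hi[0], hi[1]))]
    for child_lo, child_hi in children:
        cells.append((child_lo, child_hi))
        estimates.append(cell_estimate(child_lo, child_hi))

# Final estimate: sum of per-cell contributions (true value is roughly pi/50 here).
print("integral ~ %.4f" % sum(e[0] for e in estimates))
```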

  6. SNAP: A General Purpose Network Analysis and Graph Mining Library.

    Science.gov (United States)

    Leskovec, Jure; Sosič, Rok

    2016-10-01

    Large networks are becoming a widely used abstraction for studying complex systems in a broad set of disciplines, ranging from social network analysis to molecular biology and neuroscience. Despite an increasing need to analyze and manipulate large networks, only a limited number of tools are available for this task. Here, we describe Stanford Network Analysis Platform (SNAP), a general-purpose, high-performance system that provides easy to use, high-level operations for analysis and manipulation of large networks. We present SNAP functionality, describe its implementational details, and give performance benchmarks. SNAP has been developed for single big-memory machines and it balances the trade-off between maximum performance, compact in-memory graph representation, and the ability to handle dynamic graphs where nodes and edges are being added or removed over time. SNAP can process massive networks with hundreds of millions of nodes and billions of edges. SNAP offers over 140 different graph algorithms that can efficiently manipulate large graphs, calculate structural properties, generate regular and random graphs, and handle attributes and meta-data on nodes and edges. Besides being able to handle large graphs, an additional strength of SNAP is that networks and their attributes are fully dynamic, they can be modified during the computation at low cost. SNAP is provided as an open source library in C++ as well as a module in Python. We also describe the Stanford Large Network Dataset, a set of social and information real-world networks and datasets, which we make publicly available. The collection is a complementary resource to our SNAP software and is widely used for development and benchmarking of graph analytics algorithms.
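
    A minimal session with SNAP's Python module might look like the sketch below; the function names are written from memory of the SNAP Python API (random graph generation, node/edge counts, clustering coefficient) and should be checked against the current SNAP documentation before use.

```python
import snap  # Stanford Network Analysis Platform Python module

# Generate a random directed graph with 10,000 nodes and 50,000 edges.
graph = snap.GenRndGnm(snap.PNGraph, 10000, 50000)

# Basic structural properties.
print("nodes:", graph.GetNodes(), "edges:", graph.GetEdges())
print("average clustering coefficient:", snap.GetClustCf(graph, -1))

# Graphs are fully dynamic: nodes and edges can be added (or removed) at any time.
graph.AddNode(20000)
graph.AddEdge(20000, 0)
```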

  7. Use of general purpose graphics processing units with MODFLOW

    Science.gov (United States)

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
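
    The solver ingredients named in this record (compressed sparse row storage, a Jacobi preconditioner and conjugate gradients) are standard, and a compact CPU-side reference helps make the algorithm concrete. The Python/SciPy sketch below is only an illustration of Jacobi-preconditioned CG on a CSR matrix, not the UPCG code; the 1D Poisson test matrix stands in for a MODFLOW head system.

```python
import numpy as np
import scipy.sparse as sp

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Jacobi (diagonal) preconditioned conjugate gradients for a SPD matrix A in CSR format."""
    inv_diag = 1.0 / A.diagonal()
    x = np.zeros_like(b)
    r = b - A @ x
    z = inv_diag * r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = inv_diag * r                  # apply the Jacobi preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1D Poisson test problem stored in CSR, standing in for a groundwater head system.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, iters = jacobi_pcg(A, b)
print("converged in", iters, "iterations; residual =", np.linalg.norm(b - A @ x))
```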

  8. SPIDR, a general-purpose readout system for pixel ASICs

    International Nuclear Information System (INIS)

    Heijden, B. van der; Visser, J.; Beuzekom, M. van; Boterenbrood, H.; Munneke, B.; Schreuder, F.; Kulis, S.

    2017-01-01

    The SPIDR (Speedy PIxel Detector Readout) system is a flexible general-purpose readout platform that can be easily adapted to test and characterize new and existing detector readout ASICs. It was originally designed for the readout of pixel ASICs from the Medipix/Timepix family, but other types of ASICs or front-end circuits can be read out as well. The SPIDR system consists of an FPGA board with memory and various communication interfaces, FPGA firmware, a CPU subsystem and an API library on the PC. The FPGA firmware can be adapted to read out other ASICs by re-using IP blocks. The available IP blocks include a UDP packet builder, 1 and 10 Gigabit Ethernet MACs and a 'soft core' CPU. Currently the firmware is targeted at the Xilinx VC707 development board and at a custom board called Compact-SPIDR. The firmware can easily be ported to other Xilinx 7 series and UltraScale FPGAs. The gap between an ASIC and the data acquisition back-end is bridged by the SPIDR system. Using the high pin count VITA 57 FPGA Mezzanine Card (FMC) connector only a simple chip carrier PCB is required. A 1 and a 10 Gigabit Ethernet interface handle the connection to the back-end. These can be used simultaneously for high-speed data and configuration over separate channels. In addition to the FMC connector, configurable inputs and outputs are available for synchronization with other detectors. A high resolution (≈ 27 ps bin size) Time to Digital converter is provided for time stamping events in the detector. The SPIDR system is frequently used as readout for the Medipix3 and Timepix3 ASICs. Using the 10 Gigabit Ethernet interface it is possible to read out a single chip at full bandwidth or up to 12 chips at a reduced rate. Another recent application is the test-bed for the VeloPix ASIC, which is developed for the Vertex Detector of the LHCb experiment. In this case the SPIDR system processes the 20 Gbps scrambled data stream from the VeloPix and distributes it over four

  9. A damage mechanics based general purpose interface/contact element

    Science.gov (United States)

    Yan, Chengyong

    laboratory test data presented in the literature. The results demonstrate that the proposed element and the damage law perform very well. The most important scientific contribution of this dissertation is the proposed damage criterion based on second law of thermodynamic and entropy of the system. The proposed general purpose interface/contact element is another contribution of this research. Compared to the previous adhoc interface elements proposed in the literature, the new one is, much more powerful and includes creep, plastic deformations, sliding, temperature, damage, cyclic behavior and fatigue life in a unified formulation.

  10. CLOUDCLOUD : general-purpose instrument monitoring and data managing software

    Science.gov (United States)

    Dias, António; Amorim, António; Tomé, António

    2016-04-01

    An effective experiment is dependent on the ability to store and deliver data and information to all participating parties regardless of their degree of involvement in the specific parts that make the experiment a whole. Having fast, efficient and ubiquitous access to data will increase visibility and discussion, such that the outcome will have already been reviewed several times, strengthening the conclusions. The CLOUD project aims at providing users with a general purpose data acquisition, management and instrument monitoring platform that is fast, easy to use, lightweight and accessible to all participants of an experiment. This work is now implemented in the CLOUD experiment at CERN and will be fully integrated with the experiment as of 2016. Despite being used in an experiment of the scale of CLOUD, this software can also be used in any size of experiment or monitoring station, from single computers to large networks of computers, to monitor any sort of instrument output without influencing the individual instrument's DAQ. Instrument data and metadata are stored and accessed via a specially designed database architecture, and any type of instrument output is accepted using our continuously growing parsing application. Multiple databases can be used to separate different data taking periods, or a single database can be used if, for instance, an experiment is continuous. A simple web-based application gives the user total control over the monitored instruments and their data, allowing data visualization and download, upload of processed data and the ability to edit existing instruments or add new instruments to the experiment. When in a network, new computers are immediately recognized and added to the system and are able to monitor instruments connected to them. Automatic computer integration is achieved by a locally running Python-based parsing agent that communicates with a main server application guaranteeing that all instruments assigned to that computer are

  11. A platform-independent method for detecting errors in metagenomic sequencing data: DRISEE.

    Directory of Open Access Journals (Sweden)

    Kevin P Keegan

    Full Text Available We provide a novel method, DRISEE (duplicate read inferred sequencing error estimation), to assess sequencing quality (alternatively referred to as "noise" or "error") within and/or between sequencing samples. DRISEE provides positional error estimates that can be used to inform read trimming within a sample. It also provides global (whole sample) error estimates that can be used to identify samples with high or varying levels of sequencing error that may confound downstream analyses, particularly in the case of studies that utilize data from multiple sequencing samples. For shotgun metagenomic data, we believe that DRISEE provides estimates of sequencing error that are more accurate and less constrained by technical limitations than existing methods that rely on reference genomes or the use of scores (e.g. Phred). Here, DRISEE is applied to (non-amplicon) data sets from both the 454 and Illumina platforms. The DRISEE error estimate is obtained by analyzing sets of artifactual duplicate reads (ADRs), a known by-product of both sequencing platforms. We present DRISEE as an open-source, platform-independent method to assess sequencing error in shotgun metagenomic data, and utilize it to discover previously uncharacterized error in de novo sequence data from the 454 and Illumina sequencing platforms.
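
    The core idea, binning reads that share an identical prefix as artifactual duplicates and counting per-position disagreements against the bin consensus, can be illustrated independently of the DRISEE implementation. The sketch below is a deliberately simplified toy version; the prefix length, input format and consensus rule are arbitrary simplifications rather than DRISEE's actual parameters.

```python
from collections import Counter, defaultdict

PREFIX = 20  # reads sharing this identical prefix are treated as artifactual duplicates

def positional_error(reads):
    """Toy duplicate-read error profile: per-position mismatch rate vs. the bin consensus."""
    bins = defaultdict(list)
    for r in reads:
        if len(r) > PREFIX:
            bins[r[:PREFIX]].append(r)

    mismatches, covered = Counter(), Counter()
    for dupes in bins.values():
        if len(dupes) < 2:
            continue                      # need at least two copies to infer error
        length = min(len(r) for r in dupes)
        for pos in range(PREFIX, length):
            column = [r[pos] for r in dupes]
            consensus = Counter(column).most_common(1)[0][0]
            mismatches[pos] += sum(base != consensus for base in column)
            covered[pos] += len(column)

    return {pos: mismatches[pos] / covered[pos] for pos in sorted(covered)}

reads = ["ACGTACGTACGTACGTACGTTTTGCA",
         "ACGTACGTACGTACGTACGTTTAGCA",   # one substitution relative to the first read
         "ACGTACGTACGTACGTACGTTTTGCA"]
print(positional_error(reads))
```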

  12. Development of a platform-independent receiver control system for SISIFOS

    Science.gov (United States)

    Lemke, Roland; Olberg, Michael

    1998-05-01

    Up to now, receiver control software has been a time-consuming development, usually written by receiver engineers who had mainly the hardware in mind. We present a low-cost and very flexible system which uses a minimal interface to the real hardware, and which makes it easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a Structured Query Language (SQL) database for information storage and retrieval, Ethernet socket-to-socket communication and SCPI as a command control language. The complete system is in principle platform independent, but for cost-saving reasons we currently run it on a PC486 running Linux 2.0.30, which is a copylefted Unix. The only hardware-dependent parts are the digital input/output boards and the analog-to-digital and digital-to-analog converters. In the case of the Linux PC we are using a device driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like ordinary devices. The advantages of this system are firstly the low price and secondly the clear separation between the different software components, which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software on a single machine, the SQL database or the graphical user interface could be installed on separate computers.
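
    The combination of Ethernet socket-to-socket communication and SCPI command control described above can be sketched in a few lines of Python. The host address, port and the tuning command below are placeholders (only *IDN? is a standard SCPI query), so this is a sketch of the control path rather than part of the SISIFOS software.

```python
import socket

HOST, PORT = "192.168.0.42", 5025   # placeholder instrument address and port

def scpi_query(command, timeout=2.0):
    """Send one SCPI query over a TCP socket and return the instrument's reply line."""
    with socket.create_connection((HOST, PORT), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        reply = sock.makefile("rb").readline()      # SCPI replies are newline-terminated
        return reply.decode("ascii").strip()

def scpi_write(command, timeout=2.0):
    """Send a SCPI set command that produces no reply."""
    with socket.create_connection((HOST, PORT), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("ascii"))

if __name__ == "__main__":
    print(scpi_query("*IDN?"))          # standard identification query
    scpi_write("FREQ:CENT 1.42E9")      # hypothetical receiver tuning command
```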

  13. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate considerable amount of data which often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expression, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease predisposing variants in the next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merge, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as the Open Source software (https://github.com/pmadanecki/htdp).
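
    HTDP itself is GUI-driven, but the kind of merge-and-filter operation it performs on character-delimited files can be expressed as a short script for comparison. The pandas sketch below is an independent illustration of such a task, not HTDP's interface; the file names and column names are invented for the example.

```python
import pandas as pd

# Hypothetical inputs: a tab-delimited variant table and an external list of genes of interest.
variants = pd.read_csv("variants.tsv", sep="\t")                     # e.g. columns: chrom, pos, gene, af
genes_of_interest = pd.read_csv("gene_list.txt", header=None, names=["gene"])

# Merge on the shared 'gene' column, then filter on allele frequency.
merged = variants.merge(genes_of_interest, on="gene", how="inner")
rare = merged[merged["af"] < 0.01]

rare.to_csv("rare_variants_in_gene_list.tsv", sep="\t", index=False)
print(f"kept {len(rare)} of {len(variants)} variants")
```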

  14. Lipsey's Quest for the Micro-foundations of GPT - the General Purpose Engine

    NARCIS (Netherlands)

    Van der Kooij, B.J.G.

    2016-01-01

    The construct of the General Purpose Technology misses its micro-foundation (as observed by Richard Lipsey). We present a possible solution in the General Purpose Engines. These are the basic innovations and the clusters of contributing and derived innovations that appear in a Schumpeterian 'cluster

  15. 78 FR 65300 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-10-31

    ... (NOA) for General Purpose Warehouse and Information Technology Center Construction (GPW/IT)--Tracy Site... proposed action to construct a General Purpose Warehouse and Information Technology Center at Defense..., Suite 02G09, Alexandria, VA 22350- 3100. FOR FURTHER INFORMATION CONTACT: Ann Engelberger at (703) 767...

  16. An Aerodynamic Database for the Mk 82 General Purpose Low Drag Bomb

    National Research Council Canada - National Science Library

    Krishnamoorthy, L

    1997-01-01

    The drag database of the Mk 82 General Purpose Low Drag bomb, the primary gravity weapon in the RAAF inventory, has some shortcomings in the quality and traceability of data, and in the variations due...

  17. Implementation elements for conversion of general-purpose freeway lane into high-occupancy-vehicle lane

    Science.gov (United States)

    1997-01-01

    Conversion of a general-purpose freeway into a high-occupancy-vehicle (HOV) lane is an alternative to infrastructure addition for HOV system implementation. Research indicates that lane conversion is feasible technically if sufficient HOV usage and m...

  18. Low Overhead Real-Time Computing With General Purpose Operating Systems

    National Research Council Canada - National Science Library

    Raymond, Michael

    2004-01-01

    .... In larger systems and more recently, general-purpose operating systems such as SGI IRIX and Linux are used for new projects because they already have multiprocessor and device driver support as well a large user base...

  19. Catalog of physical protection equipment. Book 3: Volume VII. General purpose display components

    International Nuclear Information System (INIS)

    1977-06-01

    A catalog of commercially available physical protection equipment has been prepared under MITRE contract AT(49-24)-0376 for use by the U. S. Nuclear Regulatory Commission (NRC). Included is information on barrier structures and equipment, interior and exterior intrusion detection sensors, entry (access) control devices, surveillance and alarm assessment equipment, contraband detection sensors, automated response equipment, general purpose displays and general purpose communications, with one volume devoted to each of these eight areas. For each item of equipment the information included consists of performance, physical, cost and supply/logistics data. The entire catalog is contained in three notebooks for ease in its use by licensing and inspection staff at NRC

  20. An integrated development framework for rapid development of platform-independent and reusable satellite on-board software

    Science.gov (United States)

    Ziemke, Claas; Kuwahara, Toshinori; Kossev, Ivan

    2011-09-01

    Even in the field of small satellites, the on-board data handling subsystem has become complex and powerful. With the introduction of powerful CPUs and the availability of considerable amounts of memory on board a small satellite, it has become possible to utilize the flexibility and power of contemporary platform-independent real-time operating systems. The non-commercial sector in particular, such as university institutes and community projects like AMSAT or SSETI, is characterized by an inherent lack of financial as well as manpower resources. The opportunity to utilize such real-time operating systems will contribute significantly to achieving a successful mission. Nevertheless, the on-board software of a satellite is much more than just an operating system. It has to fulfill a multitude of functional requirements, such as: telecommand interpretation and execution, execution of control loops, generation of telemetry data and frames, failure detection, isolation and recovery, communication with peripherals, and so on. Most of the aforementioned tasks are of a generic nature and have to be conducted on any satellite with only minor modifications. A general set of functional requirements as well as a protocol for communication is defined in the ESA ECSS-E-70-41A standard "Telemetry and telecommand packet utilization". This standard not only defines the communication protocol of the satellite-ground link but also defines a set of so-called services which have to be available on board every compliant satellite and which are of generic nature. In this paper, a platform-independent and reusable framework is described which implements not only the ECSS-E-70-41A standard but also functionalities for interprocess communication, scheduling and a multitude of tasks commonly performed on board a satellite. By making use of the capabilities of the high-level programming language C/C++, the powerful open source library BOOST, the real-time operating system RTEMS and

  1. Report on the operation and utilization of general purpose use computer system 2001

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Kunihiko; Watanabe, Reiko; Tsugawa, Kazuko; Tsuda, Kenzo; Yamamoto, Takashi; Nakamura, Osamu; Kamimura, Tetsuo [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2001-09-01

    The General Purpose Use Computer System of National Institute for Fusion Science was replaced in January, 2001. The System is almost fully used after the first three months operation. Reported here is the process of the introduction of the new system and the state of the operation and utilization of the System between January and March, 2001, especially the detailed utilization of March. (author)

  2. Some thermo-electromagnetic applications to fusion technology of a general purpose CAD package

    International Nuclear Information System (INIS)

    Girdinio, P.; Molfino, P.; Molinari, G.; Raia, G.; Rosatelli, F.; Viviani, A.

    1985-01-01

    A general purpose CAD package is applied to the solution of problems related to fusion technology. The problems solved are the interacting electromagnetic and thermal fields in a resistive toroidal coil and the design of the poloidal field coils in Tokamak machines. In both cases, the procedure used is reported and the results obtained are displayed and discussed

  3. Some thermo-electromagnetic applications to fusion technology of a general purpose CAD package

    International Nuclear Information System (INIS)

    Girdinio, P.; Molfino, P.; Molinari, G.; Viviani, A.; Raia, G.; Rosatelli, F.

    1984-01-01

    A general purpose CAD package is applied to the solution of problems related to fusion technology. The problems solved are the interacting electromagnetic and thermal fields in a resistive toroidal coil and the design of the poloidal field coils in Tokamak machines. In both cases, the procedure used is reported and the results obtained are displayed and discussed. (author)

  4. BALTORO a general purpose code for coupling discrete ordinates and Monte-Carlo radiation transport calculations

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1983-01-01

    The general purpose code BALTORO was written for coupling the three-dimensional Monte-Carlo /MC/ with the one-dimensional Discrete Ordinates /DO/ radiation transport calculations. The quantity of a radiation-induced /neutrons or gamma-rays/ nuclear effect or the score from a radiation-yielding nuclear effect can be analysed in this way. (author)

  5. An Evaluation of Classroom Activities and Exercises in ELT Classroom for General Purposes Course

    Science.gov (United States)

    Zohrabi, Mohammad

    2011-01-01

    It is through effective implementation of activities and exercises which students can be motivated and consequently lead to language learning. However, as an insider, the experience of teaching English for General Purposes (EGP) course indicates that it has some problems which need to be modified. In order to evaluate the EGP course,…

  6. General-purpose Monte Carlo codes for neutron and photon transport calculations. MVP version 3

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu

    2017-01-01

    JAEA has developed a general-purpose neutron/photon transport Monte Carlo code MVP. This paper describes the recent development of the MVP code and reviews the basic features and capabilities. In addition, capabilities implemented in Version 3 are also described. (author)

  7. Experience of application of the general-purpose pressure and pressure drop transformers on nitrogen tetroxide

    International Nuclear Information System (INIS)

    Grishchuk, M.Kh.

    1979-01-01

    Experience with the application of general-purpose pressure and pressure drop transformers for measurements on nitrogen tetroxide at the Nuclear Power Engineering Institute of the BSSR Academy of Sciences is described. Concrete recommendations are given on the types of transformers and the amount of preparatory work required before putting them into operation.

  8. A general-purpose trigger processor system and its application to fast vertex trigger

    International Nuclear Information System (INIS)

    Hazumi, M.; Banas, E.; Natkaniec, Z.; Ostrowicz, W.

    1997-12-01

    A general-purpose hardware trigger system has been developed. The system comprises programmable trigger processors and pattern generator/samplers. The hardware design of the system is described. An application as a prototype of the very fast vertex trigger in an asymmetric B-factory at KEK is also explained. (author)

  9. A general purpose program system for high energy physics experiment data acquisition and analysis

    International Nuclear Information System (INIS)

    Li Shuren; Xing Yuguo; Jin Bingnian

    1985-01-01

    This paper introduces the functions, structure and system generation of a general purpose program system (Fermilab MULTI) for high energy physics experiment data acquisition and analysis. Work concerning the reconstruction of MULTI system level 0.5, which can be run on the PDP-11/23 computer, is also introduced briefly.

  10. Comparison of the Capabilities of an Embedded Computer and a General Purpose Computer for Image Processing

    Directory of Open Access Journals (Sweden)

    Herryawan Pujiharsono

    2017-08-01

    Full Text Available Advances in computer technology mean that image processing is now being widely developed to assist people in many fields of work. However, not every field of work can be supported by image processing, because some do not lend themselves to the use of a computer; this has encouraged the development of image processing on microcontrollers or dedicated microprocessors. Progress in microcontrollers and microprocessors now makes it possible to develop image processing on an embedded computer or single board computer (SBC). This study aims to test the capability of an embedded computer for image processing and to compare the results with those of a general purpose computer. The tests were carried out by measuring the execution times of four image processing operations applied to ten image sizes. The results obtained in this study show that the execution time of the embedded computer is reasonably well optimized compared with the general purpose computer: the average execution time on the embedded computer is 4-5 times that on the general purpose computer, and the largest image size that does not overload the CPU is 256x256 pixels for the embedded computer and 400x300 pixels for the general purpose computer.

  11. 21 CFR 862.2050 - General purpose laboratory equipment labeled or promoted for a specific medical use.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false General purpose laboratory equipment labeled or... TOXICOLOGY DEVICES Clinical Laboratory Instruments § 862.2050 General purpose laboratory equipment labeled or promoted for a specific medical use. (a) Identification. General purpose laboratory equipment labeled or...

  12. Discrete-Event Execution Alternatives on General Purpose Graphical Processing Units

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.

    2006-01-01

    Graphics cards, traditionally designed as accelerators for computer graphics, have evolved to support more general-purpose computation. General Purpose Graphical Processing Units (GPGPUs) are now being used as highly efficient, cost-effective platforms for executing certain simulation applications. While most of these applications belong to the category of time-stepped simulations, little is known about the applicability of GPGPUs to discrete event simulation (DES). Here, we identify some of the issues and challenges that the GPGPU stream-based interface raises for DES, and present some possible approaches to moving DES to GPGPUs. Initial performance results on simulation of a diffusion process show that DES-style execution on GPGPU runs faster than DES on CPU and also significantly faster than time-stepped simulations on either CPU or GPGPU.
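
    For readers comparing the two execution models, a minimal discrete-event kernel is just a time-ordered priority queue of scheduled events. The Python sketch below is a CPU-side illustration of that kernel with a toy two-cell hopping model; it is unrelated to the GPGPU mapping studied in the record, and all rates and counts are arbitrary.

```python
import heapq
import random

class DiscreteEventSimulator:
    """Minimal discrete-event kernel: a time-ordered priority queue of pending events."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0                      # tie-breaker for events at equal times

    def schedule(self, delay, handler, *args):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, args))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(self, *args)

# Toy model: particles hop between two cells at random exponential intervals.
counts = [100, 0]

def hop(sim, src, dst):
    if counts[src] > 0:
        counts[src] -= 1
        counts[dst] += 1
    sim.schedule(random.expovariate(1.0), hop, src, dst)   # reschedule the next hop

sim = DiscreteEventSimulator()
sim.schedule(0.0, hop, 0, 1)
sim.schedule(0.0, hop, 1, 0)
sim.run(until=1000.0)
print("final occupancy:", counts)
```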

  13. Design method of general-purpose driving circuit for CCD based on CPLD

    International Nuclear Information System (INIS)

    Zhang Yong; Tang Benqi; Xiao Zhigang; Wang Zujun; Huang Shaoyan

    2005-01-01

    Developing a general-purpose test platform is very important for systematically studying radiation damage effects and mechanisms in CCDs. The paper discusses the design method of a general-purpose CCD driving circuit based on CPLD and the approach to its realization. A main controller, based on the MAX7000S and developed with the MAX-PLUS II software, has been designed to read the data file from external memory, set up the corresponding parameter registers and produce the driving pulses strictly according to the parameter settings. A basic driving circuit module has been completed based on this method. The output waveform of the module matches the simulation waveform. The result indicates that the design method is feasible. (authors)

  14. Application of a general purpose finite element program system in pressure vessel technology

    International Nuclear Information System (INIS)

    Aamodt, B.; Sandsmark, N.; Medonos, S.

    1977-01-01

    Main advantages of using general purpose finite element program systems in structural analysis are summarized. Several illustrative applications of the program system SESAM-69 to pressure vessel problems are described. The first example is a dynamic analysis of the motor housing of the internal main circulation pump of a BWR nuclear reactor. The next example is a transient heat conduction and stress analysis of deflector of feeding nozzle of PWR nuclear reactor. Then, numerical calculations of stress intensity factors and fatigue crack growth of semi-elliptical surface cracks are discussed. And finally, an elasto-plastic analysis of a thick plate with edge-cracks is considered. It is concluded that due to the fact that general purpose finite element program systems are general and user-orientated, they will gain increasingly higher popularity in the years ahead

  15. General-purpose heat source project and space nuclear safety and fuels program. Progress report

    International Nuclear Information System (INIS)

    Maraman, W.J.

    1979-12-01

    This formal monthly report covers the studies related to the use of ²³⁸PuO₂ in radioisotopic power systems carried out for the Advanced Nuclear Systems and Projects Division of the Los Alamos Scientific Laboratory. The two programs involved are general-purpose heat source development and space nuclear safety and fuels. Most of the studies discussed here are of a continuing nature. Results and conclusions described may change as the work continues.

  16. Development of general-purpose particle and heavy ion transport monte carlo code

    International Nuclear Information System (INIS)

    Iwase, Hiroshi; Nakamura, Takashi; Niita, Koji

    2002-01-01

    The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for the high-energy heavy ion transport calculation by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM named PHITS (Particle and Heavy-Ion Transport code System) is the first general-purpose heavy ion transport Monte Carlo code over the incident energies from several MeV/nucleon to several GeV/nucleon. (author)

  17. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  18. A new general purpose event horizon finder for 3D numerical spacetimes

    International Nuclear Information System (INIS)

    Diener, Peter

    2003-01-01

    I present a new general purpose event horizon finder for full 3D numerical spacetimes. It works by evolving a complete null surface backwards in time. The null surface is described as the zero-level set of a scalar function, which in principle is defined everywhere. This description of the surface allows the surface, trivially, to change topology, making this event horizon finder able to handle numerical spacetimes where two (or more) black holes merge into a single final black hole

  19. A prepaid case study: Ready Credit’s general-purpose & transit-fare programs

    OpenAIRE

    Philip Keitel

    2012-01-01

    Today, prepaid cards are used in dozens of payment applications. To examine the most recent developments, the Payment Cards Center of the Federal Reserve Bank of Philadelphia hosted a workshop on August 22, 2011. Leading the workshop was Tim Walsh, president and chief executive officer of Ready Credit Corporation, a firm that developed network-branded prepaid cards for use in transit-fare systems and also markets general-purpose, reloadable prepaid cards to consumers. Walsh discussed the uniq...

  20. Design of a general purpose (RS-232C) analog-to-digital data converter

    International Nuclear Information System (INIS)

    Ali, Q.

    1995-01-01

    The purpose of this project is to design general purpose hardware that interfaces analog devices with any desired computer supporting the RS-232 interface. The hardware incorporates bidirectional data transmission at 1,200 bps, 2,400 bps, 4,800 bps, 9,600 bps, 19,200 bps and 38,400 bps. The communication/processing software has been written in the C language with an emphasis on the portability of the software from one environment to another. (author)
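
    On the host side, talking to such an RS-232 converter from a modern scripting language is straightforward. The sketch below uses the pyserial package; the port name, baud rate, command byte and reply format are placeholders, since the record does not describe the converter's actual protocol.

```python
import serial  # pyserial package

# Placeholder settings: adjust the port name and baud rate (1,200-38,400 bps supported).
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
    port.write(b"R")                 # hypothetical 'read channel' command byte
    raw = port.read(2)               # assume a 2-byte big-endian ADC sample in reply
    if len(raw) == 2:
        sample = int.from_bytes(raw, "big")
        print("ADC counts:", sample)
    else:
        print("no reply from converter")
```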

  1. Design of low-cost general purpose microcontroller based neuromuscular stimulator.

    Science.gov (United States)

    Koçer, S; Rahmi Canal, M; Güler, I

    2000-04-01

    In this study, a general purpose, low-cost, programmable, portable and high performance stimulator is designed and implemented. For this purpose, a microcontroller is used in the design of the stimulator. The duty cycle and amplitude of the designed system can be controlled using a keyboard. The performance test of the system has shown that the results are reliable. The overall system can be used as the neuromuscular stimulator under safe conditions.

  2. Design of General-purpose Industrial signal acquisition system in a large scientific device

    Science.gov (United States)

    Ren, Bin; Yang, Lei

    2018-02-01

    In order to measure the industrial signals of a large scientific device experiment, a general-purpose industrial data acquisition system has been designed. It can collect 4-20 mA current signals and 0-10 V voltage signals. Practical experiments show that the system is flexible, reliable, convenient and economical, with high resolution and strong immunity to interference. Thus, the system fully meets the design requirements.

  3. Child first language and adult second language are both tied to general-purpose learning systems.

    Science.gov (United States)

    Hamrick, Phillip; Lum, Jarrad A G; Ullman, Michael T

    2018-02-13

    Do the mechanisms underlying language in fact serve general-purpose functions that preexist this uniquely human capacity? To address this contentious and empirically challenging issue, we systematically tested the predictions of a well-studied neurocognitive theory of language motivated by evolutionary principles. Multiple metaanalyses were performed to examine predicted links between language and two general-purpose learning systems, declarative and procedural memory. The results tied lexical abilities to learning only in declarative memory, while grammar was linked to learning in both systems in both child first language and adult second language, in specific ways. In second language learners, grammar was associated with only declarative memory at lower language experience, but with only procedural memory at higher experience. The findings yielded large effect sizes and held consistently across languages, language families, linguistic structures, and tasks, underscoring their reliability and validity. The results, which met the predicted pattern, provide comprehensive evidence that language is tied to general-purpose systems both in children acquiring their native language and adults learning an additional language. Crucially, if language learning relies on these systems, then our extensive knowledge of the systems from animal and human studies may also apply to this domain, leading to predictions that might be unwarranted in the more circumscribed study of language. Thus, by demonstrating a role for these systems in language, the findings simultaneously lay a foundation for potentially important advances in the study of this critical domain.

  4. Design of tallying function for general purpose Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Deng Li; Zhang Baoyin

    2013-01-01

    A new postponed accumulation algorithm is proposed. Based on the JCOGIN (J combinatorial geometry Monte Carlo transport infrastructure) framework and the postponed accumulation algorithm, the tallying function of the general purpose Monte Carlo neutron-photon transport code JMCT was improved markedly. JMCT achieves 28% higher tallying efficiency than MCNP 4C for a simple geometry model, and is faster than MCNP 4C by two orders of magnitude for a complicated repeated-structure model. The tallying capability of JMCT lays a firm foundation for reactor analysis and multi-step burnup calculations. (authors)
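
    The postponed-accumulation idea, scoring into a per-history buffer and folding it into the global sums only when the history completes so that the variance is computed over histories, can be shown generically. The Python sketch below illustrates that batching pattern with a toy scoring process; it is not JMCT's implementation.

```python
import math
import random

def run_tally(num_histories):
    total, total_sq = 0.0, 0.0
    for _ in range(num_histories):
        history_score = 0.0                     # postponed: accumulate within the history first
        for _ in range(random.randint(1, 5)):   # a toy history with a few scoring events
            history_score += random.random()
        total += history_score                  # ...then fold once into the global sums
        total_sq += history_score ** 2
    mean = total / num_histories
    var_of_mean = (total_sq / num_histories - mean ** 2) / (num_histories - 1)
    return mean, math.sqrt(max(var_of_mean, 0.0))

mean, sigma = run_tally(100000)
print(f"tally = {mean:.4f} +/- {sigma:.4f}")
```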

  5. General-purpose chemical analyzer for on-line analyses of radioactive solutions

    International Nuclear Information System (INIS)

    Spencer, W.A.; Kronberg, J.W.

    1983-01-01

    An automated analyzer is being developed to perform analytical measurements on radioactive solutions on-line in a hostile environment. This General Purpose Chemical Analyzer (GPCA) samples a process stream, adds reagents, measures solution absorbances or electrode potentials, and automatically calculates the results. The use of modular components, under microprocessor control, permits a single analyzer design to carry out many types of analyses. This paper discusses the more important design criteria for the GPCA, and describes the equipment being tested in a prototype unit

  6. Software design of a general purpose data acquisition and control executive

    International Nuclear Information System (INIS)

    Labiak, W.G.; Minor, E.G.

    1981-01-01

    The software design of an executive which performs general purpose data acquisition, monitoring, and control is presented. The executive runs on a memory-based mini or micro-computer and communicates with a disk-based computer where data analysis and display are done. The executive design stresses reliability and versatility, and has yielded software which can provide control and monitoring for widely different hardware systems. Applications of this software on two major fusion energy experiments at Lawrence Livermore National Laboratory will be described

  7. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  8. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution.

    Science.gov (United States)

    Correia, J R C C C; Martins, C J A P

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.

  9. Generalized Fluid System Simulation Program (GFSSP) Version 6 - General Purpose Thermo-Fluid Network Analysis Software

    Science.gov (United States)

    Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul

    2011-01-01

    GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program to compute pressure, temperature and flow distribution in a flow network. GFSSP calculates pressure, temperature, and concentrations at nodes and calculates flow rates through branches. It was primarily developed to perform internal flow analysis of a turbopump and transient flow analysis of a propulsion system. GFSSP development started in 1994 with the objective of providing a generalized and easy-to-use flow analysis tool for thermo-fluid systems.

  10. A Real-Time Programmer's Tour of General-Purpose L4 Microkernels

    OpenAIRE

    Ruocco Sergio

    2008-01-01

    L4-embedded is a microkernel successfully deployed in mobile devices with soft real-time requirements. It now faces the challenges of tightly integrated systems, in which user interface, multimedia, OS, wireless protocols, and even software-defined radios must run on a single CPU. In this paper we discuss the pros and cons of L4-embedded for real-time systems design, focusing on the issues caused by the extreme speed optimisations it inherited from its general-purpose ancestors. Sinc...

  11. A Real-Time Programmer's Tour of General-Purpose L4 Microkernels

    OpenAIRE

    Sergio Ruocco

    2008-01-01

    L4-embedded is a microkernel successfully deployed in mobile devices with soft real-time requirements. It now faces the challenges of tightly integrated systems, in which user interface, multimedia, OS, wireless protocols, and even software-defined radios must run on a single CPU. In this paper we discuss the pros and cons of L4-embedded for real-time systems design, focusing on the issues caused by the extreme speed optimisations it inherited from its general-purpose ancestors. Since these i...

  12. Development of a large-scale general purpose two-phase flow analysis code

    International Nuclear Information System (INIS)

    Terasaka, Haruo; Shimizu, Sensuke

    2001-01-01

    A general purpose three-dimensional two-phase flow analysis code has been developed for solving large-scale problems in industrial fields. The code uses a two-fluid model to describe the conservation equations for two-phase flow in order to be applicable to various phenomena. Complicated geometrical conditions are modeled by FAVOR method in structured grid systems, and the discretization equations are solved by a modified SIMPLEST scheme. To reduce computing time a matrix solver for the pressure correction equation is parallelized with OpenMP. Results of numerical examples show that the accurate solutions can be obtained efficiently and stably. (author)

  13. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution

    Science.gov (United States)

    Correia, J. R. C. C. C.; Martins, C. J. A. P.

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.

  14. General-purpose heat source project and space nuclear safety fuels program. Progress report, February 1980

    International Nuclear Information System (INIS)

    Maraman, W.J.

    1980-05-01

    This formal monthly report covers the studies related to the use of ²³⁸PuO₂ in radioisotopic power systems carried out for the Advanced Nuclear Systems and Projects Division of the Los Alamos Scientific Laboratory. The two programs involved are: General-Purpose Heat Source Development and Space Nuclear Safety and Fuels. Most of the studies discussed here are of a continuing nature. Results and conclusions described may change as the work continues. Published reference to the results cited in this report should not be made without the explicit permission of the person in charge of the work.

  15. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for the NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked in computation time with various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (General Purpose GPU) real-time processing of the array radar data is possible with relatively low-cost commercial GPUs.
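
    A representative element of such a processing chain is FFT-based pulse compression (a matched filter applied in the frequency domain). The NumPy sketch below is a CPU reference for that single step; on a GPU the same transforms would map onto cuFFT, and the waveform parameters here are arbitrary.

```python
import numpy as np

fs = 10e6                                  # sample rate (Hz), arbitrary for the example
t = np.arange(0, 20e-6, 1 / fs)
chirp = np.exp(1j * np.pi * (5e6 / 20e-6) * t**2)   # linear FM transmit pulse (0-5 MHz sweep)

# Received signal: a delayed, attenuated echo buried in complex noise.
rx = np.zeros(4096, dtype=complex)
delay = 1000
rx[delay:delay + chirp.size] = 0.1 * chirp
rx += (np.random.randn(rx.size) + 1j * np.random.randn(rx.size)) * 0.05

# Matched filter via FFT: multiply by the conjugate spectrum of the reference pulse.
n = rx.size
H = np.conj(np.fft.fft(chirp, n))
compressed = np.fft.ifft(np.fft.fft(rx, n) * H)
print("detected range bin:", int(np.argmax(np.abs(compressed))))   # ~ the injected delay
```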

  16. Implementation of the dynamic Monte Carlo method for transient analysis in the general purpose code Tripoli

    Energy Technology Data Exchange (ETDEWEB)

    Sjenitzer, Bart L.; Hoogenboom, J. Eduard, E-mail: B.L.Sjenitzer@TUDelft.nl, E-mail: J.E.Hoogenboom@TUDelft.nl [Delft University of Technology (Netherlands)

    2011-07-01

    A new Dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli 4.6.1. With this new method incorporated, a general purpose code can be used for safety transient analysis, such as the movement of a control rod or in an accident scenario. To make the Tripoli code ready for calculating on dynamic systems, the Tripoli scheme had to be altered to incorporate time steps, to include the simulation of delayed neutron precursors and to simulate prompt neutron chains. The modified Tripoli code is tested on two sample cases, a steady-state system and a subcritical system and the resulting neutron fluxes behave just as expected. The steady-state calculation has a constant neutron flux over time and this result shows the stability of the calculation. The neutron flux stays constant with acceptable variance. This also shows that the starting conditions are determined correctly. The sub-critical case shows that the code can also handle dynamic systems with a varying neutron flux. (author)

  17. Implementation of the dynamic Monte Carlo method for transient analysis in the general purpose code Tripoli

    International Nuclear Information System (INIS)

    Sjenitzer, Bart L.; Hoogenboom, J. Eduard

    2011-01-01

    A new Dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli 4.6.1. With this new method incorporated, a general purpose code can be used for safety transient analysis, such as the movement of a control rod or in an accident scenario. To make the Tripoli code ready for calculating on dynamic systems, the Tripoli scheme had to be altered to incorporate time steps, to include the simulation of delayed neutron precursors and to simulate prompt neutron chains. The modified Tripoli code is tested on two sample cases, a steady-state system and a subcritical system and the resulting neutron fluxes behave just as expected. The steady-state calculation has a constant neutron flux over time and this result shows the stability of the calculation. The neutron flux stays constant with acceptable variance. This also shows that the starting conditions are determined correctly. The sub-critical case shows that the code can also handle dynamic systems with a varying neutron flux. (author)

  18. Power performance of the general-purpose heat source radioisotope thermoelectric generator

    International Nuclear Information System (INIS)

    Bennett, G.L.; Lombardo, J.J.; Rock, B.J.

    1986-01-01

    The General-Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) has been developed under the sponsorship of the Department of Energy (DOE) to provide electrical power for the National Aeronautics and Space Administration (NASA) Galileo mission to Jupiter and the joint NASA/European Space Agency (ESA) Ulysses mission to study the polar regions of the Sun. A total of five nuclear-heated generators and one electrically heated generator have been built and tested, proving out the design concept and meeting the specification requirements. The GPHS-RTG design is built upon the successful technology used in the RTGs flown on the two NASA Voyager spacecraft and two US Air Force communications satellites. The GPHS-RTG converts about 4400 W(t) from the nuclear heat source into at least 285 W(e) at beginning of mission (BOM). The GPHS-RTG consists of two major components: the General-Purpose Heat Source (GPHS) and the Converter. A conceptual drawing of the GPHS-RTG is presented and its design and performance are described.

  19. Economic selection index development for Beefmaster cattle II: General-purpose breeding objective.

    Science.gov (United States)

    Ochsner, K P; MacNeil, M D; Lewis, R M; Spangler, M L

    2017-05-01

    An economic selection index was developed for Beefmaster cattle in a general-purpose production system in which bulls are mated to a combination of heifers and mature cows, with resulting progeny retained as replacements or sold at weaning. National average prices from 2010 to 2014 were used to establish income and expenses for the system. Genetic parameters were obtained from the literature. Economic values were estimated by simulating 100,000 animals and approximating the partial derivatives of the profit function by perturbing traits 1 at a time, by 1 unit, while holding the other traits constant at their respective means. Relative economic values for the objective traits calving difficulty direct (CDd), calving difficulty maternal (CDm), weaning weight direct (WWd), weaning weight maternal (WWm), mature cow weight (MW), and heifer pregnancy (HP) were -2.11, -1.53, 18.49, 11.28, -33.46, and 1.19, respectively. Consequently, under the scenario assumed herein, the greatest improvements in profitability could be made by decreasing maintenance energy costs associated with MW followed by improvements in weaning weight. The accuracy of the index lies between 0.218 (phenotypic-based index selection) and 0.428 (breeding values known without error). Implementation of this index would facilitate genetic improvement and increase profitability of Beefmaster cattle operations with a general-purpose breeding objective when replacement females are retained and with weaned calves as the sale end point.

  20. General-purpose readout electronics for white neutron source at China Spallation Neutron Source.

    Science.gov (United States)

    Wang, Q; Cao, P; Qi, X; Yu, T; Ji, X; Xie, L; An, Q

    2018-01-01

    The under-construction White Neutron Source (WNS) at the China Spallation Neutron Source is a facility for accurate measurements of neutron-induced cross sections. Seven spectrometers are planned at the WNS. As the physical objectives of each spectrometer are different, the requirements for readout electronics are not the same. In order to simplify the development of the readout electronics, this paper presents a general method for detector signal readout. This method has the advantages of expandability and flexibility, which make it adaptable to most detectors at the WNS. In the WNS general-purpose readout electronics, signals from any kind of detector are conditioned by a dedicated signal conditioning module corresponding to that detector, and then digitized by a common waveform digitizer with high speed and high precision (1 GSPS at 12 bits) to obtain the full waveform data. The waveform digitizer uses a field programmable gate array chip to process the data stream and trigger information in real time. A PXI Express platform is used to support the functionalities of data readout, clock distribution, and trigger information exchange between digitizers and trigger modules. Test results show that the performance of the WNS general-purpose readout electronics can meet the requirements of the WNS spectrometers.

  1. General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations

    Science.gov (United States)

    Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark

    2010-01-01

    Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.
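
    To make the data-driven monitoring idea concrete, the sketch below characterizes nominal behavior from a set of training vectors and scores new observations by their distance to that characterization. It is a generic nearest-neighbour stand-in for illustration only, not the actual IMS algorithm; the sensor values and threshold are invented.

        # Illustrative sketch of data-driven monitoring: learn "nominal" from training data,
        # then flag real-time vectors that deviate from it. Not the IMS algorithm itself.
        import numpy as np

        rng = np.random.default_rng(0)
        # Invented nominal training data for three sensors (e.g. pressure, temperature, flow).
        nominal = rng.normal(loc=[10.0, 50.0, 0.5], scale=[0.5, 2.0, 0.05], size=(500, 3))

        # Characterize nominal behaviour: normalize and keep the training vectors.
        mean, std = nominal.mean(axis=0), nominal.std(axis=0)
        reference = (nominal - mean) / std

        def anomaly_score(sample):
            """Distance from the new sample to its nearest nominal training vector."""
            z = (np.asarray(sample) - mean) / std
            return np.min(np.linalg.norm(reference - z, axis=1))

        threshold = 1.5  # illustrative; would be tuned on held-out nominal data
        for sample in ([10.1, 49.0, 0.52], [13.0, 61.0, 0.9]):
            score = anomaly_score(sample)
            print(sample, "score=%.2f" % score, "ANOMALY" if score > threshold else "nominal")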

  2. Operation of general purpose stepping motor controllers at the National Synchrotron Light Source

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1986-10-01

    A prototype and four copies of a general purpose subsystem for mechanical positioning of detectors, samples, and beam line optical elements which constitute experiments at the National Synchrotron Light Source facility of Brookhaven National Laboratory have been constructed and placed into operation. Construction of a sixth subsystem is nearing completion. The subsystems effect mechanical positioning by controlling a set of stepping motors and their associated position encoders. The units are general purpose in the sense that they receive commands over a standard 9600 baud asynchronous serial line compatible with the RS-232-C electrical signal standard, generate TTL-compatible streams of stepping pulses which can be used with a wide variety of stepping motors, and read back position values from a number of different types and models of position encoder. The basic structure of the motor controller subsystem will be briefly reviewed. Short descriptions of the positioning apparatus actuated at each of the test and experiment stations employing a motor control unit are given. Additions and enhancements to the subsystem made in response to problems indicated by actual operation of the four installed units are described in more detail

  3. Operation of general purpose stepping motor controllers at the National Synchrotron Light Source

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1987-01-01

    A prototype and four copies of a general purpose subsystem for mechanical positioning of detectors, samples, and beam line optical elements which constitute experiments at the National Synchrotron Light Source facility of Brookhaven National Laboratory have been constructed and placed into operation. Construction of a sixth subsystem is nearing completion. The subsystems effect mechanical positioning by controlling a set of stepping motors and their associated position encoders. The units are general purpose in the sense that they receive commands over a standard 9600 baud asynchronous serial line compatible with the RS-232-C electrical signal standard, generate TTL-compatible streams of stepping pulses which can be used with a wide variety of stepping motors, and read back position values from a number of different types and models of position encoder. The basic structure of the motor controller subsystem is briefly reviewed. Short descriptions of the positioning apparatus actuated at each of the test and experiment stations employing a motor control unit are given. Additions and enhancements to the sub-system made in response to problems indicated by actual operation of the four installed units are described in more detail

  4. Knowledge Management Systems as an Interdisciplinary Communication and Personalized General-Purpose Technology

    Directory of Open Access Journals (Sweden)

    Ulrich Schmitt

    2015-10-01

    Full Text Available As drivers of human civilization, Knowledge Management (KM processes have co-evolved in line with General-Purpose-Technologies (GPT, such as writing, printing, and information and communication systems. As evidenced by the recent shift from information scarcity to abundance, GPTs are capable of drastically altering societies due to their game-changing impact on our spheres of work and personal development. This paper looks at the prospect of whether a novel Personal Knowledge Management (PKM concept supported by a prototype system has got what it takes to grow into a transformative General-Purpose-Technology. Following up on a series of papers, the KM scenario of a decentralizing revolution where individuals and self-organized groups yield more power and autonomy is examined according to a GPT's essential characteristics, including a wide scope for improvement and elaboration (in people's private, professional and societal life, applicability across a broad range of uses in a wide variety of products and processes (in multi-disciplinary educational and work contexts, and strong complementarities with existing or potential new technologies (like organizational KM Systems and a proposed World Heritage of Memes Repository. The result portrays the PKM concept as a strong candidate due to its personal, autonomous, bottom-up, collaborative, interdisciplinary, and creativity-supporting approach destined to advance the availability, quantity, and quality of the world's extelligence and to allow for a wider sharing and faster diffusion of ideas across current disciplinary and opportunity divides.

  5. A Real-Time Programmer's Tour of General-Purpose L4 Microkernels

    Directory of Open Access Journals (Sweden)

    Ruocco Sergio

    2008-01-01

    Full Text Available L4-embedded is a microkernel successfully deployed in mobile devices with soft real-time requirements. It now faces the challenges of tightly integrated systems, in which user interface, multimedia, OS, wireless protocols, and even software-defined radios must run on a single CPU. In this paper we discuss the pros and cons of L4-embedded for real-time systems design, focusing on the issues caused by the extreme speed optimisations it inherited from its general-purpose ancestors. Since these issues can be addressed with a minimal performance loss, we conclude that, overall, the design of real-time systems based on L4-embedded is possible, and facilitated by a number of design features unique to microkernels and the L4 family.

  6. General purpose dynamic Monte Carlo with continuous energy for transient analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sjenitzer, B. L.; Hoogenboom, J. E. [Delft Univ. of Technology, Dept. of Radiation, Radionuclide and Reactors, Mekelweg 15, 2629JB Delft (Netherlands)

    2012-07-01

    For safety assessments, transient analysis is an important tool. It can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the fact that this kind of analysis is very important, the state of the art still uses rather crude methods, like diffusion theory and point-kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli4. Also, the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate and the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to do an exact transient calculation in arbitrary geometry. (authors)

  7. Generic functional requirements for a NASA general-purpose data base management system

    Science.gov (United States)

    Lohman, G. M.

    1981-01-01

    Generic functional requirements for a general-purpose, multi-mission data base management system (DBMS) for application to remotely sensed scientific data bases are detailed. The motivation for utilizing DBMS technology in this environment is explained. The major requirements include: (1) a DBMS for scientific observational data; (2) a multi-mission capability; (3) user friendliness; (4) extensive and integrated information about data; (5) robust languages for defining data structures and formats; (6) scientific data types and structures; (7) flexible physical access mechanisms; (8) ways of representing spatial relationships; (9) a high level nonprocedural interactive query and data manipulation language; (10) data base maintenance utilities; (11) high rate input/output and large data volume storage; and (12) adaptability to a distributed data base and/or data base machine configuration. Detailed functions are specified in a top-down hierarchic fashion. Implementation, performance, and support requirements are also given.

  8. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Directory of Open Access Journals (Sweden)

    Edwin B. Olson

    2010-11-01

    Full Text Available Feature extraction is a central step of processing Light Detection and Ranging (LIDAR data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.
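
    The core of the Kanade-Tomasi detector the authors adapt is the smaller eigenvalue of the local gradient structure tensor. The single-scale sketch below computes that score for a plain 2D array; it is an illustrative image-domain version (the paper's multi-scale LIDAR treatment is not reproduced here), and the test image is invented.

        # Single-scale Kanade-Tomasi corner score: the smaller eigenvalue of the local
        # 2x2 gradient structure tensor. Illustrative only, not the authors' LIDAR code.
        import numpy as np

        def kanade_tomasi_score(img, win=2):
            iy, ix = np.gradient(img.astype(float))        # gradients along rows and columns
            ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
            h, w = img.shape
            score = np.zeros_like(img, dtype=float)
            for r in range(win, h - win):
                for c in range(win, w - win):
                    sl = (slice(r - win, r + win + 1), slice(c - win, c + win + 1))
                    sxx, syy, sxy = ixx[sl].sum(), iyy[sl].sum(), ixy[sl].sum()
                    # Smaller eigenvalue of [[sxx, sxy], [sxy, syy]].
                    trace, det = sxx + syy, sxx * syy - sxy * sxy
                    score[r, c] = trace / 2.0 - np.sqrt(max((trace / 2.0) ** 2 - det, 0.0))
            return score

        if __name__ == "__main__":
            img = np.zeros((32, 32))
            img[8:24, 8:24] = 1.0                          # a bright square: corners at its vertices
            s = kanade_tomasi_score(img)
            print("strongest response at", np.unravel_index(np.argmax(s), s.shape))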

  9. A general purpose feature extractor for light detection and ranging data.

    Science.gov (United States)

    Li, Yangming; Olson, Edwin B

    2010-01-01

    Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.

  10. Edge corrections to electromagnetic Casimir energies from general-purpose Mathieu-function routines

    Science.gov (United States)

    Blose, Elizabeth Noelle; Ghimire, Biswash; Graham, Noah; Stratton-Smith, Jeremy

    2015-01-01

    Scattering theory methods make it possible to calculate the Casimir energy of a perfectly conducting elliptic cylinder opposite a perfectly conducting plane in terms of Mathieu functions. In the limit of zero radius, the elliptic cylinder becomes a finite-width strip, which allows for the study of edge effects. However, existing packages for computing Mathieu functions are insufficient for this calculation because none can compute Mathieu functions of both the first and second kind for complex arguments. To address this shortcoming, we have written a general-purpose Mathieu-function package, based on algorithms developed by Alhargan. We use these routines to find edge corrections to the proximity force approximation for the Casimir energy of a perfectly conducting strip opposite a perfectly conducting plane.

  11. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    Science.gov (United States)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general purpose connections type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging. The CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between an IP telephone extension and a VoIP gateway connected to outside line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones or smart-phones in mobile environments.

  12. The General-Purpose Heat Source Radioisotope Thermoelectric Generator: Power for the Galileo and Ulysses missions

    International Nuclear Information System (INIS)

    Bennett, G.L.; Lombardo, J.J.; Hemler, R.J.; Peterson, J.R.

    1986-01-01

    Electrical power for NASA's Galileo mission to Jupiter and ESA's Ulysses mission to explore the polar regions of the Sun will be provided by General-Purpose Heat Source Radioisotope Thermo-electric Generators (GPHS-RTGs). Building upon the successful RTG technology used in the Voyager program, each GPHS-RTG will provide at least 285 W(e) at beginning-of-mission. The design concept has been proven through extensive tests of an electrically heated Engineering Unit and a nuclear-heated Qualification Unit. Four flight generators have been successfully assembled and tested for use on the Galileo and Ulysses spacecraft. All indications are that the GPHS-RTGs will meet or exceed the power requirement of the missions

  13. INGEN: a general-purpose mesh generator for finite element codes

    International Nuclear Information System (INIS)

    Cook, W.A.

    1979-05-01

    INGEN is a general-purpose mesh generator for two- and three-dimensional finite element codes. The basic parts of the code are surface and three-dimensional region generators that use linear-blending interpolation formulas. These generators are based on an i, j, k index scheme that is used to number nodal points, construct elements, and develop displacement and traction boundary conditions. This code can generate truss elements (2 nodal points); plane stress, plane strain, and axisymmetric two-dimensional continuum elements (4 to 8 nodal points); plate elements (4 to 8 nodal points); and three-dimensional continuum elements (8 to 21 nodal points). The traction loads generated are consistent with the elements generated. The expansion-contraction option is of special interest. This option makes it possible to change an existing mesh such that some regions are refined and others are made coarser than the original mesh. 9 figures
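
    The linear-blending interpolation mentioned above is essentially transfinite (Coons-patch) interpolation: interior nodes of an (i, j) block are blended from its four boundary curves. The sketch below is a generic illustration of that formula, not the INGEN code; the boundary curves are arbitrary.

        # Illustrative linear-blending (Coons/transfinite) interpolation for a 2D block.
        import numpy as np

        def coons_patch(bottom, top, left, right):
            """Interior points from four boundary point arrays.
            bottom/top: shape (ni, 2); left/right: shape (nj, 2); corners must match."""
            ni, nj = bottom.shape[0], left.shape[0]
            u = np.linspace(0.0, 1.0, ni)[:, None, None]      # index i -> parameter u
            v = np.linspace(0.0, 1.0, nj)[None, :, None]      # index j -> parameter v
            ruled_v = (1 - v) * bottom[:, None, :] + v * top[:, None, :]
            ruled_u = (1 - u) * left[None, :, :] + u * right[None, :, :]
            corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
                       + (1 - u) * v * top[0] + u * v * top[-1])
            return ruled_v + ruled_u - corners                # classic linear-blending formula

        if __name__ == "__main__":
            ni, nj = 5, 4
            s, t = np.linspace(0, 1, ni), np.linspace(0, 1, nj)
            bottom = np.stack([s, 0.2 * np.sin(np.pi * s)], axis=1)   # curved lower edge
            top = np.stack([s, np.ones_like(s)], axis=1)
            left = np.stack([np.zeros_like(t), t], axis=1)
            right = np.stack([np.ones_like(t), t], axis=1)
            grid = coons_patch(bottom, top, left, right)              # shape (ni, nj, 2)
            print(grid.shape)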

  14. Development of general-purpose software to analyze the static thermal characteristic of nuclear power plant

    International Nuclear Information System (INIS)

    Nakao, Yoshinobu; Koda, Eiichi; Takahashi, Toru

    2009-01-01

    We have developed general-purpose software with which the static thermal characteristics of a power generation system can be analyzed easily. This software has the following notable features. It has a new algorithm for solving the non-linear simultaneous equations used to analyze static thermal characteristics such as heat and mass balance, efficiencies, etc. of various power generation systems. It offers flexibility in setting calculation conditions. It can be executed easily and quickly on a personal computer. We confirmed that it can construct heat and mass balance diagrams of the main steam system of a nuclear power plant and calculate the power output and efficiencies of the system. Furthermore, we evaluated various heat recovery measures for steam generator blowdown water and found that this software could be a useful operation aid for planning effective changes in support of power stretch. (author)
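
    The heart of such a tool is the solution of a set of non-linear simultaneous balance equations. The sketch below shows the idea on a deliberately tiny two-equation heat-balance model solved with scipy; the model and all constants are illustrative placeholders, not the plant model or the algorithm described in the record.

        # Illustrative solution of a tiny static heat-and-mass-balance as non-linear
        # simultaneous equations. All numbers and the model itself are assumed placeholders.
        from scipy.optimize import fsolve

        Q_REACTOR = 3.0e9        # thermal power (W), illustrative
        H_STEAM = 2.77e6         # steam enthalpy (J/kg), illustrative
        CP_WATER = 4.2e3         # J/(kg K)
        T_CONDENSATE = 305.0     # K
        Q_HEATERS = 2.0e8        # feedwater heating duty (W), illustrative

        def balance(x):
            steam_flow, t_feed = x   # unknowns: steam flow (kg/s), feedwater temperature (K)
            # Energy balances over the steam generator and over the feedwater train.
            eq1 = Q_REACTOR - steam_flow * (H_STEAM - CP_WATER * (t_feed - 273.15))
            eq2 = Q_HEATERS - steam_flow * CP_WATER * (t_feed - T_CONDENSATE)
            return [eq1, eq2]

        solution = fsolve(balance, x0=[1000.0, 400.0])
        print("steam flow = %.1f kg/s, feedwater T = %.1f K" % tuple(solution))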

  15. The ICVSIE: A General Purpose Integral Equation Method for Bio-Electromagnetic Analysis.

    Science.gov (United States)

    Gomez, Luis J; Yucel, Abdulkadir C; Michielssen, Eric

    2018-03-01

    An internally combined volume surface integral equation (ICVSIE) for analyzing electromagnetic (EM) interactions with biological tissue and wide ranging diagnostic, therapeutic, and research applications, is proposed. The ICVSIE is a system of integral equations in terms of volume and surface equivalent currents in biological tissue subject to fields produced by externally or internally positioned devices. The system is created by using equivalence principles and solved numerically; the resulting current values are used to evaluate scattered and total electric fields, specific absorption rates, and related quantities. The validity, applicability, and efficiency of the ICVSIE are demonstrated by EM analysis of transcranial magnetic stimulation, magnetic resonance imaging, and neuromuscular electrical stimulation. Unlike previous integral equations, the ICVSIE is stable regardless of the electric permittivities of the tissue or frequency of operation, providing an application-agnostic computational framework for EM-biomedical analysis. Use of the general purpose and robust ICVSIE permits streamlining the development, deployment, and safety analysis of EM-biomedical technologies.

  16. Real-time traffic sign recognition based on a general purpose GPU and deep-learning.

    Science.gov (United States)

    Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or widely varying light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low illumination environments. Both detection and hierarchical recognition are performed in real-time, and the proposed method achieves a 0.97 F1-score on our collective dataset, which uses the Vienna convention traffic rules (Germany and South Korea).

  17. Interfacing a General Purpose Fluid Network Flow Program with the SINDA/G Thermal Analysis Program

    Science.gov (United States)

    Schallhorn, Paul; Popok, Daniel

    1999-01-01

    A general purpose, one dimensional fluid flow code is currently being interfaced with the thermal analysis program Systems Improved Numerical Differencing Analyzer/Gaski (SINDA/G). The flow code, Generalized Fluid System Simulation Program (GFSSP), is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.
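
    The quasi-steady coupling described above amounts to an iteration in which a fluid solver and a thermal solver exchange wall temperatures and convective heat loads until they agree. The sketch below illustrates that loop with toy stand-ins for the two solvers; it is not GFSSP or SINDA/G, and every number is an assumed placeholder.

        # Illustrative quasi-steady conjugate heat transfer coupling loop. The toy models
        # below stand in for the fluid and thermal solvers; all values are assumptions.
        H_CONV, AREA = 150.0, 2.0        # convective coefficient W/(m^2 K), wetted area m^2
        MDOT_CP = 500.0                  # fluid mass flow * specific heat, W/K
        T_IN, T_AMB = 300.0, 290.0       # K
        Q_GEN, K_LOSS = 5.0e3, 100.0     # solid heat generation (W) and loss conductance (W/K)

        def fluid_solver(t_wall, t_bulk):
            """Toy fluid network: convective load and updated bulk fluid temperature."""
            q_conv = H_CONV * AREA * (t_wall - t_bulk)
            return q_conv, T_IN + q_conv / (2.0 * MDOT_CP)   # mean of inlet/outlet temperatures

        def thermal_solver(t_bulk):
            """Toy thermal model: steady wall temperature from a solid energy balance."""
            return (Q_GEN + H_CONV * AREA * t_bulk + K_LOSS * T_AMB) / (H_CONV * AREA + K_LOSS)

        t_wall, t_bulk = 350.0, T_IN      # initial guesses
        for _ in range(50):
            q_conv, t_bulk = fluid_solver(t_wall, t_bulk)
            t_wall_new = thermal_solver(t_bulk)
            if abs(t_wall_new - t_wall) < 1e-6:
                break
            t_wall = t_wall_new
        print("converged: T_wall=%.2f K, T_bulk=%.2f K, q_conv=%.0f W" % (t_wall, t_bulk, q_conv))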

  18. A low cost general purpose portable programmable master/slave manipulative appliance

    International Nuclear Information System (INIS)

    Cameron, W.

    1984-01-01

    The TRIUMF 100 μA 500 MeV cyclotron, located at the University of British Columbia, required a low cost, portable master/slave manipulative capability for experimental beam line servicing. A programmable capability was also required for the hot cell manipulators. A general purpose unit was developed that might also have applications in light manufacturing and medical rehabilitation. The project, now in prototype testing, represents a modular portable robot costing less than $5000 that is lead-through-teach programmable by either a master controller or hands-on lead-through. Task programs are stored and retrieved on any 32 k personal computer. An on-board proportional integral derivative controller (Motorola 6809 based) gives discrete positioning of the six-degree-of-freedom, 2 kg capacity end effector
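
    For illustration of the on-board control scheme mentioned above, here is a minimal discrete PID position loop in Python. The gains, sample time, and first-order plant are invented placeholders, not the Motorola 6809 firmware or the actual manipulator dynamics.

        # Minimal discrete PID position loop; gains and the toy plant are illustrative assumptions.
        DT = 0.01                     # control period (s), assumed
        KP, KI, KD = 8.0, 2.0, 0.5    # illustrative gains

        def pid_step(setpoint, measured, state):
            error = setpoint - measured
            state["integral"] += error * DT
            derivative = (error - state["prev_error"]) / DT
            state["prev_error"] = error
            return KP * error + KI * state["integral"] + KD * derivative

        # Toy joint: position responds to the commanded output like a simple integrator.
        position = 0.0
        state = {"integral": 0.0, "prev_error": 0.0}
        for _ in range(500):
            command = pid_step(setpoint=1.0, measured=position, state=state)
            position += 0.5 * command * DT
        print("position after 5 s: %.3f" % position)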

  19. Simrank: Rapid and sensitive general-purpose k-mer search tool

    Energy Technology Data Exchange (ETDEWEB)

    DeSantis, T.Z.; Keller, K.; Karaoz, U.; Alekseyenko, A.V; Singh, N.N.S.; Brodie, E.L; Pei, Z.; Andersen, G.L; Larsen, N.

    2011-04-01

    Terabyte-scale collections of string-encoded data are expected from consortia efforts such as the Human Microbiome Project (http://nihroadmap.nih.gov/hmp). Intra- and inter-project data similarity searches are enabled by rapid k-mer matching strategies. Software applications for sequence database partitioning, guide tree estimation, molecular classification and alignment acceleration have benefited from embedded k-mer searches as sub-routines. However, a rapid, general-purpose, open-source, flexible, stand-alone k-mer tool has not been available. Here we present a stand-alone utility, Simrank, which allows users to rapidly identify the database strings most similar to query strings. Performance testing of Simrank and related tools against DNA, RNA, protein and human-language datasets found Simrank to be 10X to 928X faster, depending on the dataset. Simrank provides molecular ecologists with a high-throughput, open source choice for comparing large sequence sets to find similarity.
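
    The k-mer matching idea can be sketched in a few lines: decompose the query into k-mers and rank database sequences by the fraction of those k-mers they contain. The Python below illustrates the principle only, not the Simrank implementation; the sequences and the choice k=7 are arbitrary.

        # Illustrative k-mer based similarity ranking (not the Simrank implementation).
        def kmers(seq, k=7):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def rank_by_kmer_similarity(query, database, k=7):
            q = kmers(query, k)
            scores = []
            for name, seq in database.items():
                shared = len(q & kmers(seq, k))
                scores.append((shared / len(q), name))   # fraction of the query's k-mers found
            return sorted(scores, reverse=True)

        database = {                        # toy sequences, purely illustrative
            "seqA": "ACGTACGTGGCTAAGCTTACGGATCCAGT",
            "seqB": "TTTTTTTTTTTTTTTTTTTTTTTTTTTTT",
            "seqC": "ACGTACGTGGCTAAGCTAACGGATCCAGT",
        }
        query = "ACGTACGTGGCTAAGCTTACGGATCC"
        for score, name in rank_by_kmer_similarity(query, database):
            print("%s  %.2f" % (name, score))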

  20. A Real-Time Programmer's Tour of General-Purpose L4 Microkernels

    Directory of Open Access Journals (Sweden)

    Sergio Ruocco

    2008-02-01

    Full Text Available L4-embedded is a microkernel successfully deployed in mobile devices with soft real-time requirements. It now faces the challenges of tightly integrated systems, in which user interface, multimedia, OS, wireless protocols, and even software-defined radios must run on a single CPU. In this paper we discuss the pros and cons of L4-embedded for real-time systems design, focusing on the issues caused by the extreme speed optimisations it inherited from its general-purpose ancestors. Since these issues can be addressed with a minimal performance loss, we conclude that, overall, the design of real-time systems based on L4-embedded is possible, and facilitated by a number of design features unique to microkernels and the L4 family.

  1. General-purpose stepping motor-encoder positioning subsystem with standard asynchronous serial-line interface

    International Nuclear Information System (INIS)

    Stubblefield, F.W.; Alberi, J.L.

    1982-01-01

    A general-purpose mechanical positioning subsystem for open-loop control of experiment devices which have their positions established and read out by stepping motor-encoder combinations has been developed. The subsystem is to be used mainly for experiments to be conducted at the National Synchrotron Light Source at Brookhaven National Laboratory. The subsystem unit has been designed to be compatible with a wide variety of stepping motor and encoder types. The unit may be operated by any device capable of driving a standard RS-232-C asynchronous serial communication line. An informal survey has shown that several experiments at the Light Source will use one particular type of computer, operating system, and programming language. Accordingly, a library of subroutines compatible with this combination of computer system elements has been written to facilitate driving the positioning subsystem unit
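
    A host program drives such a unit by writing ASCII commands over the RS-232-C line and reading back replies. The sketch below shows what that could look like with pyserial; the command strings ("MOVE", "POS?"), the reply format, and the port name are invented for illustration, since the unit's actual command set is not given in the record.

        # Hypothetical host-side driver sketch for an RS-232-C positioning subsystem.
        # The command syntax and replies are invented; only the serial settings (9600 baud,
        # asynchronous ASCII) follow the record above.
        import serial  # pyserial

        def move_axis(port, axis, steps, timeout=2.0):
            with serial.Serial(port, baudrate=9600, timeout=timeout) as link:
                link.write(f"MOVE {axis} {steps}\r\n".encode("ascii"))   # hypothetical move command
                reply = link.readline().decode("ascii").strip()          # e.g. "OK" (assumed)
                link.write(f"POS? {axis}\r\n".encode("ascii"))           # hypothetical position query
                position = link.readline().decode("ascii").strip()
                return reply, position

        if __name__ == "__main__":
            print(move_axis("/dev/ttyS0", axis=1, steps=400))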

  2. Development and application of General Purpose Data Acquisition Shell (GPDAS) at advanced photon source

    International Nuclear Information System (INIS)

    Chung, Youngjoo; Kim, Keeman.

    1991-01-01

    An operating system shell GPDAS (General Purpose Data Acquisition Shell) on MS-DOS-based microcomputers has been developed to provide flexibility in data acquisition and device control for magnet measurements at the Advanced Photon Source. GPDAS is both a command interpreter and an integrated script-based programming environment. It also incorporates the MS-DOS shell to make use of the existing utility programs for file manipulation and data analysis. Features include: alias definition, virtual memory, windows, graphics, data and procedure backup, background operation, script programming language, and script level debugging. Data acquisition system devices can be controlled through an IEEE-488 board, a multifunction I/O board, a digital I/O board, and a Gespac crate via the Euro G-64 bus. GPDAS is now being used for diagnostics R&D and accelerator physics studies as well as for magnet measurements. Their hardware configurations will also be discussed. 3 refs., 3 figs

  3. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
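
    As background for the decompression side, the sketch below shows basic Golomb-Rice decoding of a sample stream (a unary-coded quotient followed by a k-bit remainder). The adaptive Rice variants used for flight data add option selection and block headers that are omitted here, and the GPU mapping itself is not shown; this is purely an illustration of the core per-sample decode.

        # Illustrative Golomb-Rice decode (and a tiny encoder used only to build test input).
        # Not the flight algorithm or the GPGPU implementation described in the record.
        def rice_decode(bits, k, n_samples):
            """Decode n_samples values from a list of 0/1 bits with Rice parameter k."""
            out, pos = [], 0
            for _ in range(n_samples):
                q = 0
                while bits[pos] == 0:          # unary-coded quotient: count zeros until a 1
                    q += 1
                    pos += 1
                pos += 1                       # skip the terminating 1
                r = 0
                for _ in range(k):             # k-bit binary remainder, MSB first
                    r = (r << 1) | bits[pos]
                    pos += 1
                out.append((q << k) | r)
            return out

        def rice_encode(values, k):
            bits = []
            for v in values:
                bits += [0] * (v >> k) + [1]
                bits += [(v >> i) & 1 for i in range(k - 1, -1, -1)]
            return bits

        values = [3, 0, 12, 5, 7]
        assert rice_decode(rice_encode(values, k=2), k=2, n_samples=len(values)) == values
        print("round trip OK")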

  4. Literature Review: Weldability of Iridium DOP-26 Alloy for General Purpose Heat Source

    Energy Technology Data Exchange (ETDEWEB)

    Burgardt, Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pierce, Stanley W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-19

    The basic purpose of this paper is to provide a literature review relative to fabrication of the General Purpose Heat Source (GPHS) that is used to provide electrical power for deep space missions of NASA. The particular fabrication operation to be addressed here is arc welding of the GPHS encapsulation. A considerable effort was made to optimize the fabrication of the fuel pellets and of other elements of the encapsulation; that work will not be directly addressed in this paper. This report consists of three basic sections: 1) a brief description of the GPHS will be provided as background information for the reader; 2) mechanical properties and the optimization thereof as relevant to welding will be discussed; 3) a review of the arc welding process development and optimization will be presented. Since the welding equipment must be upgraded for future production, some discussion of the historical establishment of relevant welding variables and possible changes thereto will also be discussed.

  5. Developing wearable bio-feedback systems: a general-purpose platform.

    Science.gov (United States)

    Bianchi, Luigi; Babiloni, Fabio; Cincotti, Febo; Arrivas, Marco; Bollero, Patrizio; Marciani, Maria Grazia

    2003-06-01

    Microprocessors, even those in PocketPCs, have adequate power for many real-time biofeedback applications for disabled people. This power allows design of portable or wearable devices that are smaller and lighter, and that have longer battery life compared to notebook-based systems. In this paper, we discuss a general-purpose hardware/software solution based on industrial or consumer devices and a C++ framework. Its flexibility and modularity make it adaptable to a wide range of situations. Moreover, its design minimizes system requirements and programming effort, thus allowing efficient systems to be built quickly and easily. Our design has been used to build two brain computer interface systems that were easily ported from the Win32 platform.

  6. General-purpose heat source safety verification test series: SVT-11 through SVT-13

    International Nuclear Information System (INIS)

    George, T.G.; Pavone, D.

    1986-05-01

    The General-Purpose Heat Source (GPHS) is a modular component of the radioisotope thermoelectric generator that will provide power for the Galileo and Ulysses (formerly ISPM) space missions. The GPHS provides power by transmitting the heat of 238 Pu α-decay to an array of thermoelectric elements. Because the possibility of an orbital abort always exists, the heat source was designed and constructed to minimize plutonia release in any accident environment. The Safety Verification Test (SVT) series was formulated to evaluate the effectiveness of GPHS plutonia containment after atmospheric reentry and Earth impact. The first two reports (covering SVT-1 through SVT-10) described the results of flat, side-on, and angular module impacts against steel targets at 54 m/s. This report describes flat-on module impacts against concrete and granite targets, at velocities equivalent to or higher than previous SVTs

  7. ''Sheiva'' : a general purpose multi-parameter data acquisition and processing system at VECC

    International Nuclear Information System (INIS)

    Viyogi, Y.P.; Ganguly, N.K.

    1982-01-01

    General-purpose interactive software to be used with the PDP-15/76 on-line computer at VEC Centre for the acquisition and processing of data in nuclear physics experiments is described. The program can accommodate a maximum of thirty-two inputs, although the present hardware limits the number of inputs to eight. Particular emphasis is given to the problems of flexibility and ease of operation, memory optimisation and techniques dealing with experimenter-computer interaction. Various graphical methods for one- and two-dimensional data presentation are discussed. Specific problems of particle identification using detector telescopes have been dealt with carefully to handle experiments using several detector telescopes and those involving light particle-heavy particle coincidence studies. Steps needed to tailor this program towards utilisation for special experiments are also described. (author)

  8. A general purpose subroutine for fast fourier transform on a distributed memory parallel machine

    Science.gov (United States)

    Dubey, A.; Zubair, M.; Grosch, C. E.

    1992-01-01

    One issue which is central in developing a general purpose Fast Fourier Transform (FFT) subroutine on a distributed memory parallel machine is the data distribution. It is possible that different users would like to use the FFT routine with different data distributions. Thus, there is a need to design FFT schemes on distributed memory parallel machines which can support a variety of data distributions. An FFT implementation on a distributed memory parallel machine which works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation on the Intel iPSC/860 distributed memory parallel machine is evaluated.
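
    A common way to support block-row data distributions is the transpose-based scheme: each node transforms the rows it owns, the array is globally transposed, and the remaining dimension is transformed. The numpy sketch below emulates that structure on a single machine (the all-to-all communication is replaced by an in-memory transpose); it illustrates the algorithmic idea only, not the paper's iPSC/860 implementation.

        # Conceptual transpose-based 2D FFT for a block-row data distribution, emulated
        # with numpy on one machine; the global transpose stands in for communication.
        import numpy as np

        def distributed_fft2(x, n_nodes=4):
            rows = np.array_split(x, n_nodes, axis=0)                 # block-row distribution
            stage1 = [np.fft.fft(block, axis=1) for block in rows]    # local row FFTs per node
            transposed = np.concatenate(stage1, axis=0).T             # emulated global transpose
            cols = np.array_split(transposed, n_nodes, axis=0)
            stage2 = [np.fft.fft(block, axis=1) for block in cols]    # local FFTs again
            return np.concatenate(stage2, axis=0).T                   # transpose back

        x = np.random.default_rng(1).normal(size=(8, 8))
        assert np.allclose(distributed_fft2(x), np.fft.fft2(x))
        print("matches np.fft.fft2")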

  9. ABAQUS-EPGEN: a general-purpose finite-element code. Volume 1. User's manual

    International Nuclear Information System (INIS)

    Hibbitt, H.D.; Karlsson, B.I.; Sorensen, E.P.

    1982-10-01

    This document is the User's Manual for ABAQUS/EPGEN, a general purpose finite element computer program, designed specifically to serve advanced structural analysis needs. The program contains very general libraries of elements, materials and analysis procedures, and is highly modular, so that complex combinations of features can be put together to model physical problems. The program is aimed at production analysis needs, and for this purpose aspects such as ease-of-use, reliability, flexibility and efficiency have received maximum attention. The input language is designed to make it straightforward to describe complicated models; the analysis procedures are highly automated with the program choosing time or load increments based on user supplied tolerances and controls; and the program offers a wide range of post-processing options for display of the analysis results

  10. A general-purpose process modelling framework for marine energy systems

    International Nuclear Information System (INIS)

    Dimopoulos, George G.; Georgopoulou, Chariklia A.; Stefanatos, Iason C.; Zymaris, Alexandros S.; Kakalis, Nikolaos M.P.

    2014-01-01

    Highlights: • Process modelling techniques applied in marine engineering. • Systems engineering approaches to manage the complexity of modern ship machinery. • General purpose modelling framework called COSSMOS. • Mathematical modelling of conservation equations and related chemical-transport phenomena. • Generic library of ship machinery component models. - Abstract: High fuel prices, environmental regulations and current shipping market conditions require ships to operate in a more efficient and greener way. These drivers lead to the introduction of new technologies, fuels, and operations, increasing the complexity of modern ship energy systems. As a means to manage this complexity, in this paper we present the introduction of systems engineering methodologies in marine engineering via the development of a general-purpose process modelling framework for ships named DNV COSSMOS. Shifting the focus from components (the standard approach in shipping) to systems widens the space for optimal design and operation solutions. The associated computer implementation of COSSMOS is a platform that models, simulates and optimises integrated marine energy systems with respect to energy efficiency, emissions, safety/reliability and costs, under both steady-state and dynamic conditions. DNV COSSMOS can be used in assessment and optimisation of design and operation problems in existing vessels, new builds as well as new technologies. The main features and our modelling approach are presented and key capabilities are illustrated via two studies on the thermo-economic design and operation optimisation of a combined cycle system for large bulk carriers, and the transient operation simulation of an electric marine propulsion system

  11. Design of the SLAC RCE Platform: A General Purpose ATCA Based Data Acquisition System

    International Nuclear Information System (INIS)

    Herbst, R.; Claus, R.; Freytag, M.; Haller, G.; Huffer, M.; Maldonado, S.; Nishimura, K.; O'Grady, C.; Panetta, J.; Perazzo, A.; Reese, B.; Ruckman, L.; Thayer, J.G.; Weaver, M.

    2015-01-01

    The SLAC RCE platform is a general purpose clustered data acquisition system implemented on a custom ATCA compliant blade, called the Cluster On Board (COB). The core of the system is the Reconfigurable Cluster Element (RCE), which is a system-on-chip design based upon the Xilinx Zynq family of FPGAs, mounted on custom COB daughter-boards. The Zynq architecture couples a dual core ARM Cortex A9 based processor with a high performance 28 nm FPGA. The RCE has 12 external general purpose bi-directional high speed links, each supporting serial rates of up to 12 Gbps. Eight RCE nodes are included on a COB, each with a 10 Gbps connection to an on-board 24-port Ethernet switch integrated circuit. The COB is designed to be used with a standard full-mesh ATCA backplane allowing multiple RCE nodes to be tightly interconnected with minimal interconnect latency. Multiple shelves can be clustered using the front panel 10 Gbps connections. The COB also supports local and inter-blade timing and trigger distribution. An experiment specific Rear Transition Module adapts the 96 high speed serial links to specific experiments and allows an experiment-specific timing and busy feedback connection. This coupling of processors with a high performance FPGA fabric in a low latency, multiple node cluster allows high speed data processing that can be easily adapted to any physics experiment. RTEMS and Linux are both ported to the module. The RCE has been used or is the baseline for several current and proposed experiments (LCLS, HPS, LSST, ATLAS-CSC, LBNE, DarkSide, ILC-SiD, etc).

  12. Color and motion-based particle filter target tracking in a network of overlapping cameras with multi-threading and GPGPU

    Directory of Open Access Journals (Sweden)

    Jorge Francisco Madrigal Díaz

    2013-03-01

    Full Text Available This paper describes an efficient implementation of multiple-target, multiple-view tracking in video-surveillance sequences. It takes advantage of the capabilities of multiple-core Central Processing Units (CPUs) and of graphical processing units under the Compute Unified Device Architecture (CUDA) framework. The principle of our algorithm is (1) in each video sequence, to track each person of interest with an independent particle filter and (2) to fuse the tracking results of all sequences. Particle filters belong to the category of recursive Bayesian filters. They update a Monte-Carlo representation of the posterior distribution over the target position and velocity. For this purpose, they combine a probabilistic motion model, i.e. prior knowledge about how targets move (e.g. constant velocity), and a likelihood model associated to the observations on targets. At this first level of single video sequences, the multi-threading library Threading Building Blocks (TBB) has been used to parallelize the processing of the per-target independent particle filters. Afterwards, at the higher level, we rely on General Purpose Programming on Graphical Processing Units (generally termed GPGPU) through CUDA in order to fuse target-tracking data collected on multiple video sequences, by solving the data association problem. Tracking results are presented on various challenging tracking datasets.
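
    The per-target filter described above follows the usual predict/weight/resample cycle with a constant-velocity motion model. The sketch below shows one such cycle in Python; the colour likelihood is replaced by a simple Gaussian distance likelihood, and all noise levels and measurements are invented, so it only illustrates the structure of the algorithm, not the paper's GPU/TBB implementation.

        # One predict/weight/resample cycle of a particle filter with a constant-velocity
        # motion model. The likelihood is a stand-in; all numbers are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000
        particles = np.zeros((N, 4))                        # state per particle: [x, y, vx, vy]
        particles[:, :2] = rng.normal([100.0, 80.0], 5.0, size=(N, 2))

        DT, PROC_STD, MEAS_STD = 1.0, 2.0, 4.0

        def pf_step(particles, measurement):
            # Predict: constant-velocity motion model plus process noise.
            particles[:, :2] += particles[:, 2:] * DT
            particles += rng.normal(0.0, PROC_STD, size=particles.shape)
            # Weight: likelihood of the (stand-in) observation given each particle.
            d2 = np.sum((particles[:, :2] - measurement) ** 2, axis=1)
            weights = np.exp(-0.5 * d2 / MEAS_STD ** 2)
            weights /= weights.sum()
            # Resample: multinomial resampling proportional to the weights.
            idx = rng.choice(N, size=N, p=weights)
            return particles[idx]

        for z in ([102.0, 83.0], [105.0, 86.0], [108.0, 88.0]):
            particles = pf_step(particles, np.array(z))
        print("estimated position:", particles[:, :2].mean(axis=0))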

  13. CASPER: Embedding Power Estimation and Hardware-Controlled Power Management in a Cycle-Accurate Micro-Architecture Simulation Platform for Many-Core Multi-Threading Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Arun Ravindran

    2012-02-01

    Full Text Available Despite the promising performance improvement observed in emerging many-core architectures in high performance processors, high power consumption prohibitively affects their use and marketability in the low-energy sectors, such as embedded processors, network processors and application specific instruction processors (ASIPs). While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust instruction trace driven cycle-accurate many-core multi-threading micro-architecture simulation platform where we have incorporated power estimation models of a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally CASPER is designed to accommodate cycle-accurate models of hardware controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER: Chipwide Dynamic Voltage and Frequency Scaling, and Performance-Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2% on a baseline 4-core SPARC based architecture respectively. This power saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9

  14. Design of a general-purpose European compound screening library for EU-OPENSCREEN.

    Science.gov (United States)

    Horvath, Dragos; Lisurek, Michael; Rupp, Bernd; Kühne, Ronald; Specker, Edgar; von Kries, Jens; Rognan, Didier; Andersson, C David; Almqvist, Fredrik; Elofsson, Mikael; Enqvist, Per-Anders; Gustavsson, Anna-Lena; Remez, Nikita; Mestres, Jordi; Marcou, Gilles; Varnek, Alexander; Hibert, Marcel; Quintana, Jordi; Frank, Ronald

    2014-10-01

    This work describes a collaborative effort to define and apply a protocol for the rational selection of a general-purpose screening library, to be used by the screening platforms affiliated with the EU-OPENSCREEN initiative. It is designed as a standard source of compounds for primary screening against novel biological targets, at the request of research partners. Given the general nature of the potential applications of this compound collection, the focus of the selection strategy lies on ensuring chemical stability, absence of reactive compounds, screening-compliant physicochemical properties, loose compliance to drug-likeness criteria (as drug design is a major, but not exclusive application), and maximal diversity/coverage of chemical space, aimed at providing hits for a wide spectrum of druggable targets. Finally, practical availability/cost issues cannot be avoided. The main goal of this publication is to inform potential future users of this library about its conception, sources, and characteristics. The outline of the selection procedure, notably of the filtering rules designed by a large committee of European medicinal chemists and chemoinformaticians, may be of general methodological interest for the screening/medicinal chemistry community. The selection task of 200K molecules out of a pre-filtered set of 1.4M candidates was shared by five independent European research groups, each picking a subset of 40K compounds according to their own in-house methodology and expertise. An in-depth analysis of chemical space coverage of the library serves not only to characterize the collection, but also to compare the various chemoinformatics-driven selection procedures of maximal diversity sets. Compound selections contributed by various participating groups were mapped onto general-purpose self-organizing maps (SOMs) built on the basis of marketed drugs and bioactive reference molecules. In this way, the occupancy of chemical space by the EU-OPENSCREEN library could
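
    The filtering stage described above can be pictured as a cascade of property-based rules applied to candidate compounds. The sketch below is a schematic illustration with generic, invented thresholds and precomputed properties; it does not reproduce the actual EU-OPENSCREEN filter rules designed by the committee.

        # Schematic property-filter cascade for library selection. Thresholds and the
        # precomputed candidate properties are invented placeholders, not the real rules.
        candidates = [
            {"id": "cmpd-1", "mw": 342.4, "logp": 2.8, "hbd": 1, "hba": 5, "reactive": False},
            {"id": "cmpd-2", "mw": 612.9, "logp": 6.3, "hbd": 4, "hba": 9, "reactive": False},
            {"id": "cmpd-3", "mw": 287.3, "logp": 1.1, "hbd": 2, "hba": 4, "reactive": True},
        ]

        FILTERS = [
            ("molecular weight 200-550", lambda c: 200.0 <= c["mw"] <= 550.0),
            ("logP <= 5",                lambda c: c["logp"] <= 5.0),
            ("H-bond donors <= 5",       lambda c: c["hbd"] <= 5),
            ("H-bond acceptors <= 10",   lambda c: c["hba"] <= 10),
            ("no reactive groups",       lambda c: not c["reactive"]),
        ]

        def passes(compound):
            return all(rule(compound) for _, rule in FILTERS)

        library = [c["id"] for c in candidates if passes(c)]
        print("selected:", library)   # only cmpd-1 survives this toy cascade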

  15. General-Purpose Heat Source development: Safety Verification Test Program. Bullet/fragment test series

    Energy Technology Data Exchange (ETDEWEB)

    George, T.G.; Tate, R.E.; Axler, K.M.

    1985-05-01

    The radioisotope thermoelectric generator (RTG) that will provide power for space missions contains 18 General-Purpose Heat Source (GPHS) modules. Each module contains four 238 PuO 2 -fueled clads and generates 250 W(t). Because a launch-pad or post-launch explosion is always possible, we need to determine the ability of GPHS fueled clads within a module to survive fragment impact. The bullet/fragment test series, part of the Safety Verification Test Plan, was designed to provide information on clad response to impact by a compact, high-energy, aluminum-alloy fragment and to establish a threshold value of fragment energy required to breach the iridium cladding. Test results show that a velocity of 555 m/s (1820 ft/s) with an 18-g bullet is at or near the threshold value of fragment velocity that will cause a clad breach. Results also show that an exothermic Ir/Al reaction occurs if aluminum and hot iridium are in contact, a contact that is possible and most damaging to the clad within a narrow velocity range. The observed reactions between the iridium and the aluminum were studied in the laboratory and are reported in the Appendix.

  16. General-Purpose Heat Source Safety Verification Test program: Edge-on flyer plate tests

    International Nuclear Information System (INIS)

    George, T.G.

    1987-03-01

    The radioisotope thermoelectric generator (RTG) that will supply power for the Galileo and Ulysses space missions contains 18 General-Purpose Heat Source (GPHS) modules. The GPHS modules provide power by transmitting the heat of 238 Pu α-decay to an array of thermoelectric elements. Each module contains four 238 PuO 2 -fueled clads and generates 250 W(t). Because the possibility of a launch vehicle explosion always exists, and because such an explosion could generate a field of high-energy fragments, the fueled clads within each GPHS module must survive fragment impact. The edge-on flyer plate tests were included in the Safety Verification Test series to provide information on the module/clad response to the impact of high-energy plate fragments. The test results indicate that the edge-on impact of a 3.2-mm-thick, aluminum-alloy (2219-T87) plate traveling at 915 m/s causes the complete release of fuel from capsules contained within a bare GPHS module, and that the threshold velocity sufficient to cause the breach of a bare, simulant-fueled clad impacted by a 3.5-mm-thick, aluminum-alloy (5052-T0) plate is approximately 140 m/s

  17. Environmental assessment of general-purpose heat source safety verification testing

    International Nuclear Information System (INIS)

    1995-02-01

    This Environmental Assessment (EA) was prepared to identify and evaluate potential environmental, safety, and health impacts associated with the Proposed Action to test General-Purpose Heat Source (GPHS) Radioisotope Thermoelectric Generator (RTG) assemblies at the Sandia National Laboratories (SNL) 10,000-Foot Sled Track Facility, Albuquerque, New Mexico. RTGs are used to provide a reliable source of electrical power on board some spacecraft when solar power is inadequate during long duration space missions. These units are designed to convert heat from the natural decay of radioisotope fuel into electrical power. Impact test data are required to support DOE's mission to provide radioisotope power systems to NASA and other user agencies. The proposed tests will expand the available safety database regarding RTG performance under postulated accident conditions. Direct observations and measurements of GPHS/RTG performance upon impact with hard, unyielding surfaces are required to verify model predictions and to ensure the continual evolution of the RTG designs that perform safely under varied accident environments. The Proposed Action is to conduct impact testing of RTG sections containing GPHS modules with simulated fuel. End-On and Side-On impact test series are planned

  18. Evaluation and characterization of General Purpose Heat Source girth welds for the Cassini mission

    International Nuclear Information System (INIS)

    Lynch, C.M.; Moniz, P.F.; Reimus, M.A.H.

    1998-01-01

    General Purpose Heat Sources (GPHSs) are components of Radioisotope Thermoelectric Generators (RTGs) which provide electric power for deep space missions. Each GPHS consists of a 238 Pu oxide ceramic pellet encapsulated in a welded iridium alloy shell which forms a protective barrier against the release of plutonia in the unlikely event of a launch-pad failure or reentry incident. GPHS fueled clad girth weld flaw detection was paramount to ensuring this safety function, and was accomplished using both destructive and non-destructive evaluation techniques. The first girth weld produced from each welding campaign was metallographically examined for flaws such as incomplete weld penetration, cracks, or porosity which would render a GPHS unacceptable for flight applications. After an acceptable example weld was produced, the subsequently welded heat sources were evaluated non-destructively for flaws using ultrasonic immersion testing. Selected heat sources which failed ultrasonic testing would be radiographed and/or destructively evaluated to further characterize and document anomalous indications. Metallography was also performed on impacted heat sources to determine the condition of the welds

  19. General purpose nonlinear analysis program FINAS for elevated temperature design of FBR components

    International Nuclear Information System (INIS)

    Iwata, K.; Atsumo, H.; Kano, T.; Takeda, H.

    1982-01-01

    This paper presents currently available capabilities of a general purpose finite element nonlinear analysis program FINAS (FBR Inelastic Structural Analysis System) which has been developed at Power Reactor and Nuclear Fuel Development Corporation (PNC) since 1976 to support structural design of fast breeder reactor (FBR) components in Japan. This program is capable of treating inelastic responses of arbitrary complex structures subjected to static and dynamic load histories. Various types of finite element covering rods, beams, pipes, axisymmetric, two and three dimensional solids, plates and shells, are implemented in the program. The thermal elastic-plastic creep analysis is possible for each element type, with primary emphasis on the application to FBR components subjected to sustained or cyclic loads at elevated temperature. The program permits large deformation, buckling, fracture mechanics, and dynamic analyses for some of the element types and provides a number of options for automatic mesh generation and computer graphics. Some examples including elevated temperature effects are shown to demonstrate the accuracy and the efficiency of the program

  20. Nondestructive inspection of General Purpose Heat Source (GPHS) fueled clad girth welds

    International Nuclear Information System (INIS)

    Reimus, M. A. H.; George, T. G.; Lynch, C.; Padilla, M.; Moniz, P.; Guerrero, A.; Moyer, M. W.; Placr, A.

    1998-01-01

    The General-Purpose Heat Source (GPHS) provides power for space missions by transmitting the heat of 238 Pu decay to an array of thermoelectric elements. The GPHS is fabricated using an iridium-alloy to contain the 238 PuO 2 fuel pellet. GPHS capsules will be utilized in the upcoming Cassini mission to explore Saturn and its moons. The physical integrity of the girth weld is important to mission safety and performance. Because past experience had revealed a potential for initiation of small cracks in the girth weld overlap zone, a nondestructive inspection of each capsule weld is required. An ultrasonic method was used to inspect the welds of capsules fabricated for the Galileo mission. The instrument, transducer, and method used were state of the art at the time (early 1980s). The ultrasonic instrumentation and methods used to inspect the Cassini GPHSs was significantly upgraded from those used for the Galileo mission. GPHSs that had ultrasonic reflectors in excess of the reject specification level were subsequently inspected with radiography to provide additional engineering data used to accept/reject the heat source. This paper describes the Galileo-era ultrasonic instrumentation and methods and the subsequent upgrades made to support testing of Cassini GPHSs. Also discussed is the data obtained from radiographic examination and correlation to ultrasonic examination results

  1. Nondestructive inspection of General Purpose Heat Source (GPHS) fueled clad girth welds

    International Nuclear Information System (INIS)

    Reimus, M.A.; George, T.G.; Lynch, C.; Padilla, M.; Moniz, P.; Guerrero, A.; Moyer, M.W.; Placr, A.

    1998-01-01

    The General-Purpose Heat Source (GPHS) provides power for space missions by transmitting the heat of 238 Pu decay to an array of thermoelectric elements. The GPHS is fabricated using an iridium-alloy to contain the 238 PuO 2 fuel pellet. GPHS capsules will be utilized in the upcoming Cassini mission to explore Saturn and its moons. The physical integrity of the girth weld is important to mission safety and performance. Because past experience had revealed a potential for initiation of small cracks in the girth weld overlap zone, a nondestructive inspection of each capsule weld is required. An ultrasonic method was used to inspect the welds of capsules fabricated for the Galileo mission. The instrument, transducer, and method used were state of the art at the time (early 1980s). The ultrasonic instrumentation and methods used to inspect the Cassini GPHSs was significantly upgraded from those used for the Galileo mission. GPHSs that had ultrasonic reflectors in excess of the reject specification level were subsequently inspected with radiography to provide additional engineering data used to accept/reject the heat source. This paper describes the Galileo-era ultrasonic instrumentation and methods and the subsequent upgrades made to support testing of Cassini GPHSs. Also discussed is the data obtained from radiographic examination and correlation to ultrasonic examination results. copyright 1998 American Institute of Physics

  2. Applications of artificial intelligence to space station: General purpose intelligent sensor interface

    Science.gov (United States)

    Mckee, James W.

    1988-01-01

    This final report describes the accomplishments of the General Purpose Intelligent Sensor Interface task of the Applications of Artificial Intelligence to Space Station grant for the period from October 1, 1987 through September 30, 1988. Portions of the First Biannual Report not revised will not be included but only referenced. The goal is to develop an intelligent sensor system that will simplify the design and development of expert systems using sensors of the physical phenomena as a source of data. This research will concentrate on the integration of image processing sensors and voice processing sensors with a computer designed for expert system development. The result of this research will be the design and documentation of a system in which the user will not need to be an expert in such areas as image processing algorithms, local area networks, image processor hardware selection or interfacing, television camera selection, voice recognition hardware selection, or analog signal processing. The user will be able to access data from video or voice sensors through standard LISP statements without any need to know about the sensor hardware or software.

  3. Low-cost general purpose spectral display unit using an IBM PC

    International Nuclear Information System (INIS)

    Robinson, S.L.

    1985-10-01

    Many physics experiments require acquisition and analysis of spectral data. Commercial minicomputer-based multichannel analyzers collect detected counts at various energies, create a histogram of the counts in memory, and display the resultant spectra. They acquire data and provide the user-to-display interface. The system discussed separates functions into the three modular components of data acquisition, storage, and display. This decoupling of functions allows the experimenter to use any number of detectors for data collection before forwarding up to 64 spectra to the display unit, thereby increasing data throughput over that available with commercial systems. An IBM PC was chosen for the low-cost, general purpose display unit. Up to four spectra may be displayed simultaneously in different colors. The histogram saves 1024 channels per detector, 640 of which may be distinctly displayed per spectrum. The IEEE-488 standard provides the data path between the IBM PC and the data collection unit. Data is sent to the PC under interrupt control, using direct memory access. Display manipulations available via keyboard are also discussed.

  4. Transforming the ASDEX Upgrade discharge control system to a general-purpose plasma control platform

    International Nuclear Information System (INIS)

    Treutterer, Wolfgang; Cole, Richard; Gräter, Alexander; Lüddecke, Klaus; Neu, Gregor; Rapson, Christopher; Raupp, Gerhard; Zasche, Dieter; Zehetbauer, Thomas

    2015-01-01

    Highlights: • Control framework split into core and custom parts. • Core framework deployable in other fusion device environments. • Adaptable through customizable modules, plug-in support and generic interfaces. - Abstract: The ASDEX Upgrade Discharge Control System DCS is a modern and mature product, originally designed to regulate and supervise ASDEX Upgrade Tokamak plasma operation. At its core, DCS is based on a generic, versatile real-time software framework with a plugin architecture that makes it easy to combine, modify and extend control function modules in order to tailor the system to required features and let it continuously evolve with the progress of an experimental fusion device. Due to these properties, other fusion experiments such as the WEST project have expressed interest in adopting DCS. For this purpose, essential parts of DCS must be unpinned from the ASDEX Upgrade environment by exposing or introducing generalised interfaces. Re-organisation of the DCS modules allows distinguishing between intrinsic framework core functions and device-specific applications. In particular, DCS must be prepared for deployment in different system environments with their own realisations of user interface, pulse schedule preparation, parameter server, time and event distribution, diagnostic and actuator systems, network communication and data archiving. The article explains the principles of the revised DCS structure, derives the necessary interface definitions and describes the major steps to achieve the separation between the general-purpose framework and fusion-device-specific components.

  5. Transforming the ASDEX Upgrade discharge control system to a general-purpose plasma control platform

    Energy Technology Data Exchange (ETDEWEB)

    Treutterer, Wolfgang, E-mail: Wolfgang.Treutterer@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Gräter, Alexander [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Lüddecke, Klaus [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Neu, Gregor; Rapson, Christopher; Raupp, Gerhard; Zasche, Dieter; Zehetbauer, Thomas [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany)

    2015-10-15

    Highlights: • Control framework split into core and custom parts. • Core framework deployable in other fusion device environments. • Adaptable through customizable modules, plug-in support and generic interfaces. - Abstract: The ASDEX Upgrade Discharge Control System DCS is a modern and mature product, originally designed to regulate and supervise ASDEX Upgrade Tokamak plasma operation. At its core, DCS is based on a generic, versatile real-time software framework with a plugin architecture that makes it easy to combine, modify and extend control function modules in order to tailor the system to required features and let it continuously evolve with the progress of an experimental fusion device. Due to these properties, other fusion experiments such as the WEST project have expressed interest in adopting DCS. For this purpose, essential parts of DCS must be unpinned from the ASDEX Upgrade environment by exposing or introducing generalised interfaces. Re-organisation of the DCS modules allows distinguishing between intrinsic framework core functions and device-specific applications. In particular, DCS must be prepared for deployment in different system environments with their own realisations of user interface, pulse schedule preparation, parameter server, time and event distribution, diagnostic and actuator systems, network communication and data archiving. The article explains the principles of the revised DCS structure, derives the necessary interface definitions and describes the major steps to achieve the separation between the general-purpose framework and fusion-device-specific components.

  6. General-Purpose Heat Source Development: Safety Test Program. Postimpact evaluation, Design Iteration Test 3

    International Nuclear Information System (INIS)

    Schonfeld, F.W.; George, T.G.

    1984-07-01

    The General-Purpose Heat Source (GPHS) provides power for space missions by transmitting the heat of ²³⁸PuO₂ decay to thermoelectric elements. Because of the inevitable return of certain aborted missions, the heat source must be designed and constructed to survive both re-entry and Earth impact. The Design Iteration Test (DIT) series is part of an ongoing test program. In the third test (DIT-3), a full GPHS module was impacted at 58 m/s and 930 °C. The module impacted the target at an angle of 30° to the pole of the large faces. The four capsules used in DIT-3 survived impact with minimal deformation; no internal cracks other than in the regions indicated by Savannah River Plant (SRP) preimpact nondestructive testing were observed in any of the capsules. The 30° impact orientation used in DIT-3 was considerably less severe than the flat-on impact utilized in DIT-1 and DIT-2. The four capsules used in DIT-1 survived, while two of the capsules used in DIT-2 breached; a small quantity (approximately 50 μg) of ²³⁸PuO₂ was released from the capsules breached in the DIT-2 impact. All of the capsules used in DIT-1 and DIT-2 were severely deformed and contained large internal cracks. Postimpact analyses of the DIT-3 test components are described, with emphasis on weld structure and the behavior of defects identified by SRP nondestructive testing.

  7. ICECAP: an integrated, general-purpose, automation-assisted IC50/EC50 assay platform.

    Science.gov (United States)

    Li, Ming; Chou, Judy; King, Kristopher W; Jing, Jing; Wei, Dong; Yang, Liyu

    2015-02-01

    IC50 and EC50 values are commonly used to evaluate drug potency. Mass spectrometry (MS)-centric bioanalytical and biomarker labs are now conducting IC50/EC50 assays, which, if done manually, are tedious and error-prone. Existing bioanalytical sample preparation automation systems cannot meet IC50/EC50 assay throughput demand. A general-purpose, automation-assisted IC50/EC50 assay platform was developed to automate the calculations of spiking solutions and the matrix solutions preparation scheme, the actual spiking and matrix solutions preparations, as well as the flexible sample extraction procedures after incubation. In addition, the platform also automates the data extraction, nonlinear regression curve fitting, computation of IC50/EC50 values, graphing, and reporting. The automation-assisted IC50/EC50 assay platform can process the whole class of assays of varying assay conditions. In each run, the system can handle up to 32 compounds and up to 10 concentration levels per compound, and it greatly improves IC50/EC50 assay experimental productivity and data processing efficiency. © 2014 Society for Laboratory Automation and Screening.
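
    The abstract above refers to nonlinear regression curve fitting and computation of IC50 values. Purely as an illustration of that calculation, the sketch below fits a four-parameter logistic (Hill) curve with scipy; the concentration series, response values, and initial guesses are invented for the example and are not taken from the ICECAP platform.

```python
# Minimal sketch of an IC50 estimate via a four-parameter logistic fit.
# Concentrations, responses and names are illustrative only; ICECAP's
# actual regression model is not specified in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Hypothetical concentration series (nM) and measured responses (% activity).
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000])
resp = np.array([98, 97, 93, 85, 70, 48, 30, 15, 8, 5])

params, _ = curve_fit(four_pl, conc, resp, p0=[5, 100, 30, 1])
bottom, top, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.1f} nM (Hill slope {hill:.2f})")
```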

  8. Optimization of a general-purpose, actively scanned proton beamline for ocular treatments: Geant4 simulations.

    Science.gov (United States)

    Piersimoni, Pierluigi; Rimoldi, Adele; Riccardi, Cristina; Pirola, Michele; Molinelli, Silvia; Ciocca, Mario

    2015-03-08

    The Italian National Center for Hadrontherapy (CNAO, Centro Nazionale di Adroterapia Oncologica), a synchrotron-based hospital facility, started the treatment of patients within selected clinical trials in late 2011 and 2012 with actively scanned proton and carbon ion beams, respectively. The activation of a new clinical protocol for the irradiation of uveal melanoma using the existing general-purpose proton beamline is foreseen for late 2014. Beam characteristics and patient treatment setup need to be tuned to meet the specific requirements for such a type of treatment technique. The aim of this study is to optimize the CNAO transport beamline by adding passive components and minimizing air gap to achieve the optimal conditions for ocular tumor irradiation. The CNAO setup with the active and passive components along the transport beamline, as well as a human eye-modeled detector also including a realistic target volume, were simulated using the Monte Carlo Geant4 toolkit. The strong reduction of the air gap between the nozzle and patient skin, as well as the insertion of a range shifter plus a patient-specific brass collimator at a short distance from the eye, were found to be effective tools to be implemented. In perspective, this simulation toolkit could also be used as a benchmark for future developments and testing purposes on commercial treatment planning systems.

  9. TACO: a general-purpose tool for predicting cell-type-specific transcription factor dimers.

    Science.gov (United States)

    Jankowski, Aleksander; Prabhakar, Shyam; Tiuryn, Jerzy

    2014-03-19

    Cooperative binding of transcription factor (TF) dimers to DNA is increasingly recognized as a major contributor to binding specificity. However, it is likely that the set of known TF dimers is highly incomplete, given that they were discovered using ad hoc approaches, or through computational analyses of limited datasets. Here, we present TACO (Transcription factor Association from Complex Overrepresentation), a general-purpose standalone software tool that takes as input any genome-wide set of regulatory elements and predicts cell-type-specific TF dimers based on enrichment of motif complexes. TACO is the first tool that can accommodate motif complexes composed of overlapping motifs, a characteristic feature of many known TF dimers. Our method comprehensively outperforms existing tools when benchmarked on a reference set of 29 known dimers. We demonstrate the utility and consistency of TACO by applying it to 152 DNase-seq datasets and 94 ChIP-seq datasets. Based on these results, we uncover a general principle governing the structure of TF-TF-DNA ternary complexes, namely that the flexibility of the complex is correlated with, and most likely a consequence of, inter-motif spacing.
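
    To make the enrichment idea concrete, the toy sketch below asks whether a motif pair is overrepresented in a foreground set of regulatory elements relative to a background set, using a hypergeometric test. The counts and the choice of test are illustrative assumptions; TACO's actual statistical model is not described in this abstract.

```python
# Toy version of the enrichment question: is a motif complex (a motif pair at
# a fixed spacing/orientation) overrepresented in cell-type-specific regions?
# All counts below are invented; the test is a simple hypergeometric model.
from scipy.stats import hypergeom

background_regions = 50000     # all regulatory elements scanned
background_with_pair = 1200    # regions containing the motif complex
foreground_regions = 4000      # e.g. DNase-seq peaks for one cell type
foreground_with_pair = 260     # motif complex occurrences in the foreground

# P(X >= foreground_with_pair) under sampling without replacement.
p_value = hypergeom.sf(foreground_with_pair - 1,
                       background_regions,
                       background_with_pair,
                       foreground_regions)
fold = (foreground_with_pair / foreground_regions) / \
       (background_with_pair / background_regions)
print(f"fold enrichment = {fold:.2f}, hypergeometric p = {p_value:.2e}")
```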

  10. A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing

    CERN Document Server

    Annovi, A; The ATLAS collaboration; Castegnaro, A; Gatta, M

    2012-01-01

    We present a fast general-purpose algorithm for high-throughput clustering of data with a two-dimensional organization. The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes this algorithm especially well suited for problems where the data has high density, e.g. in the case of tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of applications. In ...
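
    A software analogue of the two-step scheme described above may help make it concrete: step one groups adjacent pixel hits into clusters, step two analyzes each cluster (here, a charge-weighted centroid). The FPGA pipeline and any detector-specific details are not reproduced; the hit map below is invented.

```python
# Step 1: group 8-connected pixel hits into clusters (breadth-first search).
# Step 2: analyze each cluster, here by computing a charge-weighted centroid.
from collections import deque

def cluster_hits(hits):
    """hits: dict {(row, col): charge}. Returns a list of cluster dicts."""
    unvisited = set(hits)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = {seed: hits[seed]}, deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in unvisited:
                        unvisited.remove(n)
                        cluster[n] = hits[n]
                        queue.append(n)
        clusters.append(cluster)
    return clusters

def centroid(cluster):
    q = sum(cluster.values())
    r = sum(p[0] * w for p, w in cluster.items()) / q
    c = sum(p[1] * w for p, w in cluster.items()) / q
    return r, c, q

hits = {(10, 10): 5.0, (10, 11): 3.0, (11, 10): 2.0, (40, 7): 4.0}
for cl in cluster_hits(hits):
    print(centroid(cl))
```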

  11. DNA Processing and Reassembly on General Purpose FPGA-based Development Boards

    Directory of Open Access Journals (Sweden)

    SZÁSZ Csaba

    2017-05-01

    Full Text Available The great majority of researchers involved in microelectronics generally agree that many scientific challenges in the life sciences have associated with them a powerful computational requirement that must be solved before scientific progress can be made. The current trend in Deoxyribonucleic Acid (DNA) computing technologies is to develop special hardware platforms capable of providing the needed processing performance at lower cost. In this endeavor, FPGA-based (Field Programmable Gate Array) configurations aimed at accelerating genome sequencing and reassembly play a leading role. This paper emphasizes the benefits and advantages of using general purpose FPGA-based development boards in DNA reassembly applications besides the special hardware architecture solutions. An original approach is unfolded which outlines the versatility of high-performance, ready-to-use manufacturer development platforms endowed with powerful hardware resources fully optimized for high-speed processing applications. The theoretical arguments are supported by an intuitive implementation example in which the designer is relieved of any hardware development effort and can concentrate exclusively on software design issues, greatly reducing application development cycles. The experiments prove that such boards available on the market are suitable for a wide range of DNA sequencing and reassembly applications.

  12. Explosion overpressure test series: General-Purpose Heat Source development: Safety Verification Test program

    International Nuclear Information System (INIS)

    Cull, T.A.; George, T.G.; Pavone, D.

    1986-09-01

    The General-Purpose Heat Source (GPHS) is a modular, radioisotope heat source that will be used in radioisotope thermoelectric generators (RTGs) to supply electric power for space missions. The first two uses will be the NASA Galileo and the ESA Ulysses missions. The RTG for these missions will contain 18 GPHS modules, each of which contains four ²³⁸PuO₂-fueled clads and generates 250 W(t). A series of Safety Verification Tests (SVTs) was conducted to assess the ability of the GPHS modules to contain the plutonia in accident environments. Because a launch pad or postlaunch explosion of the Space Transportation System vehicle (space shuttle) is a conceivable accident, the SVT plan included a series of tests that simulated the overpressure exposure the RTG and GPHS modules could experience in such an event. Results of these tests, in which we used depleted UO₂ as a fuel simulant, suggest that exposure to overpressures as high as 15.2 MPa (2200 psi), without subsequent impact, does not result in a release of fuel.

  13. A low-cost general purpose spectral display unit using an IBM PC

    International Nuclear Information System (INIS)

    Robinson, S.L.

    1986-01-01

    Many physics experiments require acquisition and analysis of spectral data. Commercial minicomputer-based multichannel analyzers collect detected counts at various energies, create a histogram of the counts in memory, and display the resultant spectra. They acquire data and provide the user-to-display interface. The system discussed separates functions into the three modular components of data acquisition, storage, and display. This decoupling of functions allows the experimenter to use any number of detectors for data collection before forwarding up to 64 spectra to the display unit, thereby increasing data throughput over that available with commercial systems. An IBM PC was chosen for the low-cost, general purpose display unit. Up to four spectra may be displayed simultaneously in different colors. The histogram saves 1024 channels per detector, 640 of which may be distinctly displayed per spectrum. The IEEE-488 standard provides the data path between the IBM PC and the data collection unit. Data is sent to the PC under interrupt control, using direct memory access. Display manipulations available via keyboard are also discussed.

  14. Modelling of a general purpose irradiation chamber using a Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Dhiyauddin Ahmad Fauzi; Sheik, F.O.A.; Nurul Fadzlin Hasbullah

    2013-01-01

    Full-text: The aim of this research is to simulate the effective use of a general purpose irradiation chamber to contain pure neutron particles obtained from a research reactor. The secondary neutron and gamma dose discharged from the chamber layers will be used as a platform to estimate the safe dimensions of the chamber. The chamber, made up of layers of lead (Pb) shielding, polyethylene (PE) moderator and commercial-grade aluminium (Al) cladding, is proposed for interacting samples with pure neutron particles in a nuclear reactor environment. The estimation was accomplished through simulation based on the general Monte Carlo N-Particle transport code, using the Los Alamos MCNPX software. Simulations were performed on the model of the chamber subjected to high neutron flux radiation and its gamma radiation product. The neutron source model used is based on the neutron source found in the PUSPATI TRIGA MARK II research reactor, which holds a maximum flux value of 1 × 10¹² neutrons/cm²·s. The expected outcomes of this research are zero gamma dose in the core of the chamber and a neutron dose rate of less than 10 μSv/day discharged from the chamber system. (author)

  15. General-purpose heat source project and space nuclear safety and fuels program. Progress report

    International Nuclear Information System (INIS)

    Maraman, W.J.

    1980-02-01

    Studies related to the use of ²³⁸PuO₂ in radioisotopic power systems carried out for the Advanced Nuclear Systems and Projects Division of LASL are presented. The three programs involved are: general-purpose heat source development; space nuclear safety; and fuels program. Three impact tests were conducted to evaluate the effects of a high temperature reentry pulse and the use of CBCF on impact performance. Additionally, two ²³⁸PuO₂ pellets were encapsulated in Ir-0.3% W for impact testing. Results of the clad development test and vent testing are noted. Results of the environmental tests are summarized. Progress on the Stirling isotope power systems test and the status of the improved MHW tests are indicated. The examination of the impact failure of the iridium shell of MHFT-65 at a fuel pass-through continued. A test plan was written for vibration testing of the assembled light-weight radioisotopic heater unit. Progress on fuel processing is reported.

  16. Computing OpenSURF on OpenCL and General Purpose GPU

    Directory of Open Access Journals (Sweden)

    Wanglong Yan

    2013-10-01

    Full Text Available The Speeded-Up Robust Feature (SURF) algorithm is widely used for image feature detection and matching in the computer vision area. Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors. This paper introduces how to implement an open-sourced SURF program, namely OpenSURF, on a general purpose GPU with OpenCL, and discusses the optimizations in terms of the thread architectures and memory models in detail. Our final OpenCL implementation of OpenSURF is on average 37% and 64% faster than the OpenCV SURF v2.4.5 CUDA implementation on NVidia's GTX660 and GTX460SE GPUs, respectively. Our OpenCL program achieved real-time performance (>25 frames per second) for almost all the input images with different sizes from 320*240 to 1024*768 on NVidia's GTX660 GPU, NVidia's GTX460SE GPU and AMD's Radeon HD 6850 GPU. Our OpenCL approach on NVidia's GTX660 GPU is more than 22.8 times faster than its original CPU version on Intel's Dual-Core E5400 2.7G on average.
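
    SURF's box-filter responses are built on integral images, which reduce any rectangular sum to four array lookups. The numpy sketch below shows that data structure only; the OpenCL kernels, work-group layout, and memory-model optimizations discussed in the paper are not reproduced here.

```python
# Integral image: the core data structure behind SURF's fast box filters.
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, zero-padded on top/left."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.random.rand(240, 320)
ii = integral_image(img)
assert np.isclose(box_sum(ii, 10, 20, 50, 80), img[10:50, 20:80].sum())
```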

  17. Litrani: a general purpose Monte-Carlo program simulating light propagation in isotropic or anisotropic media

    International Nuclear Information System (INIS)

    Gentit, F.-X.

    2002-01-01

    Litrani is a general purpose Monte-Carlo program simulating light propagation in any type of setup describable by the shapes provided by ROOT. Each shape may be made of a different material. Dielectric constant, absorption length and diffusion length of materials may depend upon wavelength. Dielectric constant and absorption length may be anisotropic. Each face of a volume is either partially or totally in contact with a face of another volume, or covered with some wrapping having defined characteristics of absorption, reflection and diffusion. When in contact with another face of another volume, the possibility exists to have a thin slice of width d and index n between the two faces. The program has various sources of light: spontaneous photons, photons coming from an optical fibre, photons generated by the crossing of particles or photons generated by an electromagnetic shower. The time and wavelength spectra of emitted photons may reproduce any scintillation spectrum. As detectors, phototubes, APD, or any general type of surface or volume detectors may be specified. The aim is to follow each photon until it is absorbed or detected. Quantities to be delivered by the program are the proportion of photons detected, and the time distribution for the arrival of these, or the various ways photons may be lost

  18. Geometric correction of radiographic images using general purpose image processing program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon

    1994-01-01

    The present study was undertaken to compare images geometrically corrected with general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) with images standardized by an individualized, custom-fabricated alignment instrument. Two non-standardized periapical films with an XCP film holder only were taken at the lower molar portion of 19 volunteers. Two standardized periapical films with a customized XCP film holder with impression material on the bite-block were taken for each person. Geometric correction was performed with the Adobe Photoshop and NIH Image programs. Specifically, the arbitrary image rotation function of 'Adobe Photoshop' and the subtraction-with-transparency function of 'NIH Image' were utilized. The standard deviations of grey values of subtracted images were used to measure image similarity. The average standard deviation of grey values of subtracted images in the standardized group was slightly lower than that of the corrected group. However, the difference was found to be statistically insignificant (p>0.05). It is considered that the 'NIH Image' and 'Adobe Photoshop' programs can be used for correction of non-standardized films taken with an XCP film holder at the lower molar portion.
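
    The two manual operations described above (arbitrary rotation in Adobe Photoshop, subtraction in NIH Image, with the standard deviation of the difference as the similarity measure) can be mimicked with numpy/scipy as in the sketch below; the images are synthetic and the 3-degree misalignment is an invented example.

```python
# Rotate the follow-up image, subtract it from the reference, and use the
# standard deviation of the grey-value difference as a similarity measure.
import numpy as np
from scipy import ndimage

def subtraction_sd(reference, follow_up, angle_deg):
    rotated = ndimage.rotate(follow_up, angle_deg, reshape=False,
                             mode="nearest", order=1)
    diff = reference.astype(np.float64) - rotated.astype(np.float64)
    return diff.std()

# Synthetic example: the "follow-up" film is the reference rotated by 3 degrees.
reference = np.random.default_rng(0).normal(128, 20, (256, 256))
follow_up = ndimage.rotate(reference, -3.0, reshape=False,
                           mode="nearest", order=1)

print("uncorrected SD:", subtraction_sd(reference, follow_up, 0.0))
print("corrected SD:  ", subtraction_sd(reference, follow_up, 3.0))
```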

  19. Design evolution and verification of the general-purpose heat source

    International Nuclear Information System (INIS)

    Schock, A.

    The General-Purpose Heat Source (GPHS) is a radioisotope heat source for use in space power systems. It employs a modular design, to make it adaptable to a wide range of energy conversion systems and power levels. Each 250 W module is completely autonomous, with its own passive safety provisions to prevent fuel release under all abort modes, including atmospheric reentry and earth impact. Prior development tests had demonstrated good impact survival as long as the iridium fuel capsules retained their ductility. This requires high impact temperatures, typically above 900 °C, and reasonably fine grain size, which in turn requires avoidance of excessive operating temperatures and reentry temperatures. These three requirements - on operating, reentry, and impact temperatures - are in mutual conflict, since thermal design changes to improve any one of these temperatures tend to worsen one or both of the others. This conflict creates a difficult design problem, which for a time threatened the success of the program. The present paper describes how this problem was overcome by successive design revisions, supplemented by thermal analyses and confirmatory vibration and impact tests; and how this may be achieved while raising the specific power of the GPHS to 83 W/lb, a 50% improvement over previously flown radioisotope heat sources.

  20. Evaluating the multi-threading countermeasure

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-12-01

    Full Text Available ... to obfuscate individuals' information from people attempting to intercept data. One of these cryptographic algorithms is the AES algorithm [1]. This algorithm has been declared to be the standard protocol to encrypt information by The National Institute... -128 algorithm, four steps were followed: while the AES-128 algorithm was executing the encryption process, the power traces along with the corresponding input text were captured; a power leakage model was implemented where the guess of a key byte...

  1. 15 CFR 744.17 - Restrictions on certain exports and reexports of general purpose microprocessors for “military...

    Science.gov (United States)

    2010-01-01

    ... reexports of general purpose microprocessors for “military end-uses” and to “military end-users.” 744.17...: END-USER AND END-USE BASED § 744.17 Restrictions on certain exports and reexports of general purpose microprocessors for “military end-uses” and to “military end-users.” (a) General prohibition. In addition to the...

  2. A general-purpose development environment for intelligent computer-aided training systems

    Science.gov (United States)

    Savely, Robert T.

    1990-01-01

    Space station training will be a major task, requiring the creation of large numbers of simulation-based training systems for crew, flight controllers, and ground-based support personnel. Given the long duration of space station missions and the large number of activities supported by the space station, the extension of space shuttle training methods to space station training may prove to be impractical. The application of artificial intelligence technology to simulation training can provide the ability to deliver individualized training to large numbers of personnel in a distributed workstation environment. The principal objective of this project is the creation of a software development environment which can be used to build intelligent training systems for procedural tasks associated with the operation of the space station. Current NASA Johnson Space Center projects and joint projects with other NASA operational centers will result in specific training systems for existing space shuttle crew, ground support personnel, and flight controller tasks. Concurrently with the creation of these systems, a general-purpose development environment for intelligent computer-aided training systems will be built. Such an environment would permit the rapid production, delivery, and evolution of training systems for space station crew, flight controllers, and other support personnel. The widespread use of such systems will serve to preserve task and training expertise, support the training of many personnel in a distributed manner, and ensure the uniformity and verifiability of training experiences. As a result, significant reductions in training costs can be realized while safety and the probability of mission success can be enhanced.

  3. The PennBMBI: Design of a General Purpose Wireless Brain-Machine-Brain Interface System.

    Science.gov (United States)

    Liu, Xilin; Zhang, Milin; Subei, Basheer; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan

    2015-04-01

    In this paper, a general purpose wireless Brain-Machine-Brain Interface (BMBI) system is presented. The system integrates four battery-powered wireless devices for the implementation of a closed-loop sensorimotor neural interface, including a neural signal analyzer, a neural stimulator, a body-area sensor node and a graphic user interface implemented on the PC end. The neural signal analyzer features a four channel analog front-end with configurable bandpass filter, gain stage, digitization resolution, and sampling rate. The target frequency band is configurable from EEG to single unit activity. A noise floor of 4.69 μVrms is achieved over a bandwidth from 0.05 Hz to 6 kHz. Digital filtering, neural feature extraction, spike detection, sensing-stimulating modulation, and compressed sensing measurement are realized in a central processing unit integrated in the analyzer. A flash memory card is also integrated in the analyzer. A 2-channel neural stimulator with a compliance voltage up to ± 12 V is included. The stimulator is capable of delivering unipolar or bipolar, charge-balanced current pulses with programmable pulse shape, amplitude, width, pulse train frequency and latency. A multi-functional sensor node, including an accelerometer, a temperature sensor, a flexiforce sensor and a general sensor extension port has been designed. A computer interface is designed to monitor, control and configure all aforementioned devices via a wireless link, according to a custom designed communication protocol. Wireless closed-loop operation between the sensory devices, neural stimulator, and neural signal analyzer can be configured. The proposed system was designed to link two sites in the brain, bridging the brain and external hardware, as well as creating new sensory and motor pathways for clinical practice. Bench test and in vivo experiments are performed to verify the functions and performances of the system.

  4. An auxiliary frequency tracking system for general purpose lock-in amplifiers

    Science.gov (United States)

    Xie, Kai; Chen, Liuhao; Huang, Anfeng; Zhao, Kai; Zhang, Hanlu

    2018-04-01

    Lock-in amplifiers (LIAs) are designed to measure weak signals submerged by noise. This is achieved with a signal modulator to avoid low-frequency noise and a narrow-band filter to suppress out-of-band noise. In asynchronous measurement, even a slight frequency deviation between the modulator and the reference may lead to measurement error because the filter’s passband is not flat. Because many commercial LIAs are unable to track frequency deviations, in this paper we propose an auxiliary frequency tracking system. We analyze the measurement error caused by the frequency deviation and propose both a tracking method and an auto-tracking system. This approach requires only three basic parameters, which can be obtained from any general purpose LIA via its communications interface, to calculate the frequency deviation from the phase difference. The proposed auxiliary tracking system is designed as a peripheral connected to the LIA’s serial port, removing the need for an additional power supply. The test results verified the effectiveness of the proposed system; the modified commercial LIA (model SR-850) was able to track the frequency deviation and continuous drift. For step frequency deviations, a steady tracking error of less than 0.001% was achieved within three adjustments, and the worst tracking accuracy was still better than 0.1% for a continuous frequency drift. The tracking system can be used to expand the application scope of commercial LIAs, especially for remote measurements in which the modulation clock and the local reference are separated.
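
    The tracking idea amounts to estimating a frequency deviation from the rate of change of the measured phase, Δf = (1/2π)·dφ/dt. The sketch below shows only that arithmetic; the phase readings, times, and reference frequency are placeholders for whatever three parameters a particular LIA exposes over its communications interface.

```python
# Estimate a reference-frequency deviation from two LIA phase readings.
# Values and the helper name are hypothetical stand-ins for a real LIA query.
import math

def estimate_deviation(phi1_deg, t1, phi2_deg, t2):
    """Frequency deviation (Hz) from phase readings (deg) at times t1, t2 (s)."""
    dphi = math.radians(phi2_deg - phi1_deg)
    # Wrap to the nearest cycle so slow drifts are tracked correctly.
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi
    return dphi / (2 * math.pi * (t2 - t1))

f_ref = 1000.0                                       # current reference, Hz
delta_f = estimate_deviation(10.0, 0.0, 28.0, 5.0)   # 18 deg drift over 5 s
print(f"deviation = {delta_f*1e3:.3f} mHz, "
      f"corrected reference = {f_ref + delta_f:.6f} Hz")
```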

  5. Use of a general-purpose heat-transfer code for casting simulation

    International Nuclear Information System (INIS)

    Erickson, W.C.

    1975-07-01

    The practical use of numerical techniques in simulating casting solidification dictates that a general purpose heat transfer code be used and that results be obtained in an easy-to-analyze format. Color film plotting routines were developed for use with NASA's CINDA-3G heat transfer code, the combination of which meets the above criteria. The subroutine LQSLTR, written for SINDA, the successor to CINDA-3G, was verified by comparing calculated results obtained using LQSLTR with those obtained using the specific heat method for handling the heat of fusion. Excellent agreement existed when similar data were used. When the more restrictive requirement of a 1 °F melting range was used, comparable results were obtained. Uranium and lead rod castings were cast in instrumented graphite molds and the solidification sequence simulated using CINDA-3G. Discrepancies attributed to initial assumptions of instantaneous mold filling, uniform melt temperature, and intimate metal/mold contact were encountered. Further calculations using a model incorporating a gap between the mold and casting showed that the intimate contact assumption could not be used; a three-dimensional model also showed that the platinum/platinum-10% rhodium thermocouple assemblies were a significant perturbation to the system. An L-shaped steel casting was simulated and the results compared to those reported in the literature. The experimental data for this casting were reproduced within the accuracy permitted by the thermal conductivity of the sand, thus demonstrating that agreement can be obtained when the mold material does not act as a chill. (U.S.)
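
    As a rough illustration of the "specific heat method" mentioned above, the sketch below folds the latent heat of fusion into an apparent heat capacity spread over the melting range, so a plain 1-D explicit conduction solver handles solidification. The material values, grid, and boundary condition are invented and are not the CINDA-3G/SINDA models used in the paper.

```python
# Apparent-heat-capacity treatment of latent heat in a 1-D conduction solver.
import numpy as np

L = 2.5e4                       # latent heat, J/kg (illustrative)
cp = 500.0                      # true specific heat, J/(kg K)
T_sol, T_liq = 1000.0, 1020.0   # solidus/liquidus, K
rho, k = 7000.0, 30.0           # density, conductivity

def apparent_cp(T):
    """Add the latent heat as extra capacity inside the melting range."""
    in_range = (T > T_sol) & (T < T_liq)
    return np.where(in_range, cp + L / (T_liq - T_sol), cp)

nx, dx, dt = 50, 1e-3, 1e-3
T = np.full(nx, 1050.0)   # initially all liquid
T[0] = 300.0              # chilled mold face (held fixed)
for _ in range(20000):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * k * lap / (rho * apparent_cp(T[1:-1]))

print("fraction of nodes solidified:", np.mean(T < T_sol))
```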

  6. VHS-tape system for general purpose computer. For next generation mass storage system

    International Nuclear Information System (INIS)

    Ukai, K.; Takano, M.; Shinohara, M.; Niki, K.; Suzuki, Y.; Hamada, T.; Ogawa, M.

    1994-07-01

    Mass storage is one of the key technologies of next-generation computer systems. A huge amount of data is produced in the field of particle and nuclear physics. These data are raw experimental data, analysis data, Monte Carlo simulation data, etc. We searched for a storage device for these data from the point of view of capacity, price, size, transfer speed, etc. We selected the VHS tape (12.7 mm tape, helical scan) from among many storage devices. Characteristics of the VHS tape are as follows: capacity of 14.5 GB, size of 460 cm³, price of 1,000 yen (S-VHS tape for video use), and a sustained transfer speed of 1.996 MB/sec. Last year, we succeeded in operating the VHS-tape system on a workstation as an I/O device with a read/write speed of 1.5 MB/sec. We have tested the VHS-tape system by connecting it to the channel of the general purpose computer (Fujitsu M-780/10S) in our institute. We obtained read and write speeds of 1.07 MB/sec and 1.72 MB/sec with FORTRAN test programs, respectively. Read speeds of an open-reel tape and a 3480-type cassette tape with the same test programs are 1.13 MB/sec and 2.54 MB/sec, respectively. Write speeds are 1.09 MB/sec and 2.54 MB/sec for the open-reel and 3480 cassette tapes, respectively. Starting the VHS tape for read/write operations takes about 60 seconds. (author)

  7. Utilizing General Purpose Graphics Processing Units to Improve Performance of Computer Modelling and Visualization

    Science.gov (United States)

    Monk, J.; Zhu, Y.; Koons, P. O.; Segee, B. E.

    2009-12-01

    With the introduction of the G8X series of cards by nVidia, an architecture called CUDA was released; virtually all subsequent video cards have had CUDA support. With this new architecture nVidia provided extensions for C/C++ that create an Application Programming Interface (API) allowing code to be executed on the GPU. Since then the concept of GPGPU (general purpose graphics processing unit) computing has been growing; the idea is that the GPU is very good at algebra and at running things in parallel, so we should make use of that power for other applications. This is highly appealing in the area of geodynamic modeling, as multiple parallel solutions of the same differential equations at different points in space lead to a large speedup in simulation speed. Another benefit of CUDA is a programmatic method of transferring large amounts of data between the computer's main memory and the dedicated GPU memory located on the video card. In addition to being able to compute and render on the video card, the CUDA framework allows for a large speedup in the situation, such as with a tiled display wall, where the rendered pixels are to be displayed in a different location than where they are rendered. A CUDA extension for VirtualGL was developed allowing for faster read back at high resolutions. This paper examines several aspects of rendering OpenGL graphics on large displays using VirtualGL and VNC. It demonstrates how performance can be significantly improved in rendering on a tiled monitor wall. We present a CUDA-enhanced version of VirtualGL as well as the advantages to having multiple VNC servers. It will discuss restrictions caused by read back and blitting rates and how they are affected by different sizes of virtual displays being rendered.

  8. Variable Conductance Heat Pipe Cooling of Stirling Convertor and General Purpose Heat Source

    Science.gov (United States)

    Tarau, Calin; Schwendeman, Carl; Anderson, William G.; Cornell, Peggy A.; Schifer, Nicholas A.

    2013-01-01

    In a Stirling Radioisotope Power System (RPS), heat must be continuously removed from the General Purpose Heat Source (GPHS) modules to maintain the modules and surrounding insulation at acceptable temperatures. The Stirling convertor normally provides this cooling. If the Stirling convertor stops in the current system, the insulation is designed to spoil, preventing damage to the GPHS at the cost of an early termination of the mission. An alkali-metal Variable Conductance Heat Pipe (VCHP) can be used to passively allow multiple stops and restarts of the Stirling convertor. In a previous NASA SBIR Program, Advanced Cooling Technologies, Inc. (ACT) developed a series of sodium VCHPs as backup cooling systems for Stirling RPS. The operation of these VCHPs was demonstrated using Stirling heater head simulators and GPHS simulators. In the most recent effort, a sodium VCHP with a stainless steel envelope was designed, fabricated and tested at NASA Glenn Research Center (GRC) with a Stirling convertor for two concepts; one for the Advanced Stirling Radioisotope Generator (ASRG) back up cooling system and one for the Long-lived Venus Lander thermal management system. The VCHP is designed to activate and remove heat from the stopped convertor at a 19 degC temperature increase from the nominal vapor temperature. The 19 degC temperature increase from nominal is low enough to avoid risking standard ASRG operation and spoiling of the Multi-Layer Insulation (MLI). In addition, the same backup cooling system can be applied to the Stirling convertor used for the refrigeration system of the Long-lived Venus Lander. The VCHP will allow the refrigeration system to: 1) rest during transit at a lower temperature than nominal; 2) pre-cool the modules to an even lower temperature before the entry in Venus atmosphere; 3) work at nominal temperature on Venus surface; 4) briefly stop multiple times on the Venus surface to allow scientific measurements. This paper presents the experimental

  9. Upscaling from research watersheds: an essential stage of trustworthy general-purpose hydrologic model building

    Science.gov (United States)

    McNamara, J. P.; Semenova, O.; Restrepo, P. J.

    2011-12-01

    Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed may be too specific to that watershed and not representative even of the larger-scale watershed that contains it. Thus, models developed based on those partial observations may not be suitable for general hydrologic use. Therefore, demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development we evaluate the scaling of model concepts and parameters in a wide range of hydrologic landscapes. All models, either lumped or distributed, are based on a discretization concept. It is common practice that watersheds are discretized into so-called hydrologic units or hydrologic landscapes possessing assumed homogeneous hydrologic functioning. If a model structure is fixed, the difference in hydrologic functioning (difference in hydrologic landscapes) should be reflected by a specific set of model parameters. Research watersheds provide the possibility for reasonably detailed combining of processes into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. Here, by upscaling we mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km²) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in

  10. The Chronic Kidney Disease Model: A General Purpose Model of Disease Progression and Treatment

    Directory of Open Access Journals (Sweden)

    Patel Uptal D

    2011-06-01

    Full Text Available Abstract Background Chronic kidney disease (CKD) is the focus of recent national policy efforts; however, decision makers must account for multiple therapeutic options, comorbidities and complications. The objective of the Chronic Kidney Disease model is to provide guidance to decision makers. We describe this model and give an example of how it can inform clinical and policy decisions. Methods Monte Carlo simulation of CKD natural history and treatment. Health states include myocardial infarction, stroke with and without disability, congestive heart failure, CKD stages 1-5, bone disease, dialysis, transplant and death. Each cycle is 1 month. Projections account for race, age, gender, diabetes, proteinuria, hypertension, cardiac disease, and CKD stage. Treatment strategies include hypertension control, diabetes control, use of HMG-CoA reductase inhibitors, use of angiotensin converting enzyme inhibitors, nephrology specialty care, CKD screening, and a combination of these. The model architecture is flexible, permitting updates as new data become available. The primary outcome is quality-adjusted life years (QALYs). Secondary outcomes include health state events and CKD progression rate. Results The model was validated for a GFR change/year of -3.0 ± 1.9 vs. -1.7 ± 3.4 (in the AASK trial), and annual myocardial infarction and mortality rates of 3.6 ± 0.9% and 1.6 ± 0.5% vs. 4.4% and 1.6% in the Go study. To illustrate the model's utility we estimated the lifetime impact of a hypothetical treatment for primary prevention of vascular disease. As vascular risk declined, QALYs improved but the risk of dialysis increased. At baseline, 20% and 60% reduction: QALYs = 17.6, 18.2, and 19.0 and dialysis = 7.7%, 8.1%, and 10.4%, respectively. Conclusions The CKD Model is a valid, general purpose model intended as a resource to inform clinical and policy decisions improving CKD care. Its value as a tool is illustrated in our example which projects a relationship between
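
    A heavily reduced sketch of this kind of monthly-cycle state-transition simulation is shown below: three states, invented transition probabilities, and QALY accumulation. It only illustrates the mechanics; the published model's states, covariates, and calibrated rates are not reproduced.

```python
# Toy monthly-cycle Monte Carlo state-transition model with QALY accounting.
# States, transition probabilities and utility weights are invented.
import random

P = {
    "CKD stage 3": {"CKD stage 3": 0.9955, "dialysis": 0.0015, "death": 0.0030},
    "dialysis":    {"dialysis": 0.985, "death": 0.015},
    "death":       {"death": 1.0},
}
UTILITY = {"CKD stage 3": 0.85, "dialysis": 0.60, "death": 0.0}  # QALY weights

def simulate_patient(months=480, rng=random.Random(1)):
    state, qaly = "CKD stage 3", 0.0
    for _ in range(months):
        if state == "death":
            break
        qaly += UTILITY[state] / 12.0
        r, cum = rng.random(), 0.0
        for nxt, p in P[state].items():
            cum += p
            if r < cum:
                state = nxt
                break
    return qaly, state

results = [simulate_patient() for _ in range(5000)]
mean_qaly = sum(q for q, _ in results) / len(results)
died = sum(s == "death" for _, s in results) / len(results)
print(f"mean QALYs = {mean_qaly:.1f}, died within 40 years: {died:.1%}")
```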

  11. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
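
    The load-prediction dynamic scheduling idea can be sketched independently of OpenMP/CUDA: before each batch, the split between the two devices is chosen from the throughputs measured in the previous batch, so the faster device receives proportionally more cells. The "devices" below are plain Python stand-ins with assumed speeds; the paper's actual kernels are not reproduced.

```python
# Load-prediction dynamic scheduling sketch: adjust the CPU/GPU work split
# each step from the throughput observed in the previous step.
import time

def run_on_cpu(n):          # stand-in for the multi-core CPU portion
    time.sleep(n * 2e-6)

def run_on_gpu(n):          # stand-in for the GPU portion (assumed ~5x faster)
    time.sleep(n * 4e-7)

total_cells, gpu_share = 200_000, 0.5      # start with an even split
for step in range(5):
    n_gpu = int(total_cells * gpu_share)
    n_cpu = total_cells - n_gpu

    t0 = time.perf_counter(); run_on_gpu(n_gpu); t_gpu = time.perf_counter() - t0
    t0 = time.perf_counter(); run_on_cpu(n_cpu); t_cpu = time.perf_counter() - t0

    # Predict the next split from observed throughput (cells per second).
    thr_gpu, thr_cpu = n_gpu / t_gpu, n_cpu / t_cpu
    gpu_share = thr_gpu / (thr_gpu + thr_cpu)
    print(f"step {step}: gpu_share -> {gpu_share:.2f}")
```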

  12. How did the General Purpose Technology ’Electricity’ contribute to the Second Industrial Revolution (I): The Power Engines.

    NARCIS (Netherlands)

    van der Kooij, B.J.G.

    2016-01-01

    The concept of the General Purpose Technology (GPT) of the late 1990s is a culmination of many evolutionary views in innovation-thinking. By definition the GPT considers the technical, social, and economic effects of meta-technologies like steam-technology and electric technology. This paper uses

  13. [Application of the grayscale standard display function to general purpose liquid-crystal display monitors for clinical use].

    Science.gov (United States)

    Tanaka, Nobukazu; Naka, Kentaro; Sueoka, Masaki; Higashida, Yoshiharu; Morishita, Junji

    2010-01-20

    Interpretations of medical images have been shifting to soft-copy readings with liquid-crystal display (LCD) monitors. The display function of a medical-grade LCD monitor used for soft-copy readings is recommended to be calibrated to the grayscale standard display function (GSDF) in accordance with the guidelines of Japan and other countries. In this study, the luminance and display function of five models of eight general purpose LCD monitors were measured to gain an understanding of their characteristics. Moreover, the display function (gamma 2.2 or gamma 1.8) of the general purpose LCD monitors was converted to GSDF through the use of a look-up table, and the detectability of a simulated lung nodule in a chest x-ray image was examined. As a result, the maximum luminance, contrast ratio, and luminance uniformity of the general purpose LCD monitors, except for one model of two LCD monitors, met the management grade 1 standard in the guideline JESRA X-0093-2005. In addition, the detectability of the simulated lung nodule in the mediastinal space was clearly improved by converting the display function of a general purpose LCD monitor into GSDF.
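
    The LUT-based conversion described above can be outlined as follows: the native gamma-2.2 response is remapped so that the displayed luminances follow a perceptually linear target. In the sketch, gsdf_luminance() is an explicitly labeled placeholder (a log-linear stand-in); a real calibration would substitute the DICOM PS3.14 GSDF formula, and the luminance range is an assumed example.

```python
# Build a look-up table that remaps a gamma-2.2 display toward a
# perceptually-linear target. gsdf_luminance() is a placeholder only.
import numpy as np

L_min, L_max = 0.8, 170.0          # assumed monitor luminance range, cd/m^2

def native_luminance(v, gamma=2.2):
    """Native gamma-2.2 response for drive value v in [0, 1]."""
    return L_min + (L_max - L_min) * v ** gamma

def gsdf_luminance(p):
    """Placeholder perceptually-linear target (log-linear in luminance).
    Replace with the DICOM PS3.14 GSDF JND-index formula for real use."""
    return L_min * (L_max / L_min) ** p

levels = np.arange(256)
target = gsdf_luminance(levels / 255.0)
native = native_luminance(np.linspace(0, 1, 4096))   # fine grid of drive levels
lut = np.array([int(np.argmin(np.abs(native - t)) * 255 / 4095) for t in target])
print(lut[:8], lut[-8:])
```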

  14. RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles

    DEFF Research Database (Denmark)

    Bailey, Nicholas; Ingebrigtsen, Trond; Hansen, Jesper Schmidt

    2017-01-01

    RUMD is a general purpose, high-performance molecular dynamics (MD) simulation package running on graphical processing units (GPU’s). RUMD addresses the challenge of utilizing the many-core nature of modern GPU hardware when simulating small to medium system sizes (roughly from a few thousand up...

  15. How did the General Purpose Technology Electricity contribute to the Second Industrial Revolution (II): The Communication Engines

    NARCIS (Netherlands)

    van der Kooij, B.J.G.

    2017-01-01

    The concept of the General Purpose Technology (GPT) of the late 1990s is a culmination of many evolutionary views in innovation-thinking. By definition the GPT considers the technical, social, and economic effects of meta-technologies like steam-technology and electric technology. This paper uses

  16. Apple-CORE: Microgrids of SVP cores: flexible, general-purpose, fine-grained hardware concurrency management

    NARCIS (Netherlands)

    Poss, R.; Lankamp, M.; Yang, Q.; Fu, J.; van Tol, M.W.; Jesshope, C.; Nair, S.

    2012-01-01

    To harness the potential of CMPs for scalable, energy-efficient performance in general-purpose computers, the Apple-CORE project has co-designed a general machine model and concurrency control interface with dedicated hardware support for concurrency control across multiple cores. Its SVP interface

  17. Operation of a general purpose stepping motor-encoder positioning subsystem at the National Synchrotron Light Source

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1985-11-01

    Four copies of a general purpose subsystem for mechanical positioning of detectors, samples, and beam line optical elements which constitute experiments at the National Synchrotron Light Source facility of Brookhaven National Laboratory have been constructed and placed into operation. Construction of a fifth subsystem unit is nearing completion. The subsystems effect mechanical positioning by controlling a set of stepping motor-encoder pairs. The units are general purpose in the sense that they receive commands over a 9600 baud asynchronous serial line compatible with the RS-232-C electrical signal standard, generate TTL-compatible streams of stepping pulses which can be used with a wide variety of stepping motors, and read back position values from a number of different types and models of position encoder. The basic structure of the motor controller subsystem is briefly reviewed. Additions to the subsystem made in response to problems indicated by actual operation of the four installed units are described in more detail.

  18. APL/JHU free flight tests of the General Purpose Heat Source module. Testing: 5-7 March 1984

    International Nuclear Information System (INIS)

    Baker, W.M. II.

    1984-01-01

    Purpose of the test was to obtain statistical information on the dynamics of the General Purpose Heat Source (GPHS) module at terminal speeds. Models were designed to aerodynamically and dynamically represent the GPHS module. Normal and high speed photographic coverage documented the motion of the models. This report documents test parameters and techniques for the free-spin tests. It does not include data analysis

  19. Effects of detector-source distance and detector bias voltage variations on time resolution of general purpose plastic scintillation detectors.

    Science.gov (United States)

    Ermis, E E; Celiktas, C

    2012-12-01

    Effects of source-detector distance and the detector bias voltage variations on time resolution of a general purpose plastic scintillation detector such as BC400 were investigated. (133)Ba and (207)Bi calibration sources with and without collimator were used in the present work. Optimum source-detector distance and bias voltage values were determined for the best time resolution by using leading edge timing method. Effect of the collimator usage on time resolution was also investigated. Copyright © 2012 Elsevier Ltd. All rights reserved.
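
    A minimal model of the leading-edge timing method is sketched below: each detector pulse is crossed with a fixed threshold and the spread of the time differences between two detectors gives the coincidence time resolution. The pulse shape, noise level, threshold, and amplitudes are illustrative assumptions, not the BC400 measurement conditions.

```python
# Leading-edge timing toy model: threshold crossing on noisy scintillator-like
# pulses, then the FWHM of the two-detector time-difference distribution.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 100, 0.1)          # ns, 100 ps sampling

def pulse(t0, amplitude):
    """Simple pulse shape: fast rise, exponential decay, plus noise."""
    s = np.where(t > t0,
                 (1 - np.exp(-(t - t0) / 0.9)) * np.exp(-(t - t0) / 2.4),
                 0.0)
    return amplitude * s + rng.normal(0, 0.01, t.size)

def leading_edge(waveform, threshold=0.1):
    return t[np.argmax(waveform > threshold)]   # first threshold crossing

diffs = []
for _ in range(2000):
    t_a = leading_edge(pulse(20.0, rng.uniform(0.5, 1.0)))
    t_b = leading_edge(pulse(20.0, rng.uniform(0.5, 1.0)))
    diffs.append(t_a - t_b)

sigma = np.std(diffs)
print(f"time resolution (FWHM) ≈ {2.355 * sigma:.2f} ns")
```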

  20. Application of a general-purpose scintigraphic scanner to transverse-section (tomographic) gamma-ray imaging

    International Nuclear Information System (INIS)

    Bradstock, P.A.; Milward, R.C.

    1976-01-01

    The paper describes the recent application of a general-purpose commercial scintigraphic scanner to transverse-section radioisotope tomography. The principle of the method is to obtain the distribution of radioactive material in a thin transverse slice of the body or brain, from a mathematical reconstruction using the measured transverse projections of the activity within that slice. The usefulness of the radioisotope section-scanning technique for clinical diagnosis, as evidenced from one year's use of the machine at the Midland Centre for Neurology and Neurosurgery, Birmingham, U.K., is briefly discussed. (orig.) [de
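
    The reconstruction principle mentioned above, in its simplest unfiltered back-projection form, is sketched below: each measured projection is smeared back across the image plane along its viewing angle. Real scanners add a filtering step; the phantom, angles, and grid size here are invented for illustration.

```python
# Unfiltered back-projection of simulated projections of a synthetic phantom.
import numpy as np
from scipy import ndimage

size, n_angles = 64, 36
phantom = np.zeros((size, size))
phantom[24:40, 28:36] = 1.0                      # a simple "hot" region

angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
projections = [ndimage.rotate(phantom, a, reshape=False, order=1).sum(axis=0)
               for a in angles]                  # line integrals per angle

recon = np.zeros_like(phantom)
for a, proj in zip(angles, projections):
    smear = np.tile(proj, (size, 1))             # smear each projection back
    recon += ndimage.rotate(smear, -a, reshape=False, order=1)
recon /= n_angles
print("reconstruction peak at", np.unravel_index(recon.argmax(), recon.shape))
```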

  1. General-purpose heat source project and space nuclear safety and fuels program. Progress reportt, January 1980

    International Nuclear Information System (INIS)

    Maraman, W.J.

    1980-04-01

    This formal monthly report covers the studies related to the use of ²³⁸PuO₂ in radioisotopic power systems carried out for the Advanced Nuclear Systems and Projects Division of the Los Alamos Scientific Laboratory. The two programs involved are the general-purpose heat source development and space nuclear safety and fuels. Most of the studies discussed here are of a continuing nature. Results and conclusions described may change as the work continues. Published reference to the results cited in this report should not be made without the explicit permission of the person in charge of the work.

  2. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer. The parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code was parallelized in this study. A technique was applied that maps histories to processors dynamically and assigns the control process to a dedicated processor. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
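
    The scheduling idea described above can be illustrated with a generic Monte Carlo toy: particle histories are handed out in small batches to whichever worker process becomes free (dynamic mapping), while the parent process plays the controller role of collecting tallies. This is not MCNP4; the slab problem and batch sizes are invented.

```python
# Dynamic mapping of history batches to worker processes for a toy 1-D
# slab-transmission Monte Carlo problem. Not MCNP4; illustration only.
import math
import random
from multiprocessing import Pool

SLAB_THICKNESS = 3.0   # slab thickness in mean free paths (illustrative)

def run_batch(args):
    """Track a batch of histories through the slab; return transmissions."""
    seed, n_histories = args
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x, direction = 0.0, 1.0
        while 0.0 <= x < SLAB_THICKNESS:
            x += direction * -math.log(rng.random())   # exponential free flight
            if rng.random() < 0.5:                      # crude scattering model
                direction = rng.choice((-1.0, 1.0))
        if x >= SLAB_THICKNESS:
            transmitted += 1
    return transmitted

if __name__ == "__main__":
    batches = [(seed, 10_000) for seed in range(64)]    # 64 small batches
    with Pool(processes=8) as pool:
        # imap_unordered hands a new batch to whichever worker finishes first,
        # i.e. the dynamic history-to-processor mapping described above.
        total = sum(pool.imap_unordered(run_batch, batches))
    print("transmission probability ≈", total / (64 * 10_000))
```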

  3. Application of a general purpose user's version of the EGS4 code system to a photon skyshine benchmarking calculation

    International Nuclear Information System (INIS)

    Nojiri, I.; Fukasaku, Y.; Narita, O.

    1994-01-01

    A general purpose user's version of the EGS4 code system has been developed to make EGS4 easily applicable to the safety analysis of nuclear fuel cycle facilities. One such application involves the determination of the skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked against the Kansas State University (KSU) photon skyshine experiment of 1977. The results of the simulation showed that this version of EGS4 would be applicable to the skyshine calculation. (author)

  4. Multi-Threaded Algorithms for General Purpose Graphics Processor Units in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz level-1 acceptance rate to 1 kHz for recording, requiring an average per-event processing time of ~250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant ...

  5. General-purpose heat source: Research and development program, radioisotope thermoelectric generator/thin fragment impact test

    International Nuclear Information System (INIS)

    Reimus, M.A.H.; Hinckley, J.E.

    1996-11-01

    The general-purpose heat source provides power for space missions by transmitting the heat of 238Pu decay to an array of thermoelectric elements in a radioisotope thermoelectric generator (RTG). Because the potential for a launch abort or return from orbit exists for any space mission, the heat source response to credible accident scenarios is being evaluated. This test was designed to provide information on the response of a loaded RTG to impact by a fragment similar to the type of fragment produced by breakup of the spacecraft propulsion module system. The results of this test indicated that impact by a thin aluminum fragment traveling at 306 m/s may result in significant damage to the converter housing, failure of one fueled clad, and release of a small quantity of fuel.

  6. RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles

    Directory of Open Access Journals (Sweden)

    Nicholas P. Bailey, Trond S. Ingebrigtsen, Jesper Schmidt Hansen, Arno A. Veldhorst, Lasse Bøhling, Claire A. Lemarchand, Andreas E. Olsen, Andreas K. Bacher, Lorenzo Costigliola, Ulf R. Pedersen, Heine Larsen, Jeppe C. Dyre, Thomas B. Schrøder

    2017-12-01

    Full Text Available RUMD is a general purpose, high-performance molecular dynamics (MD) simulation package running on graphical processing units (GPUs). RUMD addresses the challenge of utilizing the many-core nature of modern GPU hardware when simulating small to medium system sizes (roughly from a few thousand up to a hundred thousand particles). It has a performance that is comparable to other GPU-MD codes at large system sizes and substantially better at smaller sizes. RUMD is open-source and consists of a library written in C++ and the CUDA extension to C, an easy-to-use Python interface, and a set of tools for set-up and post-simulation data analysis. The paper describes RUMD's main features, optimizations and performance benchmarks.

  7. VoxelMages: a general-purpose graphical interface for designing geometries and processing DICOM images for PENELOPE.

    Science.gov (United States)

    Giménez-Alventosa, V; Ballester, F; Vijande, J

    2016-12-01

    The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages also allows the user to import DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Effects of detector–source distance and detector bias voltage variations on time resolution of general purpose plastic scintillation detectors

    International Nuclear Information System (INIS)

    Ermis, E.E.; Celiktas, C.

    2012-01-01

    Effects of source-detector distance and detector bias voltage variations on the time resolution of a general purpose plastic scintillation detector such as BC400 were investigated. 133Ba and 207Bi calibration sources, with and without a collimator, were used in the present work. Optimum source-detector distance and bias voltage values were determined for the best time resolution using the leading-edge timing method. The effect of collimator usage on time resolution was also investigated. - Highlights: ► Effect of the source-detector distance on time spectra was investigated. ► Effect of the detector bias voltage variations on time spectra was examined. ► Optimum detector–source distance was determined for the best time resolution. ► Optimum detector bias voltage was determined for the best time resolution. ► 133Ba and 207Bi radioisotopes were used.

  9. Human factors in equipment development for the Space Shuttle - A study of the general purpose work station

    Science.gov (United States)

    Junge, M. K.; Giacomi, M. J.

    1981-01-01

    The results of a human factors test to assess the suitability of a prototype general purpose work station (GPWS) for biosciences experiments on the fourth Spacelab mission are reported. The evaluation was performed to verify that users would interact optimally with the GPWS configuration and instrumentation. Six male subjects sat on stools positioned to allow assimilation of the zero-g body posture. Trials were run concerning the operator viewing angles facing the console, the console color, procedures for injecting rats with dye, a rat blood cell count, mouse dissection, squirrel monkey transfer, and plant fixation. The trials were run for several days in order to gauge improvement or poor performance conditions. Better access to the work surface was found necessary, together with more distinct and better-located LEDs, better access window latches, clearer sequences on control buttons, color-coded sequential buttons, and provisions for an intercom system when operators of the GPWS work in tandem.

  10. Evaluation of Aqueous and Powder Processing Techniques for Production of Pu-238-Fueled General Purpose Heat Sources

    Energy Technology Data Exchange (ETDEWEB)

    2008-06-01

    This report evaluates alternative processes that could be used to produce Pu-238 fueled General Purpose Heat Sources (GPHS) for radioisotope thermoelectric generators (RTG). The current process for fabricating GPHSs has remained essentially unchanged since its development in the 1970s. Meanwhile, 30 years of technological advancements have been made in the fields of chemistry, manufacturing, ceramics, and control systems. At the Department of Energy's request, alternate manufacturing methods were compared to current methods to determine if alternative fabrication processes could reduce the hazards, especially the production of respirable fines, while producing an equivalent GPHS product. An expert committee performed the evaluation with input from four national laboratories experienced in Pu-238 handling.

  11. Duplication of complete dentures using general-purpose handheld optical scanner and 3-dimensional printer: Introduction and clinical considerations.

    Science.gov (United States)

    Kurahashi, Kosuke; Matsuda, Takashi; Goto, Takaharu; Ishida, Yuichi; Ito, Teruaki; Ichikawa, Tetsuo

    2017-01-01

    This report introduces a new clinical procedure for fabricating duplicates of complete dentures by bite-pressure impression using digital technology, and discusses its clinical significance. The denture is placed on a rotary table and the 3-dimensional form of the denture is digitized using a general-purpose handheld optical scanner. The duplicate denture is made of polylactic acid by a 3-dimensional printer using the 3-dimensional data. This procedure has the advantages of wasting less material, employing less human power, decreasing treatment time at the chairside, lowering the rates of contamination, and allowing the duplicate to be fabricated readily at the time of the treatment visit. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  12. In vivo dosimetry in intraoperative electron radiotherapy: microMOSFETs, radiochromic films and a general-purpose linac.

    Science.gov (United States)

    López-Tarjuelo, Juan; Bouché-Babiloni, Ana; Morillo-Macías, Virginia; de Marco-Blancas, Noelia; Santos-Serra, Agustín; Quirós-Higueras, Juan David; Ferrer-Albiach, Carlos

    2014-10-01

    In vivo dosimetry is desirable for the verification, recording, and eventual correction of treatment in intraoperative electron radiotherapy (IOERT). Our aim is to share our experience of metal oxide semiconductor field-effect transistors (MOSFETs) and radiochromic films with patients undergoing IOERT using a general-purpose linac. We used MOSFETs inserted into sterile bronchus catheters and radiochromic films that were cut, digitized, and sterilized by means of gas plasma. In all, 59 measurements were taken from 27 patients involving 15 primary tumors (seven breast and eight non-breast tumors) and 12 relapses. Data were subjected to an outliers' analysis and classified according to their compatibility with the relevant doses. Associations were sought regarding the type of detector, breast and non-breast irradiation, and the radiation oncologist's assessment of the difficulty of detector placement. At the same time, 19 measurements were carried out at the tumor bed with both detectors. MOSFET measurements (D = 93.5%, s_D = 6.5%) were not significantly shifted from film measurements (D = 96.0%, s_D = 5.5%; p = 0.109), and no associations were found (p = 0.526, p = 0.295, and p = 0.501, respectively). As regards measurements performed at the tumor bed with both detectors, MOSFET measurements (D = 95.0%, s_D = 5.4%) were not significantly shifted from film measurements (D = 96.4%, s_D = 5.0%; p = 0.363). In vivo dosimetry can produce satisfactory results at every studied location with a general-purpose linac. Detector choice should depend on user factors, not on the detector performance itself. Surgical team collaboration is crucial to success.

  13. Deposition, characterization, and in vivo performance of parylene coating on general-purpose silicone for examining potential biocompatible surface modifications

    International Nuclear Information System (INIS)

    Chou, Chia-Man; Shiao, Chiao-Ju; Chung, Chi-Jen; He, Ju-Liang

    2013-01-01

    In this study, a thorough investigation of parylene coatings was conducted, as follows: microstructure (i.e., X-ray diffractometer (XRD) and cold field emission scanning electron microscope (FESEM)), mechanical properties (i.e., pencil hardness and cross-cut adhesion test), surface properties (i.e., water contact angle measurement, IR, and X-ray photoelectron spectroscopy (XPS)), and biocompatibility tests (i.e., fibroblast cell culture, platelet adhesion, and animal studies). The results revealed that parylene, a crystalline and brittle coating, exhibited satisfactory film adhesion and relative hydrophobicity, thereby contributing to its effective barrier properties. Fibroblast cell culturing on the parylene-deposited specimen demonstrated improved cell proliferation and blood compatibility equivalent or superior to that of the medical-grade silicone currently used clinically. In the animal study, parylene coatings exhibited subcutaneous inflammatory reactions similar to those of the medical-grade silicone. Both in vitro and in vivo tests demonstrated the satisfactory biocompatibility of parylene coatings. - Highlights: • A complete investigation to identify the characteristics of parylene coatings on general-purpose silicones. • Microstructures, surface properties and mechanical properties of parylene coatings were examined. • In vitro (cell culture, platelet adhesion) tests and animal studies revealed satisfactory biocompatibility. • An alternative to medical-grade silicones is expected to be obtained.

  14. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  15. Architecture of a general purpose embedded Slow-Control Adapter ASIC for future high-energy physics experiments

    Science.gov (United States)

    Gabrielli, Alessandro; Loddo, Flavio; Ranieri, Antonio; De Robertis, Giuseppe

    2008-10-01

    This work is aimed at defining the architecture of a new digital ASIC, namely Slow-Control Adapter (SCA), which will be designed in a commercial 130-nm CMOS technology. This chip will be embedded within a high-speed data acquisition optical link (GBT) to control and monitor the front-end electronics in future high-energy physics experiments. The GBT link provides a transparent transport layer between the SCA and control electronics in the counting room. The proposed SCA supports a variety of common bus protocols to interface with end-user general-purpose electronics. Between the GBT and the SCA a standard 100 Mb/s IEEE-802.3 compatible protocol will be implemented. This standard protocol allows off-line tests of the prototypes using commercial components that support the same standard. The project is justified because embedded applications in modern large HEP experiments require particular care to assure the lowest possible power consumption, still offering the highest reliability demanded by very large particle detectors.

  16. Screening tests in toxicity or drug effect studies with use of centrifichem general-purpose spectrophotometeric analyzer

    International Nuclear Information System (INIS)

    Nagy, B.; Bercz, J.P.

    1986-01-01

    The CentrifiChem System 400 general-purpose spectrophotometric analyzer, which can process 30 samples simultaneously and read reactions within milliseconds, was used for toxicity studies. Organic and inorganic chemicals were screened for inhibitory action on the hydrolytic activity of sarcoplasmic reticulum (SR) Ca,Mg-ATPase, sarcolemmal (SL) Na,K-ATPase, and mitochondrial ATPase (M). SR and SL were prepared from rabbit muscles, Na,K-ATPase from pig kidneys, and M from pig hearts. The pseudosubstrates paranitrophenyl phosphate and 2,4-dinitrophenyl phosphate, both proven high-energy phosphate substitutes for ATPase-coupled ion transfer, were used. The reaction rates were followed spectrophotometrically at 405 nm by measuring the accumulation of yellow nitrophenolate ions. The reported calcium-transfer-to-hydrolysis coupling ratio of 2:1 was ascertained with the use of 45Ca in the case of SR. Inhibition constants (pI) on SR, SL, and M for the pseudosubstrate hydrolysis will be given for over 20 chemicals tested. The applicability of the system to general toxicity testing and to general cardio-effective drug screening will be presented.

  17. Deposition, characterization, and in vivo performance of parylene coating on general-purpose silicone for examining potential biocompatible surface modifications

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Chia-Man [Division of Pediatric Surgery, Department of Surgery, Taichung Veterans General Hospital, 160, Sec. 3, Taichung Port Rd., Taichung 40705, Taiwan, ROC (China); Department of Medicine, National Yang-Ming University, 155, Sec. 2, Linong Street, Taipei 11221, Taiwan, ROC (China); Shiao, Chiao-Ju [Department of Materials Science and Engineering, Feng Chia University, 100, Wen-Hwa Rd., Taichung 40724, Taiwan, ROC (China); Chung, Chi-Jen, E-mail: cjchung@seed.net.tw [Department of Dental Technology and Materials Science, Central Taiwan University of Science and Technology, 666 Buzih Rd., Beitun District, Taichung 40601, Taiwan, ROC (China); He, Ju-Liang [Department of Materials Science and Engineering, Feng Chia University, 100, Wen-Hwa Rd., Taichung 40724, Taiwan, ROC (China)

    2013-12-31

    In this study, a thorough investigation of parylene coatings was conducted, as follows: microstructure (i.e., X-ray diffractometer (XRD) and cold field emission scanning electron microscope (FESEM)), mechanical properties (i.e., pencil hardness and cross-cut adhesion test), surface properties (i.e., water contact angle measurement, IR, and X-ray photoelectron spectroscopy (XPS)), and biocompatibility tests (i.e., fibroblast cell culture, platelet adhesion, and animal studies). The results revealed that parylene, a crystalline and brittle coating, exhibited satisfactory film adhesion and relative hydrophobicity, thereby contributing to its effective barrier properties. Fibroblast cell culturing on the parylene-deposited specimen demonstrated improved cell proliferation and blood compatibility equivalent or superior to that of the medical-grade silicone currently used clinically. In the animal study, parylene coatings exhibited subcutaneous inflammatory reactions similar to those of the medical-grade silicone. Both in vitro and in vivo tests demonstrated the satisfactory biocompatibility of parylene coatings. - Highlights: • A complete investigation to identify the characteristics of parylene coatings on general-purpose silicones. • Microstructures, surface properties and mechanical properties of parylene coatings were examined. • In vitro (cell culture, platelet adhesion) tests and animal studies revealed satisfactory biocompatibility. • An alternative to medical-grade silicones is expected to be obtained.

  18. Development of a general-purpose method for cell purification using Cre/loxP-mediated recombination.

    Science.gov (United States)

    Kuroki, Shunsuke; Akiyoshi, Mika; Ideguchi, Ko; Kitano, Satsuki; Miyachi, Hitoshi; Hirose, Michiko; Mise, Nathan; Abe, Kuniya; Ogura, Atsuo; Tachibana, Makoto

    2015-06-01

    A mammalian body is composed of more than 200 different types of cells. The purification of a certain cell type from tissues/organs enables a wide variety of studies. One popular cell purification method is immunological isolation, using antibodies against specific cell surface antigens. However, this is not a general-purpose method, since suitable antigens have not been found for certain cell types, including embryonic gonadal somatic cells and Sertoli cells. To address this issue, we established a knock-in mouse line, named R26 KI, designed to express the human cell surface antigen hCD271 through Cre/loxP-mediated recombination. First, we used the R26 KI mouse line to purify embryonic gonadal somatic cells. Gonadal somatic cells were purified from the R26 KI; Nr5a1-Cre-transgenic (tg) embryos almost as efficiently as from Nr5a1-hCD271-tg embryos. Second, we used the R26 KI mouse line to purify Sertoli cells successfully from R26 KI; Amh-Cre-tg testes. In summary, we propose that the R26 KI mouse line is a powerful tool for the purification of various cell types. © 2015 Wiley Periodicals, Inc.

  19. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.

  20. Architecture of a general purpose embedded Slow-Control Adapter ASIC for future high-energy physics experiments

    International Nuclear Information System (INIS)

    Gabrielli, Alessandro; Loddo, Flavio; Ranieri, Antonio; De Robertis, Giuseppe

    2008-01-01

    This work is aimed at defining the architecture of a new digital ASIC, namely Slow-Control Adapter (SCA), which will be designed in a commercial 130-nm CMOS technology. This chip will be embedded within a high-speed data acquisition optical link (GBT) to control and monitor the front-end electronics in future high-energy physics experiments. The GBT link provides a transparent transport layer between the SCA and control electronics in the counting room. The proposed SCA supports a variety of common bus protocols to interface with end-user general-purpose electronics. Between the GBT and the SCA a standard 100 Mb/s IEEE-802.3 compatible protocol will be implemented. This standard protocol allows off-line tests of the prototypes using commercial components that support the same standard. The project is justified because embedded applications in modern large HEP experiments require particular care to assure the lowest possible power consumption, still offering the highest reliability demanded by very large particle detectors.

  1. Computer-assisted analyses of (14C)2-DG autoradiographs employing a general purpose image processing system

    Energy Technology Data Exchange (ETDEWEB)

    Porro, C; Biral, G P [Modena Univ. (Italy). Ist. di Fisiologia Umana; Fonda, S; Baraldi, P [Modena Univ. (Italy). Lab. di Bioingegneria della Clinica Oculistica; Cavazzuti, M [Modena Univ. (Italy). Clinica Neurologica

    1984-09-01

    A general purpose image processing system is described, including a B/W TV camera, a high-resolution image processor and display system (TESAK VDC 501), a computer (DEC PDP 11/23), and monochrome and color monitors. Images may be acquired from a microscope equipped with a TV camera or using the TV in direct viewing; the A/D converter and the image processor provide fast (40 ms) and precise (512x512 data points) digitization of the TV signal with a maximum resolution of 256 gray levels. Computer programs, written in FORTRAN and MACRO 11 Assembly Language, have been developed to perform qualitative and quantitative analyses of autoradiographs obtained with the 2-DG method. They include: (1) procedures designed to recognize errors in acquisition due to possible image shading and correct them via software; (2) routines suitable for qualitative analyses of the whole image or selected regions of it, providing the opportunity for pseudocolor coding, statistics, and graphic overlays; (3) programs permitting the conversion of gray levels into metabolic rates of glucose utilization and the display of gray- or color-coded metabolic maps.
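
    The gray-level-to-metabolic-rate conversion mentioned in item (3) can be pictured with the hedged sketch below: gray levels are calibrated against co-exposed 14C standards and then scaled to a metabolic rate. The standard values and the single scale factor are invented placeholders for the full operational equation of the 2-DG method, not the values used by the system described above.

```python
# Hedged sketch: autoradiograph gray levels -> tissue activity -> metabolic rate.
import numpy as np

# Hypothetical co-exposed 14C standards: gray level vs. known activity (nCi/g).
STD_GRAY = np.array([30.0, 60.0, 90.0, 130.0, 170.0, 210.0])
STD_ACTIVITY = np.array([400.0, 250.0, 150.0, 80.0, 40.0, 15.0])

def gray_to_metabolic_rate(gray_image, scale=0.05):
    """Map gray levels to a metabolic-rate image.

    'scale' stands in for the physiological constants (plasma curve, lumped
    constant, ...) of the real 2-DG operational equation.
    """
    gray = np.clip(np.asarray(gray_image, dtype=float), STD_GRAY[0], STD_GRAY[-1])
    activity = np.interp(gray, STD_GRAY, STD_ACTIVITY)  # piecewise-linear calibration
    return scale * activity  # e.g. in umol / 100 g / min

print(gray_to_metabolic_rate([[45, 120], [200, 75]]))
```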

  2. Design and Deployment of a General Purpose, Open Source LoRa to Wi-Fi Hub and Data Logger

    Science.gov (United States)

    DeBell, T. C.; Udell, C.; Kwon, M.; Selker, J. S.; Lopez Alcala, J. M.

    2017-12-01

    Methods and technologies facilitating internet connectivity and near-real-time status updates for in situ environmental sensor data are of increasing interest in Earth Science. However, Open Source, Do-It-Yourself technologies that enable plug-and-play functionality for web-connected sensors and devices remain largely inaccessible for typical researchers in our community. The Openly Published Environmental Sensing Lab at Oregon State University (OPEnS Lab) constructed an Open Source 900 MHz Long Range Radio (LoRa) receiver hub with SD card data logger, Ethernet and Wi-Fi shield, and 3D printed enclosure that dynamically uploads transmissions from multiple wirelessly connected environmental sensing devices. Data transmissions may be received from devices up to 20 km away. The hub time-stamps all transmissions, saves them to the SD card, and uploads them to a Google Drive spreadsheet to be accessed in near-real-time by researchers and GeoVisualization applications (such as ArcGIS) for visualization and analysis. This research expands the possibilities of scientific observation of our Earth, transforming the technology, methods, and culture by combining open-source development and cutting-edge technology. This poster details our methods and evaluates the use of 3D printing, the Arduino Integrated Development Environment (IDE), Adafruit's Open-Hardware Feather development boards, and the WIZNET5500 Ethernet shield for designing this open-source, general purpose LoRa to Wi-Fi data logger.

  3. Heuristic simulation of nuclear systems on a supercomputer using the HAL-1987 general-purpose production-rule analysis system

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1987-01-01

    HAL-1987 is a general-purpose tool for the construction of production-rule analysis systems. It uses the rule-based paradigm from the part of artificial intelligence concerned with knowledge engineering, applying backward-chaining and forward-chaining in an antecedent-consequent logic, and is programmed in Portable Standard Lisp (PSL). The inference engine is flexible and accommodates general additions and modifications to the knowledge base. The system is used in coupled symbolic-procedural programming adaptive methodologies for stochastic simulations. In Monte Carlo simulations of particle transport, the system considers the pre-processing of the input data to the simulation and adaptively controls the variance reduction process as the simulation progresses. This is accomplished through the use of a knowledge base of rules which encompass the user's expertise in the variance reduction process. It is also applied to the construction of model-based systems for monitoring, fault diagnosis and crisis alert in engineering devices, particularly in the field of nuclear reactor safety analysis.
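
    The antecedent-consequent rule firing described above can be pictured with the minimal forward-chaining sketch below. The rules and facts are made-up stand-ins for variance-reduction heuristics; HAL-1987 itself is written in Portable Standard Lisp and also supports backward chaining.

```python
# Minimal forward-chaining sketch of an antecedent-consequent rule system.
RULES = [
    (frozenset({"flux_high", "detector_near_source"}), "increase_splitting"),
    (frozenset({"increase_splitting"}), "rebalance_weight_windows"),
]

def forward_chain(facts, rules):
    """Fire every rule whose antecedents hold until no new consequents appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain({"flux_high", "detector_near_source"}, RULES))
```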

  4. Cafe Variome: general-purpose software for making genotype-phenotype data discoverable in restricted or open access contexts.

    Science.gov (United States)

    Lancaster, Owen; Beck, Tim; Atlan, David; Swertz, Morris; Thangavelu, Dhiwagaran; Veal, Colin; Dalgleish, Raymond; Brookes, Anthony J

    2015-10-01

    Biomedical data sharing is desirable, but problematic. Data "discovery" approaches, which establish the existence rather than the substance of data, precisely connect data owners with data seekers, and thereby promote data sharing. Cafe Variome (http://www.cafevariome.org) was therefore designed to provide a general-purpose, Web-based, data discovery tool that can be quickly installed by any genotype-phenotype data owner, or network of data owners, to make safe or sensitive content appropriately discoverable. Data fields or content of any type can be accommodated, from simple ID and label fields through to extensive genotype and phenotype details based on ontologies. The system provides a "shop window" in front of data, with the main interfaces being a simple search box and a powerful "query-builder" that enables very elaborate queries to be formulated. After a successful search, counts of records are reported grouped by "openAccess" (data may be directly accessed), "linkedAccess" (a source link is provided), and "restrictedAccess" (facilitated data requests and subsequent provision of approved records). An administrator interface provides a wide range of options for system configuration, enabling highly customized single-site or federated networks to be established. Current uses include rare disease data discovery, patient matchmaking, and a Beacon Web service. © 2015 WILEY PERIODICALS, INC.

  5. In vivo dosimetry in intraoperative electron radiotherapy. microMOSFETs, radiochromic films and a general-purpose linac

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Tarjuelo, Juan; Marco-Blancas, Noelia de; Santos-Serra, Agustin; Quiros-Higueras, Juan David [Consorcio Hospitalario Provincial de Castellon, Servicio de Radiofisica y Proteccion Radiologica, Castellon de la Plana (Spain); Bouche-Babiloni, Ana; Morillo-Macias, Virginia; Ferrer-Albiach, Carlos [Consorcio Hospitalario Provincial de Castellon, Servicio de Oncologia Radioterapica, Castellon de la Plana (Spain)

    2014-11-15

    In vivo dosimetry is desirable for the verification, recording, and eventual correction of treatment in intraoperative electron radiotherapy (IOERT). Our aim is to share our experience of metal oxide semiconductor field-effect transistors (MOSFETs) and radiochromic films with patients undergoing IOERT using a general-purpose linac. We used MOSFETs inserted into sterile bronchus catheters and radiochromic films that were cut, digitized, and sterilized by means of gas plasma. In all, 59 measurements were taken from 27 patients involving 15 primary tumors (seven breast and eight non-breast tumors) and 12 relapses. Data were subjected to an outliers' analysis and classified according to their compatibility with the relevant doses. Associations were sought regarding the type of detector, breast and non-breast irradiation, and the radiation oncologist's assessment of the difficulty of detector placement. At the same time, 19 measurements were carried out at the tumor bed with both detectors. MOSFET measurements (D = 93.5%, s_D = 6.5%) were not significantly shifted from film measurements (D = 96.0%, s_D = 5.5%; p = 0.109), and no associations were found (p = 0.526, p = 0.295, and p = 0.501, respectively). As regards measurements performed at the tumor bed with both detectors, MOSFET measurements (D = 95.0%, s_D = 5.4%) were not significantly shifted from film measurements (D = 96.4%, s_D = 5.0%; p = 0.363). In vivo dosimetry can produce satisfactory results at every studied location with a general-purpose linac. Detector choice should depend on user factors, not on the detector performance itself. Surgical team collaboration is crucial to success. (orig.)

  6. Creep properties of forged 2219 T6 aluminum alloy shell of general-purpose heat source-radioisotope thermoelectric generator

    International Nuclear Information System (INIS)

    Hammond, J.P.

    1981-12-01

    The shell (2219 T6 aluminum forging) of the General Purpose Heat Source-Radioisotope Thermoelectric Generator was designed to retain the generator under sufficient elastic stress to secure it during space flight. A major concern was the extent to which the elastic stress would relax by creep. To determine the acceptability of the shell construction material, the following proof tests simulating service were performed: 600 h of testing at 270°C under 24.1 MPa stress followed by 10,000 h of storage at 177°C under 55.1 MPa, both on the ground; and 10,000 h of flight in space at 270°C under 34.4 MPa stress. Additionally, systematic creep testing was performed at 177 and 260°C to establish creep design curves. The creep tests performed at 177°C revealed comparatively large amounts of primary creep followed by small amounts of secondary creep. The early creep is believed to be abetted by unstable substructures that are annealed out during testing at this temperature. The creep tests performed at 270°C showed normal primary creep followed by large amounts of secondary creep. Duplicate proof tests simulating the ground exposure conditions gave results that were in good agreement. The proof test simulating space flight at 270°C gave 0.11% primary creep followed by 0.59% secondary creep. About 10% of the second-stage creep was caused by four or five instantaneous strains, which began at the 4500-h mark. One or two of these strain bursts occurred in each of several other tests at 177 and 260°C but were assessed as very moderate in magnitude. The effect is attributable to a slightly microsegregated condition remaining from the original cast structure.

  7. Development and validation of a general-purpose ASIC chip for the control of switched reluctance machines

    International Nuclear Information System (INIS)

    Chen Haijin; Lu Shengli; Shi Longxing

    2009-01-01

    A general-purpose application specific integrated circuit (ASIC) chip for the control of switched reluctance machines (SRMs) was designed and validated to fill the gap between microcontroller capability and the controller requirements of high-performance switched reluctance drive (SRD) systems. It can be used for the control of an SRM running at either low or high speed, i.e., in either chopped current control (CCC) mode or angular position control (APC) mode. The main functions of the chip include filtering and cycle calculation of rotor angular position signals, commutation logic according to the rotor cycle and the turn-on/turn-off angles (θ_on/θ_off), generation of controllable pulse width modulation (PWM) waveforms, chopping control with adjustable delay time, and commutation control with adjustable delay time. All control parameters of the chip are set online by the microcontroller through a serial peripheral interface (SPI). The chip was designed with the standard-cell-based design methodology and implemented in the Central Semiconductor Manufacturing Corporation (CSMC) 0.5 μm complementary metal-oxide-semiconductor (CMOS) process technology. After a successful automatic test equipment (ATE) test using Nextest's Maverick test system, the chip was further validated in an experimental three-phase 6/2-pole SRD system. Both the ATE test and the experimental validation results show that the chip can meet the control requirements of high-performance SRD systems and simplify controller construction. For a resolution of 0.36 deg. (electrical), the chip's maximum processable frequency of rotor angular position signals is 10 kHz, which corresponds to 300,000 rev/min for a three-phase 6/2-pole SRM.
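
    The commutation logic based on rotor angle and the turn-on/turn-off angles can be illustrated with the hedged software model below: a phase conducts when the electrical rotor angle falls inside its [θ_on, θ_off) window. This is a behavioural sketch only, not the ASIC's gate-level logic, and the three-phase offsets are assumed for illustration.

```python
# Behavioural sketch of angle-based commutation (angles in electrical degrees).
PHASE_OFFSETS = {"A": 0.0, "B": 120.0, "C": 240.0}  # assumed three-phase offsets

def phase_enabled(rotor_angle, theta_on, theta_off, phase):
    """Return True if the given phase should conduct at this rotor angle."""
    angle = (rotor_angle - PHASE_OFFSETS[phase]) % 360.0
    if theta_on <= theta_off:
        return theta_on <= angle < theta_off
    return angle >= theta_on or angle < theta_off  # window wraps past 360 deg

print(phase_enabled(90.0, 30.0, 150.0, "A"))   # True: inside the window
print(phase_enabled(200.0, 30.0, 150.0, "A"))  # False: outside the window
```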

  8. The EB factory project. I. A fast, neural-net-based, general purpose light curve classifier optimized for eclipsing binaries

    International Nuclear Information System (INIS)

    Paegert, Martin; Stassun, Keivan G.; Burger, Dan M.

    2014-01-01

    We describe a new neural-net-based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for the identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as the Large Synoptic Survey Telescope. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information, results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes, including RR Lyrae, Mira, and delta Scuti. However, the classifier currently has difficulty discriminating between different sub-classes of eclipsing binaries, and suffers a relatively low (∼60%) retrieval rate for multi-mode delta Cepheid stars. We find that it is imperative to train the classifier's neural network with exemplars that include the full range of light curve quality over which the classifier will be expected to perform; the classifier performs well on noisy light curves only when trained with noisy exemplars. The classifier source code, ancillary programs, a trained neural net, and a guide for use are provided.
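
    The compact representation described above (a coarse shape vector plus a chi-square morphology statistic) can be approximated by the sketch below. It illustrates the general idea only; the paper's actual geometric encoding, bin count and chi-square definition are not reproduced here.

```python
# Sketch of a compact light-curve feature vector: phase-binned shape + chi-square.
import numpy as np

def light_curve_features(time, mag, err, period, n_bins=16):
    time, mag, err = map(np.asarray, (time, mag, err))
    phase = (time / period) % 1.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    shape = np.full(n_bins, np.nan)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        in_bin = (phase >= lo) & (phase < hi)
        if in_bin.any():
            shape[i] = np.median(mag[in_bin])          # coarse geometric shape
    shape = (shape - np.nanmean(shape)) / (np.nanstd(shape) + 1e-12)
    # Higher-order morphology: reduced chi-square against a constant model.
    chi2 = np.sum(((mag - mag.mean()) / err) ** 2) / max(mag.size - 1, 1)
    return np.nan_to_num(shape), chi2
```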

  9. The Treatment of Polysemy and Homonymy in Monolingual General-purpose Dictionaries with Special Reference to Isichazamazwi SesiNdebele

    Directory of Open Access Journals (Sweden)

    Eventhough Ndlovu

    2011-10-01

    Full Text Available

    ABSTRACT: This article focuses on the treatment of polysemy and homonymy in general-purpose monolingual dictionaries with special reference to Isichazamazwi SesiNdebele. It was found that there are some inconsistencies in the treatment of polysemous and homonymous entries in this dictionary. The article shows that an overreliance on one criterion, particularly etymology, to distinguish polysemy and homonymy is often misleading and unreliable. Polysemy itself has its own inherent complexities, among these being the problem of determining the exact number of meanings of a polysemous lemma. When the meanings of a polysemous lemma are listed, the central or primary meaning, which is not always easily ascertainable, should come first. A holistic approach is proposed to distinguish polysemy and homonymy, which entails the use of the following criteria: etymology, relatedness vs unrelatedness of meaning, componential analysis, the identification of the central or core meaning and the test of ambiguity. Whatever results are obtained from a particular criterion, these findings must be compared with those of other criteria, and verified against native speakers' intuitive knowledge and introspective judgements.

  10. The ESPAT tool: a general-purpose DSS shell for solving stochastic optimization problems in complex river-aquifer systems

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2015-04-01

    Stochastic programming methods are better suited than deterministic ones to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their practical use is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded in GAMS and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results, so no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization in small surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments. Its size makes it possible to use either the SDP or
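
    To make the "standard SDP approach" of the ESPAT_SDP module concrete, the sketch below runs a toy stochastic dynamic programming backward recursion for a single reservoir. It is purely illustrative: the real tool is coded in GAMS, and the storage grid, inflow probabilities and benefit function here are hypothetical.

```python
# Toy SDP backward recursion for one reservoir with a discretized storage grid.
STORAGES = range(0, 5)            # discretized storage volumes
INFLOWS = [(1, 0.5), (2, 0.5)]    # (inflow, probability) per stage
RELEASES = range(0, 4)
STAGES = 12                       # e.g. monthly decision stages

def benefit(release):             # hypothetical hydro-economic benefit
    return release ** 0.5

def solve_sdp():
    future = {s: 0.0 for s in STORAGES}        # terminal value function
    policy = {}
    for t in reversed(range(STAGES)):
        value = {}
        for s in STORAGES:
            best_val, best_r = float("-inf"), None
            for r in RELEASES:
                total, feasible = 0.0, True
                for q, p in INFLOWS:
                    s_next = s + q - r         # mass balance
                    if s_next not in STORAGES:
                        feasible = False
                        break
                    total += p * (benefit(r) + future[s_next])
                if feasible and total > best_val:
                    best_val, best_r = total, r
            value[s], policy[(t, s)] = best_val, best_r
        future = value
    return policy

print(solve_sdp()[(0, 3)])        # best release at stage 0 with storage level 3
```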

  11. Implementing and analyzing the multi-threaded LP-inference

    Science.gov (United States)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    Logical production equations provide new possibilities for backward-inference optimization in intelligent production-type systems. The strategy of relevant backward inference aims to minimize the number of queries to an external information source (either a database or an interactive user). The idea of the method is to compute the initial set of preimages and search it for the true preimage. Each stage can be organized independently and in parallel, and the actual work within a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced "pipeline" scheme of parallel computation, which increases the degree of parallelism. The paper also provides some details of the LP-structures implementation.
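
    The pipelined parallelism described above can be pictured with the generic two-stage sketch below: one thread generates candidate preimages while another checks them against a stand-in for the external information source. It illustrates the pipeline idea only and is not the LP-structure implementation.

```python
# Generic two-stage pipeline: candidate generation feeds candidate checking.
import queue
import threading

candidates = queue.Queue()
confirmed = []

def generate(goals):
    for g in goals:
        candidates.put(f"preimage_of_{g}")  # stage 1: build candidate preimages
    candidates.put(None)                    # sentinel: no more work

def check():
    while (item := candidates.get()) is not None:  # stage 2: query the "source"
        if hash(item) % 2 == 0:                    # stand-in for the real oracle
            confirmed.append(item)

producer = threading.Thread(target=generate, args=(["g1", "g2", "g3"],))
consumer = threading.Thread(target=check)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(confirmed)
```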

  12. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2018-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. With the evolution of the CPU market to many-core systems, both the ATLAS offline reconstruction and High-Level Trigger (HLT) software will have to transition from a multi-process to a multithreaded processing paradigm in order not to exhaust the available physical memory of a typical compute node. The new multithreaded ATLAS software framework, AthenaMT, has been designed from the ground up to support both the offline and online use-cases with the aim to further harmonize the offline and trigger algorithms. The latter is crucial both in terms of maintenance effort and to guarantee the high trigger efficiency and rejection factors needed for the next two decades of data-taking. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while...

  13. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2017-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while retaining the key aspects of trigger functionality, including regional reconstruction and early event rejection. We report on the first experience of migrating trigger algorithms to this new framework and present the next steps towards a full implementation of the ATLAS trigger within AthenaMT.

  14. A Multi-Threaded Cryptographic Pseudorandom Number Generator Test Suite

    Science.gov (United States)

    2016-09-01

  15. An object-oriented multi-threaded software beamformation toolbox

    DEFF Research Database (Denmark)

    Hansen, Jens Munk; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2011-01-01

    Focusing and apodization are an essential part of signal processing in ultrasound imaging. Although the fundamental principles are simple, the dramatic increase in computational power of CPUs, GPUs, and FPGAs motivates the development of software-based beamformers, which further improves image... new beam formation strategies. It is a general 3D implementation capable of handling a multitude of focusing methods, interpolation schemes, and parametric and dynamic apodization. Despite being flexible, it is capable of exploiting parallelization on a single computer, on a cluster, or on both. On a single computer, it mimics the parallelization in a scanner containing multiple beamformers. The focusing is determined using the positions of the transducer elements, the presence of virtual sources, and the focus points. For interpolation, a number of interpolation schemes can be chosen, e.g. linear, polyno...

  16. AN MHD AVALANCHE IN A MULTI-THREADED CORONAL LOOP

    Energy Technology Data Exchange (ETDEWEB)

    Hood, A. W.; Cargill, P. J.; Tam, K. V. [School of Mathematics and Statistics, University of St Andrews, St Andrews, Fife, KY16 9SS (United Kingdom); Browning, P. K., E-mail: awh@st-andrews.ac.uk [School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom)

    2016-01-20

    For the first time, we demonstrate how an MHD avalanche might occur in a multithreaded coronal loop. Considering 23 non-potential magnetic threads within a loop, we use 3D MHD simulations to show that only one thread needs to be unstable in order to start an avalanche even when the others are below marginal stability. This has significant implications for coronal heating in that it provides for energy dissipation with a trigger mechanism. The instability of the unstable thread follows the evolution determined in many earlier investigations. However, once one stable thread is disrupted, it coalesces with a neighboring thread and this process disrupts other nearby threads. Coalescence with these disrupted threads then occurs leading to the disruption of yet more threads as the avalanche develops. Magnetic energy is released in discrete bursts as the surrounding stable threads are disrupted. The volume integrated heating, as a function of time, shows short spikes suggesting that the temporal form of the heating is more like that of nanoflares than of constant heating.

  17. Adaptive control in multi-threaded iterated integration

    International Nuclear Information System (INIS)

    Doncker, Elise de; Yuasa, Fukuko

    2013-01-01

    In recent years we have developed a technique for the direct computation of Feynman loop integrals, which are notorious for the occurrence of integrand singularities. Especially for handling singularities in the interior of the domain, we approximate the iterated integral using an adaptive algorithm in the coordinate directions. We present a novel multi-core parallelization scheme for adaptive multivariate integration, assigning threads to the rule evaluations in the outer dimensions of the iterated integral. The method ensures a large parallel granularity, as each function evaluation by itself comprises an integral over the lower dimensions, while the application of the threads is governed by the adaptive control at the outer level. We give computational results for a test set of 3- to 6-dimensional integrals, where several problems exhibit loop-integral behavior.
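
    The thread assignment described above can be sketched as follows: threads evaluate the outer integrand in parallel, and each such evaluation is itself an inner integral over the remaining dimension(s). The fixed midpoint rule used here is only a stand-in for the adaptive rules of the actual code, and in CPython the global interpreter lock limits the speed-up for pure-Python integrands; the structure, not the performance, is the point.

```python
# Threads over the outer dimension; each outer evaluation is an inner integral.
from concurrent.futures import ThreadPoolExecutor

def inner_integral(x, f, a, b, n=2000):
    h = (b - a) / n
    return sum(f(x, a + (i + 0.5) * h) for i in range(n)) * h

def outer_integral(f, ax, bx, ay, by, n=200, workers=8):
    h = (bx - ax) / n
    xs = [ax + (i + 0.5) * h for i in range(n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        vals = pool.map(lambda x: inner_integral(x, f, ay, by), xs)
        return sum(vals) * h

# Example: the integral of x*y over the unit square is 0.25.
print(outer_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0))
```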

  18. Field Experimentation Design for Multi-Threaded Analysis

    National Research Council Canada - National Science Library

    Tackett, Gregory

    2001-01-01

    .... This report discusses the OSD definition of military utility, the decomposition and allocation of requirements, the responsibilities of organizations, and the Verification, Validation, and Accreditation (VV&A) of models, simulations, and data.

  19. A randomised comparison between an inexpensive, general-purpose headlight and a purpose-built surgical headlight on users' visual acuity and colour vision.

    Science.gov (United States)

    Street, I; Sayles, M; Nistor, M; McRae, A R

    2014-02-01

    To determine if there are any differences in near visual acuity and colour vision between an inexpensive general-purpose light emitting diode (LED) headlight and a purpose-built surgical LED headlight. A prospective study was conducted comparing near visual acuity and colour vision sequentially, with the headlights tested in random order in a testing room with a constant minimal amount of background light. The participants were NHS employee volunteers, with self-declared normal (or corrected) vision, working in occupations requiring full literacy. For visual acuity, the outcome was the smallest font legible with each headlight when the subject read a near visual acuity test card. For colour vision, the outcome was passing or failing the Ishihara test. There was no statistically significant difference between the general-purpose and the purpose-built headlights in users' near visual acuity or colour vision.

  20. Bilingual Language Control and General Purpose Cognitive Control among Individuals with Bilingual Aphasia: Evidence Based on Negative Priming and Flanker Tasks

    Science.gov (United States)

    Dash, Tanya; Kar, Bhoomika R.

    2014-01-01

    Background. Bilingualism results in an added advantage with respect to cognitive control. The interaction between bilingual language control and general purpose cognitive control systems can also be understood by studying executive control among individuals with bilingual aphasia. Objectives. The current study examined the subcomponents of cognitive control in bilingual aphasia. A case study approach was used to investigate whether cognitive control and language control are two separate systems and how factors related to bilingualism interact with control processes. Methods. Four individuals with bilingual aphasia performed a language background questionnaire, picture description task, and two experimental tasks (nonlinguistic negative priming task and linguistic and nonlinguistic versions of flanker task). Results. A descriptive approach was used to analyse the data using reaction time and accuracy measures. The cumulative distribution function plots were used to visualize the variations in performance across conditions. The results highlight the distinction between general purpose cognitive control and bilingual language control mechanisms. Conclusion. All participants showed predominant use of the reactive control mechanism to compensate for the limited resources system. Independent yet interactive systems for bilingual language control and general purpose cognitive control were postulated based on the experimental data derived from individuals with bilingual aphasia. PMID:24982591

  1. Bilingual language control and general purpose cognitive control among individuals with bilingual aphasia: evidence based on negative priming and flanker tasks.

    Science.gov (United States)

    Dash, Tanya; Kar, Bhoomika R

    2014-01-01

    Bilingualism results in an added advantage with respect to cognitive control. The interaction between bilingual language control and general purpose cognitive control systems can also be understood by studying executive control among individuals with bilingual aphasia. Objectives: The current study examined the subcomponents of cognitive control in bilingual aphasia. A case study approach was used to investigate whether cognitive control and language control are two separate systems and how factors related to bilingualism interact with control processes. Four individuals with bilingual aphasia performed a language background questionnaire, picture description task, and two experimental tasks (nonlinguistic negative priming task and linguistic and nonlinguistic versions of flanker task). A descriptive approach was used to analyse the data using reaction time and accuracy measures. The cumulative distribution function plots were used to visualize the variations in performance across conditions. The results highlight the distinction between general purpose cognitive control and bilingual language control mechanisms. All participants showed predominant use of the reactive control mechanism to compensate for the limited resources system. Independent yet interactive systems for bilingual language control and general purpose cognitive control were postulated based on the experimental data derived from individuals with bilingual aphasia.

  2. Pelletron general purpose scattering chamber

    International Nuclear Information System (INIS)

    Chatterjee, A.; Kailas, S.; Kerekette, S.S.; Navin, A.; Kumar, Suresh

    1993-01-01

    A medium-sized stainless steel scattering chamber has been constructed for nuclear scattering and reaction experiments at the 14UD pelletron accelerator facility. It has been designed so that several types of detectors, ranging from small silicon surface-barrier detectors to medium-sized gas detectors and NaI detectors, can be conveniently positioned inside the chamber for the detection of charged particles. The chamber has been planned to perform the following types of experiments: angular distributions of elastically scattered particles, fission fragments and other charged particles, and angular correlations for charged particles, e.g. protons, alphas and fission fragments. (author). 2 figs

  3. General purpose nuclear irradiation chamber

    International Nuclear Information System (INIS)

    Nurul Fadzlin Hasbullah; Nuurul Iffah Che Omar; Nahrul Khair Alang Md Rashid; Jaafar Abdullah

    2013-01-01

    Nuclear technology is in great demand in medicine, industry, and research. Smoke detectors in our homes, medical treatments and new varieties of plants obtained by irradiating seeds are just a few examples of the benefits of nuclear technology. A portable neutron source such as californium-252 (252Cf), available at the Industrial Technology Division (BTI/PAT), Malaysian Nuclear Agency, has a half-life of 2.645 years. However, 252Cf is known to emit gamma radiation in addition to neutrons. This chamber therefore aims to provide proper gamma shielding for samples, so that irradiation with a mixed neutron-gamma field can be distinguished from irradiation with pure neutron radiation. The chamber is also compatible with other portable neutron sources such as 241Am-Be, as well as with the TRIGA PUSPATI reactor for higher neutron doses. The chamber was designed through a collaborative effort between the Kulliyyah of Engineering, IIUM, and the Industrial Technology Division (BTI), Malaysian Nuclear Agency. (Author)

  4. Performance of the Research Animal Holding Facility (RAHF) and General Purpose Work Station (GPWS) and other hardware in the microgravity environment

    Science.gov (United States)

    Hogan, Robert P.; Dalton, Bonnie P.

    1991-01-01

    This paper discusses the performance of the Research Animal Holding Facility (RAHF) and General Purpose Work Station (GPWS), plus other associated hardware, during the recent flight of Spacelab Life Sciences 1 (SLS-1). The RAHF was developed to provide proper housing (food, water, temperature control, lighting and waste management) for up to 24 rodents during flights on the Spacelab. The GPWS was designed to contain particulates and toxic chemicals generated during plant and animal handling and dissection/fixation activities during space flights. A history of the hardware development, as well as the redesign activities prior to the actual flight, is discussed.

  5. Validity of silhouette showcards as a measure of body size and obesity in a population in the African region: A practical research tool for general-purpose surveys.

    Science.gov (United States)

    Yepes, Maryam; Viswanathan, Barathi; Bovet, Pascal; Maurer, Jürgen

    2015-01-01

    The purpose of this study is to validate the Pulvers silhouette showcard as a measure of weight status in a population in the African region. This tool is particularly useful when scarce resources do not allow for direct anthropometric measurements, due to limited survey time or lack of measurement technology, in face-to-face general-purpose surveys or in mailed, online, or mobile device-based surveys. A cross-sectional study was conducted in the Republic of Seychelles with a sample of 1240 adults. We compared self-reported body sizes measured with Pulvers' silhouette showcards to four measurements of body size and adiposity: body mass index (BMI), measured body fat percentage, waist circumference, and waist-to-height ratio. The accuracy of silhouettes as an obesity indicator was examined using sex-specific receiver operating characteristic (ROC) analysis, and the reliability of this tool in detecting socioeconomic gradients in obesity was compared with BMI-based measurements. Our study supports silhouette body size showcards as a valid and reliable survey tool to measure self-reported body size and adiposity in an African population. The mean correlation coefficients of self-reported silhouettes with measured BMI were 0.80 in men and 0.81 in women. The showcards can thus serve general-purpose surveys of obesity in social sciences, where limited resources do not allow for direct anthropometric measurements.
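    The sex-specific ROC analysis described above can be illustrated with a short sketch: an ordinal silhouette score is evaluated as a classifier for obesity (here taken as BMI of 30 or above) via a rank-based AUC, which is equivalent to the Mann-Whitney statistic. The arrays and the cut-off below are hypothetical stand-ins, not the study's data or analysis code.

        # Illustrative sketch: AUC of an ordinal silhouette score for detecting obesity (BMI >= 30).
        # The arrays below are hypothetical; the study's own data and software are not reproduced.
        import numpy as np

        def rank_auc(scores, labels):
            """Rank-based AUC (equivalent to the Mann-Whitney U statistic)."""
            scores = np.asarray(scores, dtype=float)
            labels = np.asarray(labels, dtype=bool)
            pos, neg = scores[labels], scores[~labels]
            # Count pairs where the obese subject receives the higher silhouette; ties count 0.5.
            wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
            return wins / (len(pos) * len(neg))

        silhouette = np.array([3, 5, 7, 8, 4, 6, 9, 2])    # self-reported showcard number (1-9)
        bmi        = np.array([22, 27, 31, 35, 24, 29, 38, 20])
        print("AUC for obesity detection:", rank_auc(silhouette, bmi >= 30))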

  6. The time-resolved and extreme conditions XAS (TEXAS) facility at the European Synchrotron Radiation Facility: the general-purpose EXAFS bending-magnet beamline BM23

    Energy Technology Data Exchange (ETDEWEB)

    Mathon, O., E-mail: mathon@esrf.fr; Beteva, A.; Borrel, J.; Bugnazet, D.; Gatla, S.; Hino, R.; Kantor, I.; Mairs, T. [European Synchrotron Radiation Facility, CS 40220, 38043 Grenoble Cedex 9 (France); Munoz, M. [European Synchrotron Radiation Facility, CS 40220, 38043 Grenoble Cedex 9 (France); Université Joseph Fourier, 1381 rue de la Piscine, BP 53, 38041 Grenoble Cedex 9 (France); Pasternak, S.; Perrin, F.; Pascarelli, S. [European Synchrotron Radiation Facility, CS 40220, 38043 Grenoble Cedex 9 (France)

    2015-10-17

    BM23 is the general-purpose EXAFS bending-magnet beamline at the ESRF, replacing the former BM29 beamline in the framework of the ESRF upgrade. Its mission is to serve the whole XAS user community by providing access to a basic service in addition to the many specialized instruments available at the ESRF. BM23 offers high signal-to-noise ratio EXAFS in a large energy range (5–75 keV), continuous energy scanning for quick-EXAFS on the second timescale and a micro-XAS station delivering a spot size of 4 µm × 4 µm FWHM. It is a user-friendly facility featuring a high degree of automation, online EXAFS data reduction and a flexible sample environment.

  7. The time-resolved and extreme conditions XAS (TEXAS) facility at the European Synchrotron Radiation Facility: the general-purpose EXAFS bending-magnet beamline BM23.

    Science.gov (United States)

    Mathon, O; Beteva, A; Borrel, J; Bugnazet, D; Gatla, S; Hino, R; Kantor, I; Mairs, T; Munoz, M; Pasternak, S; Perrin, F; Pascarelli, S

    2015-11-01

    BM23 is the general-purpose EXAFS bending-magnet beamline at the ESRF, replacing the former BM29 beamline in the framework of the ESRF upgrade. Its mission is to serve the whole XAS user community by providing access to a basic service in addition to the many specialized instruments available at the ESRF. BM23 offers high signal-to-noise ratio EXAFS in a large energy range (5-75 keV), continuous energy scanning for quick-EXAFS on the second timescale and a micro-XAS station delivering a spot size of 4 µm × 4 µm FWHM. It is a user-friendly facility featuring a high degree of automation, online EXAFS data reduction and a flexible sample environment.

  8. The new versatile general purpose surface-muon instrument (GPS) based on silicon photomultipliers for μSR measurements on a continuous-wave beam.

    Science.gov (United States)

    Amato, A; Luetkens, H; Sedlak, K; Stoykov, A; Scheuermann, R; Elender, M; Raselli, A; Graf, D

    2017-09-01

    We report on the design and commissioning of a new spectrometer for muon-spin relaxation/rotation studies installed at the Swiss Muon Source (SμS) of the Paul Scherrer Institute (PSI, Switzerland). This instrument is essentially a new design and replaces the old general-purpose surface-muon (GPS) instrument that has long been the workhorse of the μSR user facility at PSI. By making use of muon and positron detectors made of plastic scintillators read out by silicon photomultipliers, a time resolution of the complete instrument of about 160 ps (standard deviation) could be achieved. In addition, the absence of light guides, which are needed in traditionally built μSR instruments to deliver the scintillation light to photomultiplier tubes located outside the applied magnetic fields, allowed us to design a compact instrument with a detector set covering an increased solid angle compared with the old GPS.

  9. High Precision Thermal, Structural and Optical Analysis of an External Occulter Using a Common Model and the General Purpose Multi-Physics Analysis Tool Cielo

    Science.gov (United States)

    Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg

    2011-01-01

    The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.

  10. Development of a General-Purpose Analysis System Based on a Programmable Fluid Processor Final Report CRADA No. TC-2027-01

    Energy Technology Data Exchange (ETDEWEB)

    McConaghy, C. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gascoyne, P. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-28

    The purpose of this project was to develop a general-purpose analysis system based on a programmable fluid processor (PFP). The PFP is an array of electrodes surrounded by fluid reservoirs and injectors. Injected droplets of various reagents are manipulated and combined on the array by dielectrophoretic (DEP) forces. The goal was to create a small handheld device that could accomplish the tasks currently undertaken by much larger, time-consuming, manual manipulation in the lab. The entire effort was funded by DARPA under the Bio-Flips program. MD Anderson Cancer Center was the PI for the DARPA effort. The Bio-Flips program was a 3-year program that ran from September 2000 to September 2003. The CRADA ran somewhat behind the Bio-Flips program, from June 2001 to June 2004, with a no-cost extension to September 2004.

  11. Evaluation of general-purpose collimators against high-resolution collimators with resolution recovery with a view to reducing radiation dose in myocardial perfusion SPECT: A preliminary phantom study.

    Science.gov (United States)

    Armstrong, Ian S; Saint, Kimberley J; Tonge, Christine M; Arumugam, Parthiban

    2017-04-01

    There is a growing focus on reducing the radiation dose to patients undergoing myocardial perfusion imaging. This preliminary phantom study aims to evaluate the use of general-purpose collimators with resolution recovery (RR) to allow a reduction in patient radiation dose. Images of a cardiac torso phantom with inferior and anterior wall defects were acquired on a GE Infinia and a Siemens Symbia T6 using both high-resolution and general-purpose collimators. Imaging time, a surrogate for administered activity, was reduced by between 35% and 40% with the general-purpose collimators to match the counts acquired with the high-resolution collimators. Images were reconstructed with RR, with and without attenuation correction. Two pixel sizes were also investigated. Defect contrast was measured. Defect contrast on general-purpose images was superior or comparable to that of the high-resolution collimators on both systems despite the reduced imaging time. Infinia general-purpose images required a smaller pixel size to maintain defect contrast, while Symbia T6 general-purpose images did not require a change from the pixel size used for standard myocardial perfusion SPECT. This study suggests that general-purpose collimators with RR offer the potential for substantial dose reductions while providing similar or better image quality than images acquired using high-resolution collimators.

  12. Real-Time and Real-Fast Performance of General-Purpose and Real-Time Operating Systems in Multithreaded Physical Simulation of Complex Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Carlos Garre

    2014-01-01

    Physical simulation is a valuable tool in many fields of engineering for the tasks of design, prototyping, and testing. General-purpose operating systems (GPOS) are designed for real-fast tasks, such as offline simulation of complex physical models that should finish as soon as possible. Interfacing hardware at a given rate (as in a hardware-in-the-loop test) requires instead maximizing time determinism, for which real-time operating systems (RTOS) are designed. In this paper, the real-fast and real-time performance of an RTOS and a GPOS are compared when simulating models of high complexity with large time steps. This type of application is common in the automotive industry and requires a good trade-off between real-fast and real-time performance. The performance of an RTOS and a GPOS is compared by running a tire model scalable in the number of degrees of freedom and parallel threads. The benchmark shows that the GPOS presents better performance in real-fast runs but worse in real-time runs, due to nonexplicit task switches and to the latency associated with interprocess communication (IPC) and task switches.
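    The real-fast versus real-time distinction used in this benchmark can be made concrete with a small timing sketch: total wall-clock time measures real-fast throughput, while the worst-case lateness of each cycle with respect to its deadline measures real-time determinism. The fixed-rate loop below only illustrates the measured quantities; it is not the authors' tire-model benchmark.

        # Illustrative sketch: measure throughput (real-fast) and worst-case lateness (real-time)
        # of a fixed-step simulation loop. The workload is a stand-in for a physical model step.
        import time

        STEP = 0.001          # nominal 1 ms cycle, as in a hardware-in-the-loop test
        N_STEPS = 2000

        def model_step():
            s = 0.0
            for i in range(2000):      # placeholder numeric workload
                s += i * i
            return s

        start = time.perf_counter()
        worst_lateness = 0.0
        next_deadline = start + STEP
        for _ in range(N_STEPS):
            model_step()
            now = time.perf_counter()
            worst_lateness = max(worst_lateness, now - next_deadline)   # lateness w.r.t. deadline
            while time.perf_counter() < next_deadline:                   # busy-wait to the boundary
                pass
            next_deadline += STEP

        elapsed = time.perf_counter() - start
        print(f"throughput: {N_STEPS / elapsed:.0f} steps/s, "
              f"worst-case lateness: {worst_lateness * 1e6:.0f} us")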

  13. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    Science.gov (United States)

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineering effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic-scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic-scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.

  14. Improved detection of sentinel lymph nodes in SPECT/CT images acquired using a low- to medium-energy general-purpose collimator.

    Science.gov (United States)

    Yoneyama, Hiroto; Tsushima, Hiroyuki; Kobayashi, Masato; Onoguchi, Masahisa; Nakajima, Kenichi; Kinuya, Seigo

    2014-01-01

    The use of the low-energy high-resolution (LEHR) collimator for lymphoscintigraphy causes star-shaped artifacts at injection sites. The aim of this study was to confirm whether the lower resolution of the low- to medium-energy general-purpose (LMEGP) collimator is compensated for by the decrease in septal penetration and the reduction in star-shaped artifacts. A total of 106 female patients with breast cancer, diagnosed by biopsy, were enrolled in this study. Tc-99m phytate (37 MBq, 1 mCi) was injected around the tumor, and planar and SPECT/CT images were obtained after 3 to 4 hours. When sentinel lymph nodes (SLNs) could not be identified from planar and SPECT/CT images using the LEHR collimator, we repeated the study with the LMEGP collimator. Planar imaging performed using the LEHR and LEHR + LMEGP collimators positively identified SLNs in 96.2% (102/106) and 99.1% (105/106) of the patients, respectively. Using the combination of planar and SPECT/CT imaging with the LEHR and LEHR + LMEGP collimators, SLNs were positively identified in 97.2% (103/106) and 100% (106/106) of the patients, respectively. The LMEGP collimator provided better results than the LEHR collimator because of its lower degree of septal penetration. The use of the LMEGP collimator improved SLN detection.

  15. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with simple systems such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influence of the customized parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter list showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical model, particle transport mechanics and geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
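    As a minimal illustration of the kind of comparison described above, the sketch below computes the maximum point-wise deviation between two percentage depth dose (PDD) curves and the shift of the proton range, here taken as the depth of the distal 80% dose level. The curves are hypothetical toy data; the paper's actual analysis is not reproduced.

        # Illustrative sketch: compare a candidate PDD curve against a reference (e.g. FLUKA)
        # via the maximum point-wise deviation and the distal-80% range. Data are hypothetical.
        import numpy as np

        depth = np.linspace(0.0, 20.0, 201)                      # depth in water [cm]
        pdd_ref  = np.exp(-((depth - 15.0) ** 2) / 4.0) * 100    # toy Bragg-peak-like curves
        pdd_test = np.exp(-((depth - 15.1) ** 2) / 4.2) * 100

        def distal_range(depth, pdd, level=80.0):
            """Depth beyond the maximum where the dose first falls to `level` percent."""
            i_max = int(np.argmax(pdd))
            distal = pdd[i_max:]
            j = int(np.argmax(distal <= level))                  # first index at/below the level
            return depth[i_max + j]

        print("max PDD deviation [%]:", np.max(np.abs(pdd_test - pdd_ref)))
        print("range shift [cm]:", distal_range(depth, pdd_test) - distal_range(depth, pdd_ref))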

  16. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    International Nuclear Information System (INIS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-01-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with simple systems such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influence of the customized parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter list showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical model, particle transport mechanics and geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.

  17. GPScheDVS: A New Paradigm of the Autonomous CPU Speed Control for Commodity-OS-based General-Purpose Mobile Computers with a DVS-friendly Task Scheduling

    OpenAIRE

    Kim, Sookyoung

    2008-01-01

    This dissertation studies the problem of increasing battery life-time and reducing CPU heat dissipation without degrading system performance in commodity-OS-based general-purpose (GP) mobile computers using the dynamic voltage scaling (DVS) function of modern CPUs. The dissertation especially focuses on the impact of task scheduling on the effectiveness of DVS in achieving this goal. The task scheduling mechanism used in most contemporary general-purpose operating systems (GPOS) prioritizes t...

  18. IBM Demonstrates a General-Purpose, High-Performance, High-Availability Cloud-Hosted Data Distribution System With Live GOES-16 Weather Satellite Data

    Science.gov (United States)

    Snyder, P. L.; Brown, V. W.

    2017-12-01

    IBM has created a general-purpose, data-agnostic solution that provides high performance, low data latency, high availability, scalability, and persistent access to the captured data, regardless of source or type. This capability is hosted on commercially available cloud environments and uses much faster, more efficient, reliable, and secure data transfer protocols than the more typically used FTP. The design incorporates completely redundant data paths at every level, including at the cloud data center level, in order to provide the highest assurance of data availability to the data consumers. IBM has been successful in building and testing a Proof of Concept instance on our IBM Cloud platform to receive and disseminate actual GOES-16 data as it is being downlinked. This solution leverages the inherent benefits of a cloud infrastructure configured and tuned for continuous, stable, high-speed data dissemination to data consumers worldwide at the downlink rate. It also is designed to ingest data from multiple simultaneous sources and disseminate data to multiple consumers. Nearly linear scalability is achieved by adding servers and storage. The IBM Proof of Concept system has been tested with our partners to achieve in excess of 5 Gigabits/second over public internet infrastructure. In tests with live GOES-16 data, the system routinely achieved 2.5 Gigabits/second pass-through to The Weather Company from the University of Wisconsin-Madison SSEC. Simulated data was also transferred from the Cooperative Institute for Climate and Satellites - North Carolina to The Weather Company, as well. The storage node allocated to our Proof of Concept system as tested was sized at 480 Terabytes of RAID-protected disk as a worst-case sizing to accommodate the data from four GOES-16 class satellites for 30 days in a circular buffer. This shows that an abundance of performance and capacity headroom exists in the IBM design that can be applied to additional missions.

  19. Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting.

    Science.gov (United States)

    Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J

    2016-06-01

    Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, and (3) splitting each such observation into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data, including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
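    A minimal sketch of the IPCW idea, under the simplifying assumption of random censoring that does not depend on covariates: the censoring survival function G(t) is estimated with a Kaplan-Meier estimator applied to the censoring times, and each subject whose outcome status at the prediction horizon is known is weighted by 1/G of the relevant time. This illustrates the weighting scheme only; it is not the authors' implementation, and the follow-up data are hypothetical.

        # Illustrative IPCW sketch: weight fully observed subjects by the inverse probability
        # of remaining uncensored up to their (event or horizon) time. Data are hypothetical.
        import numpy as np

        def km_censoring_survival(times, event):
            """Kaplan-Meier estimate of the censoring survival G(t); event=1 means the outcome
            occurred (so the subject was not censored), event=0 means the subject was censored."""
            order = np.argsort(times)
            t, e = times[order], event[order]
            n = len(t)
            surv, g = [], 1.0
            for i in range(n):
                at_risk = n - i
                if e[i] == 0:                    # a censoring "event" for G
                    g *= 1.0 - 1.0 / at_risk
                surv.append((t[i], g))
            return surv

        def censoring_prob(surv, t):
            g = 1.0
            for ti, gi in surv:
                if ti <= t:
                    g = gi
                else:
                    break
            return g

        # Hypothetical follow-up data: time (years), event indicator, 5-year horizon.
        times  = np.array([1.2, 6.0, 3.4, 2.1, 5.5, 0.8, 4.9])
        events = np.array([1,   0,   0,   1,   0,   1,   0  ])
        horizon = 5.0
        surv = km_censoring_survival(times, events)

        weights = np.zeros(len(times))
        for i, (t, e) in enumerate(zip(times, events)):
            observed = (e == 1 and t <= horizon) or (t >= horizon)  # status known at the horizon
            if observed:
                weights[i] = 1.0 / censoring_prob(surv, min(t, horizon))
        print("IPCW weights:", np.round(weights, 2))   # censored-early subjects get weight 0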

  20. ProtDCal: A program to compute general-purpose-numerical descriptors for sequences and 3D-structures of proteins.

    Science.gov (United States)

    Ruiz-Blanco, Yasser B; Paz, Waldo; Green, James; Marrero-Ponce, Yovani

    2015-05-16

    The software is intended to provide a useful tool for the general-purpose encoding of protein sequences and structures, with applications in protein classification, similarity analyses and function prediction.

  1. Can Universities Profit from General Purpose Inventions?

    DEFF Research Database (Denmark)

    Barirani, Ahmad; Beaudry, Catherine; Agard, Bruno

    2017-01-01

    The lack of control over downstream assets can hinder universities’ ability to extract rents from their inventive activities. We explore this possibility by assessing the relationship between invention generality and renewal decisions for a sample of Canadian nanotechnology patents. Our results s...

  2. A General Purpose Microcontroller Trainer | Talukder | African ...

    African Journals Online (AJOL)

    Although the method is discussed primarily for a communication network, it may be applied to any other type of network, such as transportation, water reticulation or any large scale process plants. (Af. J. of Science and Technology: 2002 3(1): 114-121). http://dx.doi.org/10.4314/ajst.v3i1.15296

  3. General-purpose radiological examination device

    Energy Technology Data Exchange (ETDEWEB)

    Slaby, J

    1978-03-15

    Equipment is described that is suitable for all radiological examinations using X-ray and neuroradiological diagnostic machines. The equipment consists of a gimbal suspension supporting a base plate and an imaging system, and a gantry on which a neurological seat is pivoted, capable of isocentrically positioning the patient's head.

  4. General purpose modeling languages for configuration

    DEFF Research Database (Denmark)

    Queva, Matthieu Stéphane Benoit

    In the later years, there has been an important need for companies to reduce their costs while proposing highly customized products. Indeed, today's customers demand products with lower prices, higher quality and faster delivery, but they also want products customized to match their unique needs....

  5. General-Purpose Monitoring during Speech Production

    Science.gov (United States)

    Ries, Stephanie; Janssen, Niels; Dufau, Stephane; Alario, F.-Xavier; Burle, Boris

    2011-01-01

    The concept of "monitoring" refers to our ability to control our actions on-line. Monitoring involved in speech production is often described in psycholinguistic models as an inherent part of the language system. We probed the specificity of speech monitoring in two psycholinguistic experiments where electroencephalographic activities were…

  6. Feed-forward general-purpose computer

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, H; Yoshioka, Y; Nakamura, T; Shigei, Y

    1983-08-01

    The feed-forward machine (FFM) proposed by the authors has a CPU composed of many fixed arithmetic units and registers. Features of the FFM that are compatible with concurrent operation and reduce the number of store instructions are reported. In order to evaluate the FFM, the minimum execution time of instructions is discussed using a Petri net model. From this it is predicted that the execution time will be 0.46-0.6 times the real execution time. Furthermore, it is concluded that programs for the FFM will be smaller than programs for von Neumann computers. 12 references.

  7. General Purpose Ground Forces: What Purpose?

    National Research Council Canada - National Science Library

    Challis, Dan

    1993-01-01

    "New World Order," a phrase uttered frequently by former President George Bush during and after the Persian Gulf War, no longer connotes the optimism of America's global view at the end of Desert Storm...

  8. Open-source implementation of an ad-hoc IEEE802.11a/g/p software-defined radio on low-power and low-cost general purpose processors

    Directory of Open Access Journals (Sweden)

    S. Ciccia

    2017-12-01

    This work proposes a low-cost and low-power software-defined radio open-source platform with IEEE 802.11 a/g/p wireless communication capability. A state-of-the-art version of the IEEE 802.11 a/g/p software for GNU Radio (a free and open-source software development framework) is available online, but we show here that its computational complexity prevents operation on low-power general-purpose processors, even at throughputs below the standard. We therefore propose an evolution of this software that achieves a faster and lighter IEEE 802.11 a/g/p transmitter and receiver, suitable for low-power general-purpose processors, for which GNU Radio provides very limited support; we discuss and describe the software radio processing structure that is necessary to achieve this goal, providing a review of signal processing techniques. In particular, we emphasize the advanced RISC machine (ARM) case study, for which we also optimize some of the processing libraries. The presented software will remain open-source.

  9. A platform independent framework for Statecharts code generation

    International Nuclear Information System (INIS)

    Andolfato, L.; Chiozzi, G.; Migliorini, N.; Morales, C.

    2012-01-01

    Control systems for telescopes and their instruments are reactive systems very well suited to being modelled using the Statecharts formalism. The World Wide Web Consortium is working on a new standard called SCXML that specifies an XML notation to describe Statecharts and provides a well-defined operational semantics for run-time interpretation of SCXML models. This paper presents a generic application framework for reactive non-real-time systems based on interpreted Statecharts. The framework consists of a model-to-text transformation tool and an SCXML interpreter. The tool generates from UML state machine models the SCXML representation of the state machines as well as the application skeletons for the supported software platforms. An abstraction layer propagates events from the middleware to the SCXML interpreter, facilitating support for different software platforms. This project benefits from the positive experience gained in several years of development of coordination and monitoring applications for the telescope control software domain using Model Driven Development technologies. (authors)
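    The run-time-interpretation idea, as opposed to generating state-machine code, can be illustrated with a small table-driven sketch that consumes queued events and executes transition actions. It is only a conceptual stand-in: a real SCXML interpreter additionally handles hierarchy, parallel regions, data models and executable content, and the states and events below are invented for illustration.

        # Illustrative sketch of interpreting a (flat) state machine model at run time.
        # A real SCXML interpreter also supports hierarchy, parallel states, data models, etc.
        from collections import deque

        # Model: {state: {event: (target_state, action)}} -- a tiny stand-in for an SCXML document.
        model = {
            "Idle":     {"start": ("Tracking", lambda: print("start tracking"))},
            "Tracking": {"stop":  ("Idle",     lambda: print("stop tracking")),
                         "fault": ("Error",    lambda: print("enter error handling"))},
            "Error":    {"reset": ("Idle",     lambda: print("recovered"))},
        }

        def run(model, initial, events):
            state = initial
            queue = deque(events)          # events delivered by a middleware abstraction layer
            while queue:
                event = queue.popleft()
                transition = model.get(state, {}).get(event)
                if transition is None:
                    print(f"ignored '{event}' in state {state}")
                    continue
                state, action = transition
                action()
            return state

        print("final state:", run(model, "Idle", ["start", "fault", "reset", "stop"]))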

  10. Towards quantification of butadiene content in styrene-butadiene block copolymers and their blends with general purpose polystyrene (GPPS) and the relation between mechanical properties and NMR relaxation times

    Energy Technology Data Exchange (ETDEWEB)

    Nestle, Nikolaus [BASF Aktiengesellschaft, GKP/P-G 201, D-67056 Ludwigshafen (Germany)], E-mail: nikolaus.nestle@basf.com; Heckmann, Walter; Steininger, Helmut; Knoll, Konrad [BASF Aktiengesellschaft, GKP/P-G 201, D-67056 Ludwigshafen (Germany)

    2007-11-26

    The properties of styrene-butadiene-styrene (SBS) block copolymers depend not only on the butadiene content and the degree of polymerisation but also on their chain architecture. In this contribution we present the results of a low-field time-domain (TD) NMR study in which the transverse relaxation behaviour of different SBS block copolymers was analysed and correlated with findings from mechanical testing on pure and blended materials and with transmission electron microscopy data, which provide information on the microphase separation. The results indicate that, while a straightforward determination of the butadiene content as in blended materials like ABS is not possible for these materials, the TD-NMR results correlate quite well with the mechanical performance of blends of SBS block copolymers with general-purpose polystyrene (GPPS), i.e. industrial-grade homopolymer polystyrene. Temperature-dependent experiments on pure and blended materials revealed a slight reduction in the softening temperature of the GPPS fraction in the blends.

  11. Performance Characterization of Multi-threaded Graph Processing Applications on Intel Many-Integrated-Core Architecture

    OpenAIRE

    Liu, Xu; Chen, Langshi; Firoz, Jesun S.; Qiu, Judy; Jiang, Lei

    2017-01-01

    Intel Xeon Phi many-integrated-core (MIC) architectures usher in a new era of terascale integration. Among emerging killer applications, parallel graph processing has been a critical technique for analyzing connected data. In this paper, we empirically evaluate various computing platforms, including an Intel Xeon E5 CPU, an Nvidia GeForce GTX1070 GPU and a Xeon Phi 7210 processor codenamed Knights Landing (KNL), in the domain of parallel graph processing. We show that the KNL gains encouraging per...

  12. Permission-based separation logic for multi-threaded Java programs

    NARCIS (Netherlands)

    Amighi, A.; Haack, Christian; Huisman, Marieke; Hurlin, C.

    This paper presents a program logic for reasoning about multithreaded Java-like programs with concurrency primitives such as dynamic thread creation, thread joining and reentrant object monitors. The logic is based on concurrent separation logic. It is the first detailed adaptation of concurrent

  13. Investigating multi-thread utilization as a software defence mechanism against side channel attacks

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-11-01

    ... out information at critical points in the cryptographic algorithm and confuse the attacker. This research demonstrates that the approach is capable of outperforming the known countermeasure of hiding and shuffling in terms of preventing the secret information from...

  14. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment is currently observing proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~1033 cm-2 s-1. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of ~200 Hz for an event size of ~1.5 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the results into different raw data files according to the trigger decision. The data files are subsequently moved to the central mass storage facility at CERN. The system currently in production has been commissioned in 2007 and has been working smoothly since then. It is however based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size that is foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limi...

  15. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment observes proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~ 10^33 cm^-2 s^-1 in 2011. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted average rate of ~ 400 Hz for an event size of ~1.2 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the data into different raw files according to the trigger decision. The system currently in production is based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limits the possibility of performing additional CPU-intensive tasks. Therefore, a novel design able to exploit the full power of multi-core architecture is needed. The main challen...

  16. SISSY: An example of a multi-threaded, networked, object-oriented databased application

    International Nuclear Information System (INIS)

    Scipioni, B.; Liu, D.; Song, T.

    1993-05-01

    The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.

  17. Generic accelerated sequence alignment in SeqAn using vectorization and multi-threading.

    Science.gov (United States)

    Rahn, René; Budach, Stefan; Costanza, Pascal; Ehrhardt, Marcel; Hancox, Jonny; Reinert, Knut

    2018-05-03

    Pairwise sequence alignment is undoubtedly a central tool in many bioinformatics analyses. In this paper, we present a generically accelerated module for pairwise sequence alignments applicable to a broad range of applications. In our module, we unified the standard dynamic programming kernel used for pairwise sequence alignments and extended it with a generalized inter-sequence vectorization layout, such that many alignments can be computed simultaneously by exploiting the SIMD (Single Instruction Multiple Data) instructions of modern processors. We then extended the module by adding two layers of thread-level parallelization, where we (a) distribute many independent alignments over multiple threads and (b) inherently parallelize a single alignment computation using a work-stealing approach producing a dynamic wavefront progressing along the minor diagonal. We evaluated our alignment vectorization and parallelization on different processors, including the newest Intel® Xeon® (Skylake) and Intel® Xeon Phi™ (KNL) processors, and use cases. The instruction set AVX512-BW (Byte and Word), available on Skylake processors, can genuinely improve the performance of vectorized alignments. We could run single alignments 1600 times faster on the Xeon Phi™ and 1400 times faster on the Xeon® than executing them with our previous sequential alignment module. The module is programmed in C++ using the SeqAn (Reinert et al., 2017) library and distributed with version 2.4 under the BSD license. We support SSE4, AVX2 and AVX512 instructions and included UME::SIMD, a SIMD-instruction wrapper library, to extend our module to further instruction sets. We thoroughly test all alignment components with all major C++ compilers on various platforms. rene.rahn@fu-berlin.de.
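    The first of the two parallelization layers described above, distributing many independent pairwise alignments over workers, can be sketched as follows. The sketch is written in Python rather than the module's C++/SeqAn code, and it uses processes instead of threads so that the pure-Python dynamic-programming kernel actually runs in parallel; the scoring parameters are arbitrary.

        # Illustrative sketch: distribute many independent global alignment computations
        # over a pool of workers. This is not the SeqAn implementation.
        from concurrent.futures import ProcessPoolExecutor

        MATCH, MISMATCH, GAP = 2, -1, -2

        def nw_score(pair):
            """Needleman-Wunsch global alignment score with linear gap costs."""
            a, b = pair
            prev = [j * GAP for j in range(len(b) + 1)]
            for i in range(1, len(a) + 1):
                curr = [i * GAP] + [0] * len(b)
                for j in range(1, len(b) + 1):
                    diag = prev[j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
                    curr[j] = max(diag, prev[j] + GAP, curr[j - 1] + GAP)
                prev = curr
            return prev[-1]

        if __name__ == "__main__":
            pairs = [("ACGTGA", "ACTTGA"), ("GATTACA", "GCATGCA"), ("AAAA", "AAAT")] * 100
            with ProcessPoolExecutor() as pool:            # one alignment per task
                scores = list(pool.map(nw_score, pairs, chunksize=25))
            print(scores[:3])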

  18. LUNA: Hard Real-Time, Multi-Threaded, CSP-Capable Execution Framework

    NARCIS (Netherlands)

    Bezemer, M.M.; Wilterdink, R.J.W.; Welch, Peter H.; Sampson, Adam T.; Pedersen, Jan B.; Kerridge, Jon M.; Broenink, Johannes F.; Barnes, Frederick R.M.

    Modern embedded systems have multiple cores available. The CTC++ library is not able to make use of these cores, so a new framework is required to control the robotic setups in our lab. This paper first looks into the available frameworks and compares them to the requirements for controlling the

  19. Qualitative and Quantitative Information Flow Analysis for Multi-threaded Programs

    NARCIS (Netherlands)

    Ngo, Minh Tri

    2014-01-01

    In today’s information-based society, guaranteeing information security plays an important role in all aspects of life: governments, military, companies, financial information systems, web-based services etc. With the existence of Internet, Google, and shared-information networks, it is easier than

  20. Qualitative and quantitative information flow analysis for multi-thread programs

    NARCIS (Netherlands)

    Ngo, Minh Tri

    2014-01-01

    In today's information-based society, guaranteeing information security plays an important role in all aspects of life: communication between citizens and governments, military, companies, financial information systems, web-based services etc. With the increasing popularity of computer systems with

  1. FODEM: A Multi-Threaded Research and Development Method for Educational Technology

    Science.gov (United States)

    Suhonen, Jarkko; de Villiers, M. Ruth; Sutinen, Erkki

    2012-01-01

    Formative development method (FODEM) is a multithreaded design approach that was originated to support the design and development of various types of educational technology innovations, such as learning tools, and online study programmes. The threaded and agile structure of the approach provides flexibility to the design process. Intensive…

  2. Towards Fast Reverse Time Migration Kernels using Multi-threaded Wavefront Diamond Tiling

    KAUST Repository

    Malas, T.; Hager, G.; Ltaief, Hatem; Keyes, David E.

    2015-01-01

    Today’s high-end multicore systems are characterized by a deep memory hierarchy, i.e., several levels of local and shared caches, with limited size and bandwidth per core. The ever-increasing gap between the processor and memory speed will further

  3. Hardware Support for Fine-Grain Multi-Threading in LEON3

    Czech Academy of Sciences Publication Activity Database

    Daněk, Martin; Kafka, Leoš; Kohout, Lukáš; Sýkora, Jaroslav

    2011-01-01

    Roč. 4, č. 1 (2011), s. 27-34 ISSN 1844-9689 R&D Projects: GA MŠk 7E08013 Grant - others:European Commission(BE) FP7-ICT-215216 Keywords : multithreading * microthreading * SPARC * microarchitecture * FPGA Subject RIV: JC - Computer Hardware ; Software http://library.utia.cas.cz/separaty/2011/ZS/danek-0380861.pdf

  4. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high-performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance differences between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  5. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high-performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance differences between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
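    The role of the accumulator in row-wise sparse matrix-matrix multiplication can be illustrated with the sketch below, in which a hash map collects the contributions to each output row; dense arrays and sorted lists are alternative accumulator choices of the kind compared in these papers. The sketch is purely illustrative and is not the kkSpGEMM code.

        # Illustrative row-wise SpGEMM (C = A * B) on sparse row inputs, using a hash-map
        # accumulator per output row. Not the kkSpGEMM implementation.
        def spgemm(a_rows, b_rows):
            """a_rows, b_rows: list of rows, each row a dict {column: value}."""
            c_rows = []
            for a_row in a_rows:
                acc = {}                                  # hash-map accumulator for one C row
                for k, a_val in a_row.items():            # for every nonzero A(i, k)
                    for j, b_val in b_rows[k].items():    # scale row k of B and accumulate
                        acc[j] = acc.get(j, 0.0) + a_val * b_val
                c_rows.append(acc)
            return c_rows

        A = [{0: 1.0, 2: 2.0}, {1: 3.0}]                  # 2x3 sparse matrix
        B = [{0: 1.0}, {0: -1.0, 1: 4.0}, {1: 5.0}]       # 3x2 sparse matrix
        print(spgemm(A, B))                               # [{0: 1.0, 1: 10.0}, {0: -3.0, 1: 12.0}]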

  6. Shadow-Bitcoin: Scalable Simulation via Direct Execution of Multi-Threaded Applications

    Science.gov (United States)

    2015-08-10

    precisely model the real network. Providing initial blockchain state. Each node in the Bitcoin network typically maintains its own copy of the entire... blockchain . In our model network, we begin with all the nodes “in sync” to some prior blockchain state. To reduce the storage cost, we allow the

  7. System, methods and apparatus for program optimization for multi-threaded processor architectures

    Science.gov (United States)

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus.

  8. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware-based processor to ensure accuracy of the output.
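    The verification scheme can be pictured, in a simplified way, as two independent instances of the same computation receiving the same load and having their outputs compared. The claimed mechanism operates in GPU hardware; the Python sketch below only mirrors the idea in software threads, with an invented stand-in workload.

        # Illustrative sketch of redundant execution with output verification: two instances
        # of the same computation receive the same load and their results are compared.
        # The patented mechanism works in GPU hardware; this only mirrors the idea.
        import threading

        def computation(load):
            return sum(x * x for x in load)               # stand-in for the protected kernel

        def redundant_run(load):
            results = [None, None]

            def instance(slot):
                results[slot] = computation(load)         # each instance processes the same load

            threads = [threading.Thread(target=instance, args=(s,)) for s in (0, 1)]
            for t in threads: t.start()
            for t in threads: t.join()
            if results[0] != results[1]:
                raise RuntimeError("redundant instances disagree: possible transient fault")
            return results[0]

        print(redundant_run(range(10_000)))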

  9. Detectability of T1a lung cancer on digital chest radiographs: an observer-performance comparison among 2-megapixel general-purpose, 2-megapixel medical-purpose, and 3-megapixel medical-purpose liquid-crystal display (LCD) monitors.

    Science.gov (United States)

    Yabuuchi, Hidetake; Matsuo, Yoshio; Kamitani, Takeshi; Jinnnouchi, Mikako; Yonezawa, Masato; Yamasaki, Yuzo; Nagao, Michinobu; Kawanami, Satoshi; Okamoto, Tatsuro; Sasaki, Masayuki; Honda, Hiroshi

    2015-08-01

    There has been no comparison of the detectability of small lung cancers between general-purpose and medical LCD monitors, nor between solid and part-solid nodules. To compare the detectability of T1a lung cancer on chest radiographs displayed on three LCD monitor types: 2-megapixel (MP) general-purpose (General), 2-MP medical-purpose (Medical), and 3-MP medical-purpose. Radiographs from forty patients with T1aN0M0 primary lung cancer (27 solid nodules, 13 part-solid nodules) and 60 patients with no abnormalities on both chest X-ray and computed tomography (CT) were consecutively collected. Five readers assessed 100 cases on each monitor. The observations were analyzed using receiver operating characteristic (ROC) analysis. A jackknife method was used for statistical analysis. The average AUCs for detection of all nodules using the 2-MP-General, 2-MP-Medical, and 3-MP-Medical LCD monitors were 0.86, 0.89, and 0.89, respectively; there were no significant differences among them. The average AUCs for part-solid nodule detection using the 2-MP-General, 2-MP-Medical, and 3-MP-Medical LCD monitors were 0.77, 0.86, and 0.89, respectively. There were significant differences between the 2-MP-General and 2-MP-Medical LCD monitors (P = 0.043) and between the 2-MP-General and 3-MP-Medical LCD monitors (P = 0.027). There was no significant difference between the 2-MP-Medical and 3-MP-Medical LCD monitors. The average AUCs for solid nodule detection using the 2-MP-General, 2-MP-Medical, and 3-MP-Medical LCD monitors were 0.90, 0.90, and 0.88, respectively; there were no significant differences among them. The mean AUC values for all and part-solid nodules for the low-experienced readers were significantly lower than those for the high-experienced readers with the 2-MP-General color LCD monitor. Detectability of part-solid nodules using the 2-MP general-purpose LCD monitor was significantly lower than with the medical-purpose LCD monitors. © The Foundation Acta Radiologica 2014.

  10. Performance Analysis of MTD64, our Tiny Multi-Threaded DNS64 Server Implementation: Proof of Concept

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-07-01

    In this paper, the performance of MTD64 is measured and compared to that of the industry-standard BIND in order to check the correctness of the design concepts of MTD64, especially the decision to use a new thread for each request. For the performance measurements, our earlier proposed dns64perf program is enhanced as dns64perf2, which is also documented in this paper. We found that MTD64 seriously outperformed BIND, and hence our design principles may be useful for the design of a high-performance production-class DNS64 server. As an additional test, we have also examined the effect of dynamic CPU frequency scaling on the performance of the implementations.
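    The thread-per-request design principle whose performance MTD64 demonstrates can be illustrated with Python's standard library, which spawns a new handler thread for every incoming UDP request. This is only a conceptual sketch of the design choice, not MTD64's code and not a DNS64 implementation; the port number and reply payload are arbitrary.

        # Illustrative sketch of a thread-per-request UDP server, the design principle whose
        # performance MTD64 demonstrates. This is not a DNS64 implementation.
        import socketserver
        import threading

        class EchoHandler(socketserver.BaseRequestHandler):
            def handle(self):
                data, sock = self.request                      # UDP request: (payload, socket)
                reply = b"handled by " + threading.current_thread().name.encode()
                sock.sendto(reply, self.client_address)

        class ThreadPerRequestServer(socketserver.ThreadingMixIn, socketserver.UDPServer):
            daemon_threads = True                              # one short-lived thread per request

        if __name__ == "__main__":
            with ThreadPerRequestServer(("127.0.0.1", 5353), EchoHandler) as server:
                server.serve_forever()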

  11. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2003-01-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, either using Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization; and finally conclude with some future perspectives. Parallel applications are mandatory for fine mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMPs cluster potential with a mixed mode parallelism. Mixed mode parallelism can be achieved by combining message passing interface between clusters with OpenMP implicit parallelism within a cluster
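    The domain-decomposition parallelism used for the depletion module can be pictured at a high level: the geometry is split into regions whose updates over a time step are independent, so they can be distributed to workers and gathered afterwards. The sketch below only mirrors that structure with a hypothetical per-region update and Python worker processes; the actual solver uses MPI (and OpenMP inside MINOS).

        # Illustrative domain-decomposition sketch: independent per-region updates (a stand-in
        # for isotopic depletion over one time step) are distributed over worker processes and
        # gathered afterwards. The real solver uses MPI; this only mirrors the structure.
        from multiprocessing import Pool

        def deplete_region(region):
            """Hypothetical per-region update: decay each nuclide density by its removal rate."""
            name, densities, removal_rates, dt = region
            new_densities = [n * (1.0 - r * dt) for n, r in zip(densities, removal_rates)]
            return name, new_densities

        if __name__ == "__main__":
            regions = [
                ("core",      [1.0e20, 5.0e19], [1e-4, 2e-4], 10.0),
                ("reflector", [8.0e19, 1.0e19], [5e-5, 1e-4], 10.0),
                ("blanket",   [2.0e20, 3.0e19], [2e-4, 3e-4], 10.0),
            ]
            with Pool() as pool:                          # one subdomain per task
                updated = dict(pool.map(deplete_region, regions))
            print(updated["core"])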

  12. Co Modeling and Co Synthesis of Safety Critical Multi threaded Embedded Software for Multi Core Embedded Platforms

    Science.gov (United States)

    2017-03-20

    Kaiserslautern, Germany; Sandeep Shukla, FERMAT Lab, Electrical and Computer Engineering Department, Virginia Tech, 900 North Glebe Road... Keywords: Software Engineering, Software Producibility, Component-based software design, behavioral types, behavioral type inference, Polychronous model of... near future, many embedded applications, including safety-critical ones as used in avionics, automotive and mission control systems, will run on

  13. 64k networked multi-threaded processors and their real-time application in high energy physics

    CERN Document Server

    Schneider, R; Gutfleisch, M; Gareus, R; Lesser, F; Lindenstruth, V; Reichling, C; Torralba, G

    2002-01-01

    Particle physics experiments create large data streams at high rates ranging from kHz to MHz. In a single event the number of created particles can easily exceed 20,000. The architecture of high-resolution tracking detectors does not allow an event data stream exceeding 10 TByte/s to be handled in full. Since only some rare scenarios are interesting, a selection process increases the efficiency by identifying relevant events, which are then processed afterwards. This trigger has to be fast enough to avoid loss of data. In the case of the ALICE experiment at CERN, the trigger is created by analyzing data from the transition radiation detector, where about 16,000 charged particles cross six independent layers. Nearly 1.2 million analog data channels are digitized at 10 MHz by 10-bit ADCs within 2 mu s. On this data stream of 13 TByte/s a trigger decision has to be made within 6 mu s. (5 refs).

  14. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Direction de l' Energie Nucleaire, Service d' Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)

    2003-07-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, either using Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization; and finally conclude with some future perspectives. Parallel applications are mandatory for fine mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMPs cluster potential with a mixed mode parallelism. Mixed mode parallelism can be achieved by combining message passing interface between clusters with OpenMP implicit parallelism within a cluster.

  15. 21 CFR 880.6890 - General purpose disinfectants.

    Science.gov (United States)

    2010-04-01

    ... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use... disinfectant is a germicide intended to process noncritical medical devices and equipment surfaces. A general... prior to terminal sterilization or high level disinfection. Noncritical medical devices make only...

  16. An FPGA- Based General-Purpose Data Acquisition Controller

    Science.gov (United States)

    Robson, C. C. W.; Bousselham, A.; Bohm

    2006-08-01

    System development in advanced FPGAs allows considerable flexibility, both during development and in production use. A mixed firmware/software solution allows the developer to choose what shall be done in firmware or software, and to make that decision late in the process. However, this flexibility comes at the cost of increased complexity. We have designed a modular development framework to help overcome these issues of increased complexity. This framework comprises a generic controller that can be adapted for different systems by simply changing the software or firmware parts. The controller can use both soft and hard processors, with or without an RTOS, based on the demands of the system to be developed. The resulting system uses the Internet for both control and data acquisition. In our studies we developed the embedded system in a Xilinx Virtex-II Pro FPGA, where we used both PowerPC and MicroBlaze cores, with HTTP, Java, and LabVIEW for control and communication, together with the MicroC/OS-II and OSE operating systems.

  17. General Purpose Data-Driven Monitoring for Space Operations

    Science.gov (United States)

    Iverson, David L.; Martin, Rodney A.; Schwabacher, Mark A.; Spirkovska, Liljana; Taylor, William McCaa; Castle, Joseph P.; Mackey, Ryan M.

    2009-01-01

    As modern space propulsion and exploration systems improve in capability and efficiency, their designs are becoming increasingly sophisticated and complex. Determining the health state of these systems, using traditional parameter limit checking, model-based, or rule-based methods, is becoming more difficult as the number of sensors and component interactions grow. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults or failures. The Inductive Monitoring System (IMS) is a data-driven system health monitoring software tool that has been successfully applied to several aerospace applications. IMS uses a data mining technique called clustering to analyze archived system data and characterize normal interactions between parameters. The scope of IMS based data-driven monitoring applications continues to expand with current development activities. Successful IMS deployment in the International Space Station (ISS) flight control room to monitor ISS attitude control systems has led to applications in other ISS flight control disciplines, such as thermal control. It has also generated interest in data-driven monitoring capability for Constellation, NASA's program to replace the Space Shuttle with new launch vehicles and spacecraft capable of returning astronauts to the moon, and then on to Mars. Several projects are currently underway to evaluate and mature the IMS technology and complementary tools for use in the Constellation program. These include an experiment on board the Air Force TacSat-3 satellite, and ground systems monitoring for NASA's Ares I-X and Ares I launch vehicles. The TacSat-3 Vehicle System Management (TVSM) project is a software experiment to integrate fault and anomaly detection algorithms and diagnosis tools with executive and adaptive planning functions contained in the flight software on-board the Air Force Research Laboratory TacSat-3 satellite. The TVSM software package will be uploaded after launch to monitor spacecraft subsystems such as power and guidance, navigation, and control (GN&C). It will analyze data in real-time to demonstrate detection of faults and unusual conditions, diagnose problems, and react to threats to spacecraft health and mission goals. The experiment will demonstrate the feasibility and effectiveness of integrated system health management (ISHM) technologies with both ground and on-board experiments.
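
    The clustering-based "nominal characterization" idea can be sketched generically; this is not the actual IMS algorithm, and the data, cluster count, and threshold below are illustrative only:

        # Characterize nominal behaviour with k-means, then flag samples far from every cluster.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        nominal = rng.normal(0.0, 1.0, size=(5000, 4))        # archived nominal sensor vectors

        model = KMeans(n_clusters=8, n_init=10, random_state=0).fit(nominal)

        # Threshold: a high percentile of nominal distances to the nearest centroid.
        nominal_dist = np.min(model.transform(nominal), axis=1)
        threshold = np.percentile(nominal_dist, 99.5)

        def is_anomalous(sample):
            """True if the sample lies farther from every cluster centre than the threshold."""
            return np.min(model.transform(sample.reshape(1, -1))) > threshold

        print(is_anomalous(np.array([0.1, -0.2, 0.3, 0.0])))   # nominal-looking -> likely False
        print(is_anomalous(np.array([8.0, 8.0, 8.0, 8.0])))    # far from nominal -> likely True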

  18. General-purpose isiZulu speech synthesiser

    CSIR Research Space (South Africa)

    Louw, A

    2005-07-01

    Full Text Available listener simply commented that “this speaker comes from a different region”. We therefore believe that we can improve the quality of synthesis substantially by explicitly aiming for monotone recordings. Of course, the eventual aim is to produce “natural... an explicit duration model, but weigh the syllable position heavily in the calculation of the target costs during synthesis. Again, listeners find this to be an acceptable compromise. Development of appropriate target-cost function To select...

  19. Managing RFID Sensors Networks with a General Purpose RFID Middleware

    Science.gov (United States)

    Abad, Ismael; Cerrada, Carlos; Cerrada, Jose A.; Heradio, Rubén; Valero, Enrique

    2012-01-01

    RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from Supervisory Control and Data Acquisition (SCADA) systems into the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc…) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID Reader networks and to describe the relationship between them (concentrator, peer to peer, master/submaster). PMID:22969370
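
    The MARC idea of hiding heterogeneous readers behind a minimal uniform command set can be illustrated with a small sketch; the class and method names below are hypothetical and do not reflect the actual DEPCAS API:

        # A homogeneous acquisition network built from heterogeneous reader drivers.
        from abc import ABC, abstractmethod

        class RFIDReader(ABC):
            """Minimum access interface every concrete reader driver must provide."""

            @abstractmethod
            def read_tags(self) -> list:
                ...

            @abstractmethod
            def status(self) -> str:
                ...

        class EdgeReader(RFIDReader):
            def read_tags(self) -> list:
                return ["EPC-0001"]        # would query the physical edge reader here

            def status(self) -> str:
                return "online"

        class PLCReader(RFIDReader):
            def read_tags(self) -> list:
                return ["EPC-0002"]        # would poll the PLC-attached antenna here

            def status(self) -> str:
                return "online"

        network = [EdgeReader(), PLCReader()]   # the middleware only sees the uniform interface
        for reader in network:
            print(reader.status(), reader.read_tags())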

  20. MONK - a general purpose Monte Carlo neutronics program

    International Nuclear Information System (INIS)

    Sherriffs, V.S.W.

    1978-01-01

    MONK is a Monte Carlo neutronics code written principally for criticality calculations relevant to the transport, storage, and processing of fissile material. The code exploits the ability of the Monte Carlo method to represent complex shapes with very great accuracy. The nuclear data used is derived from the UK Nuclear Data File processed to the required format by a subsidiary program POND. A general description is given of the MONK code together with the subsidiary program SCAN which produces diagrams of the system specified. Details of the data input required by MONK and SCAN are also given. (author)

  1. Operating parameters for a general purpose computerized tomography system

    International Nuclear Information System (INIS)

    Walmsley, B.J.

    1976-01-01

    The diagnostic possibilities of the whole-body scanner of EMI are briefly mentioned. Picture quality, versatility in the computer controlled contrast selection as well as time saving in individual investigations are particularly pointed out. Besides an improved diagnostics, the apparatus can also lead to a considerable saving of costs when used appropriately. (ORU/LH) [de

  2. Probabilistic structural analysis using a general purpose finite element program

    Science.gov (United States)

    Riha, D. S.; Millwater, H. R.; Thacker, B. H.

    1992-07-01

    This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
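
    The kind of Monte Carlo cross-check used for verification can be sketched for a textbook case; this is not MSC/NASTRAN or fast probability integration, and the beam properties and load distribution below are purely illustrative:

        # Monte Carlo estimate of the probability that a cantilever's tip deflection
        # (delta = P*L**3 / (3*E*I)) exceeds an allowable limit under a random tip load P.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000
        L, E, I = 2.0, 2.1e11, 8.0e-6                 # length [m], modulus [Pa], inertia [m^4]
        P = rng.normal(10_000.0, 1_500.0, size=n)     # random tip load [N]

        delta = P * L**3 / (3.0 * E * I)              # tip deflection per sample [m]
        limit = 0.018                                 # allowable deflection [m]
        print("P(deflection > limit) ~=", np.mean(delta > limit))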

  3. Owl: A General-Purpose Numerical Library in OCaml

    OpenAIRE

    Wang, Liang

    2017-01-01

    Owl is a new numerical library developed in the OCaml language. It focuses on providing a comprehensive set of high-level numerical functions so that developers can quickly build up data analytical applications. In this abstract, we will present Owl's design, core components, and its key functionality.

  4. SUPER CAVIAR: Memory mapping the general-purpose microcomputer

    International Nuclear Information System (INIS)

    Cittolin, S.; Taylor, B.G.

    1981-01-01

    Over the past 3 years, CAVIAR (CAMAC Video Autonomous Read-out) microcomputers have been applied in growing numbers at CERN and related institutes. As typical user programs expanded in size, and the incorporated firmware libraries were enlarged also, the microprocessor addressing limit of 64 Kbytes became a serious constraint. An enhanced microcomputer, SUPER CAVIAR, has now been created by the incorporation of memory mapping to expand the physical address space to 344 Kbytes. The new facility provides independent firmware and RAM maps, dynamic allocation of common RAM, automatic inter-page transfer modes, and a RAM/EPROM overlay. A memory-based file system has been implemented, and control and data can be interchanged between separate programs in different RAM maps. 84 Kbytes of EPROM are incorporated on the mapper card itself, as well as an ADLC serial data link. In addition to providing more space for consolidated user programs and data, SUPER CAVIAR has allowed the introduction of several improvements to the BAMBI interpreter and extensions to the CAVIAR libraries. A context editor and enhanced debug monitor have been added, as well as new data types and extended array-handling and graphics routines, including isoline plotting, line-fitting and FFT operations. A SUPER CAVIAR converter has been developed which allows a standard CAVIAR to be upgraded to incorporate the new facilities without loss of the existing investment. (orig.)

  5. On the Equivalence of Tank Trucks and General Purpose Trucks ...

    African Journals Online (AJOL)

    ... which is the subdividing of the overall market into homogeneous subsets of customers where any subset(s) may be selected as a target market to be reached with a distinct marketing mix. But the fact that different segments can be identified with particular kinds of vehicles or type of products, suggests that there are natural ...

  6. 22 CFR 211.1 - General purpose and scope; legislation.

    Science.gov (United States)

    2010-04-01

    ... attempt to alleviate the causes of hunger, mortality and morbidity; promote economic and community... agencies of the United Nations and the World Food Program. The Operational Plan submitted by a cooperating...

  7. General Purpose Data-Driven System Monitoring for Space Operations

    Data.gov (United States)

    National Aeronautics and Space Administration — Modern space propulsion and exploration system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using...

  8. Applications for a general purpose optical beam propagation code

    International Nuclear Information System (INIS)

    Munroe, J.L.; Wallace, N.W.

    1987-01-01

    Real world beam propagation and diffraction problems can rarely be solved by the analytical expressions commonly found in optics and lasers textbooks. These equations are typically valid only for paraxial geometries, for specific boundary conditions (e.g., infinite apertures), or for special assumptions (e.g., at focus). Numerical techniques must be used to solve the equations for the general case. LOTS, a public domain numerical beam propagation software package developed for this purpose, is a widely used and proven tool. The graphical presentation of results combined with a well-designed command language makes LOTS particularly user-friendly, and the recent implementation of LOTS on the IBM PC/XT family of desktop computers will make this capability available to a much larger group of users. This paper surveys several applications demonstrating the need for such a capability.

  9. A general purpose tomographic program with combined inversions

    International Nuclear Information System (INIS)

    Xu Wenbin; Dong Jiafu; Li Fanzhu

    1996-01-01

    A general tomographic program has been developed by combining the Bessel expansion with the Zernike expansion. It is useful for studying the magnetic island structure of the tearing mode and for reconstructing the density profiles of impurities in tokamak plasmas. This combined method has the advantages of both expansions, i.e. there are no spurious images at the edge and the inversion precision is high in the center of the plasma.

  10. WORM: A general-purpose input deck specification language

    International Nuclear Information System (INIS)

    Jones, T.

    1999-01-01

    Using computer codes to perform criticality safety calculations has become common practice in the industry. The vast majority of these codes use simple text-based input decks to represent the geometry, materials, and other parameters that describe the problem. However, the data specified in input files are usually processed results themselves. For example, input decks tend to require the geometry specification in linear dimensions and materials in atom or weight fractions, while the parameter of interest might be mass or concentration. The calculations needed to convert from the item of interest to the required parameter in the input deck are usually performed separately and then incorporated into the input deck. This process of calculating, editing, and renaming files to perform a simple parameter study is tedious at best. In addition, most computer codes require dimensions to be specified in centimeters, while drawings or other materials used to create the input decks might be in other units. This also requires additional calculation or conversion prior to composition of the input deck. These additional calculations, while extremely simple, introduce a source of error in both the calculations and transcriptions. To overcome these difficulties, WORM (Write One, Run Many) was created. It is an easy-to-use programming language to describe input decks and can be used with any computer code that uses standard text files for input. WORM is available, via the Internet, at worm.lanl.gov. A user's guide, tutorials, example models, and other WORM-related materials are also available at this Web site. Questions regarding WORM should be directed to worm@lanl.gov.
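
    The "write one, run many" idea can be sketched in a few lines; the deck syntax below is invented for illustration and is not WORM's actual language or any real code's input format:

        # Derive the quantity a deck needs (radius in cm) from the parameter of
        # interest (fissile mass), then render one deck per parameter value.
        import math

        TEMPLATE = (
            "title  sphere, mass = {mass_kg} kg\n"
            "sphere radius {radius_cm:.4f} cm\n"
            "material u235 density {density:.2f} g/cc\n"
        )

        def radius_cm_from_mass(mass_kg, density_g_cc):
            """Convert a sphere mass to the radius (cm) the deck actually requires."""
            volume_cc = mass_kg * 1000.0 / density_g_cc
            return (3.0 * volume_cc / (4.0 * math.pi)) ** (1.0 / 3.0)

        density = 18.7                                # g/cc, illustrative
        for mass_kg in (20.0, 30.0, 40.0):            # the parameter study
            deck = TEMPLATE.format(mass_kg=mass_kg,
                                   radius_cm=radius_cm_from_mass(mass_kg, density),
                                   density=density)
            with open(f"sphere_{int(mass_kg)}kg.inp", "w") as f:
                f.write(deck)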

  11. Experiments with general purpose visualization software on a unix workstation

    Energy Technology Data Exchange (ETDEWEB)

    Adam, G

    1995-10-01

    A study was performed on the opportunity of buying, for ICTP use, one of the following visualization systems: Advanced Visualization Systems (AVS), release 5.02; IRIS Explorer, release 2.2, from NAG; IBM Data Explorer (DX), release 2.1.5; Khoros, Developer's Release 2.0+p2. Criteria for an optimal choice were defined and it was concluded that none of these visualization systems would be a good compromise today. Conservative consideration of the market opportunities shows that substantially improved releases of these systems are expected to be operational within at most a year. For the short term, the benefit-to-burden ratio still makes public-domain low-end graphics attractive. (author).

  12. 7 CFR 249.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS SENIOR FARMERS' MARKET NUTRITION PROGRAM (SFMNP) General § 249.1 General.... 2011, et seq.), and to any other Federal or State food or nutrition assistance program under which...

  13. RoboCon: A general purpose telerobotic control center

    International Nuclear Information System (INIS)

    Draper, J.V.; Noakes, M.W.; Blair, L.M.

    1997-01-01

    This report describes human factors issues involved in the design of RoboCon, a multi-purpose control center for use in US Department of Energy remote handling applications. RoboCon is intended to be a flexible, modular control center capable of supporting a wide variety of robotic devices

  14. RoboCon: A general purpose telerobotic control center

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.; Noakes, M.W. [Oak Ridge National Lab., TN (United States). Robotics and Process Systems Div.; Schempf, H. [Carnegie Mellon Univ., Pittsburgh, PA (United States); Blair, L.M. [Human Machine Interfaces, Inc., Knoxville, TN (United States)

    1997-02-01

    This report describes human factors issues involved in the design of RoboCon, a multi-purpose control center for use in US Department of Energy remote handling applications. RoboCon is intended to be a flexible, modular control center capable of supporting a wide variety of robotic devices.

  15. Coursebook Development and Evaluation for English for General Purposes Course

    Science.gov (United States)

    Zohrabi, Mohammad

    2011-01-01

    Writing a coursebook is a demanding task, and more important than writing it is how to evaluate it in order to pinpoint its weaknesses and improve them. If we yearn to produce a quality and useful coursebook, we need to consider how to develop and evaluate it. The study reported in this article describes the process in which the researcher developed…

  16. Large General Purpose Frame for Studying Force Vectors

    Science.gov (United States)

    Heid, Christy; Rampolla, Donald

    2011-01-01

    Many illustrations and problems on the vector nature of forces have weights and forces in a vertical plane. One of the common devices for studying the vector nature of forces is a horizontal "force table," in which forces are produced by weights hanging vertically and transmitted to cords in a horizontal plane. Because some students have…

  17. GPUs: An Emerging Platform for General-Purpose Computation

    Science.gov (United States)

    2007-08-01


  18. General Purpose Probabilistic Programming Platform with Effective Stochastic Inference

    Science.gov (United States)

    2018-04-01

    Figure captions: "The problem of inferring curves from data while simultaneously choosing the ... (bottom path) as the inverse problem to computer graphics (top path)"; "An illustration of generative probabilistic graphics for 3D ...". Building these systems involves simultaneously developing mathematical models, inference algorithms and optimized software implementations. Small changes

  19. 7 CFR 240.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF... school year the Department programs agricultural commodities and other foods to States for delivery to... changes in the Price Index for Food Used in Schools and Institutions. Section 6(e)(1) further requires...

  20. Evaluation of the Joint Service General Purpose Mask, XM50

    Science.gov (United States)

    2005-07-01


  1. standalone general purpose data logger design and implementation

    African Journals Online (AJOL)

    eobe

    volatile EEPROM data memory, four AT24C256 2-wire serial EEPROM chips were used for data storage. wire serial ... analog electrical signal that relays information about .... in-code, ADC, Serial Communication and short term memory ...

  2. Experiments with general purpose visualization software on a unix workstation

    International Nuclear Information System (INIS)

    Adam, G.

    1995-10-01

    A study was performed on the opportunity of buying, for ICTP use, one of the following visualization systems: Advanced Visualization Systems (AVS), release 5.02; IRIS Explorer, release 2.2, from NAG; IBM Data Explorer (DX), release 2.1.5; Khoros, Developer's Release 2.0+p2. Criteria for an optimal choice were defined and it was concluded that none of these visualization systems would be a good compromise today. Conservative consideration of the market opportunities shows that substantially improved releases of these systems are expected to be operational within at most a year. For the short term, the benefit-to-burden ratio still makes public-domain low-end graphics attractive. (author)

  3. Online tracking applications of the general purpose EDRO Board

    CERN Document Server

    Annovi, A; The ATLAS collaboration; Cervini, F; Crescioli, F; Fabbri, L; Franchini, M; Giannetti, P; Giannuzzi, F; Giorgi, F; Magalotti, D; Piendibene, M; Sbarra, C; Valentinetti, S; Mauro, V; Zoccoli, A

    2012-01-01

    The capability to perform extremely fast track reconstruction online is becoming more and more important for the LHC upgrade as well as for the next generation of HEP experiments, where the expected instantaneous luminosities (in excess of 10^34 /cm2/s) and the very low signal/background ratio call for fast and clean identification of the main characteristics of interesting events. The Slim5 R&D project studied different aspects of fast and high-precision tracking in dedicated hardware: data-push silicon sensors, high-bandwidth DAQ systems and Associative Memories (AM) for fast track identification. The central element of the development system is a high-traffic board, called EDRO, capable of collecting and processing digital data with an input rate of 16 Gbps. The input hits, suitably formatted or clusterized, are sent to an AM board, which sends back candidate tracks identified at a rate of 40 MHz. The EDRO board is then able to deliver triggers and formatted events for further processing. The EDRO-AM ...

  4. 78 FR 7718 - Review of the General Purpose Costing System

    Science.gov (United States)

    2013-02-04

    ...: Acting on railroad requests for authority to engage in Board-regulated financial transactions such as... directly regulates those entities. In other words, the impact must be a direct impact on small entities... analysis of effects on entities that it does not regulate. United Dist. Cos. v. FERC, 88 F.3d 1105, 1170...

  5. 7 CFR 271.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... agricultural economy, as well as result in more orderly marketing and distribution of foods. To alleviate such... households to obtain a more nutritious diet through normal channels of trade by increasing food purchasing...

  6. DYNSYL: a general-purpose dynamic simulator for chemical processes

    International Nuclear Information System (INIS)

    Patterson, G.K.; Rozsa, R.B.

    1978-01-01

    Lawrence Livermore Laboratory is conducting a safeguards program for the Nuclear Regulatory Commission. The goal of the Material Control Project of this program is to evaluate material control and accounting (MCA) methods in plants that handle special nuclear material (SNM). To this end we designed and implemented the dynamic chemical plant simulation program DYNSYL. This program can be used to generate process data or to provide estimates of process performance; it simulates both steady-state and dynamic behavior. The MCA methods that may have to be evaluated range from sophisticated on-line material trackers such as Kalman filter estimators, to relatively simple material balance procedures. This report describes the overall structure of DYNSYL and includes some example problems. The code is still in the experimental stage and revision is continuing

  7. A General Purpose Digital System for Field Vibration Testing

    DEFF Research Database (Denmark)

    Brincker, Rune; Larsen, Jesper Abildgaard; Ventura, Carlos

    2007-01-01

    This paper describes the development and concept implementation of a highly sensitive digital recording system for seismic applications and vibration measurements on large Civil Engineering structures. The system is based on highly sensitive motion transducers that have been used by seismologists...

  8. General Purpose Segmentation for Microorganisms in Microscopy Images

    DEFF Research Database (Denmark)

    Jensen, Sebastian H. Nesgaard; Moeslund, Thomas B.; Rankl, Christian

    2014-01-01

    In this paper, we propose an approach for achieving generalized segmentation of microorganisms in microscopy images. It employs a pixel-wise classification strategy based on local features. Multilayer perceptrons are utilized for classification of the local features and are trained for each sp...

  9. Managing RFID Sensors Networks with a General Purpose RFID Middleware

    Directory of Open Access Journals (Sweden)

    Enrique Valero

    2012-06-01

    Full Text Available RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from Supervisory Control and Data Acquisition (SCADA) systems into the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc…) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID Reader networks and to describe the relationship between them (concentrator, peer to peer, master/submaster).

  10. An embedded domain specific language for general purpose vectorization

    CERN Document Server

    Karpinski, Przemyslaw

    2017-01-01

    Portable SIMD code generation is an open problem in modern High Performance Computing systems. Performance portability can already be achieved; however, it might fail when user-framework interaction is required. Of all portable vectorization techniques, explicit vectorization using wrapper-class libraries is proven to achieve the fastest performance; however, it does not exploit optimization opportunities outside the simplest algebraic primitives. A more advanced language is therefore required, but the design of a new independent language is not feasible due to its high cost. This work describes an Embedded Domain Specific Language (EDSL) for solving generalized 1-D vectorization problems. The language is implemented using C++ as a host language and published as a lightweight library. By decoupling expression creation from evaluation, a wider range of problems can be solved without sacrificing runtime efficiency. In this paper we discuss design patterns necessary, but not limited, to efficient EDSL implementatio...
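
    The published EDSL is a C++ wrapper-class library; the Python toy below only mirrors the "build the expression now, evaluate it later" pattern that decouples expression creation from evaluation:

        # Expression objects record the computation; evaluate() runs it in one pass.
        import numpy as np

        class Expr:
            def __add__(self, other): return BinOp(np.add, self, wrap(other))
            def __mul__(self, other): return BinOp(np.multiply, self, wrap(other))

        class Vec(Expr):
            def __init__(self, data): self.data = np.asarray(data, dtype=float)
            def evaluate(self):       return self.data

        class BinOp(Expr):
            def __init__(self, op, lhs, rhs): self.op, self.lhs, self.rhs = op, lhs, rhs
            def evaluate(self):               return self.op(self.lhs.evaluate(), self.rhs.evaluate())

        def wrap(x):
            # Broadcast plain scalars into vectors (toy: fixed length 3).
            return x if isinstance(x, Expr) else Vec(np.full(3, x))

        a, b = Vec([1.0, 2.0, 3.0]), Vec([4.0, 5.0, 6.0])
        expr = a * 2.0 + b         # builds an expression tree; nothing is computed yet
        print(expr.evaluate())     # [ 6.  9. 12.]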

  11. 7 CFR 210.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... general and special cash assistance and donations of foods acquired by the Department to be used to assist..., preparation and service of nutritious lunches, payment of funds, use of program funds, program monitoring, and...

  12. Miniaturized and general purpose fiber optic ultrasonic sources

    International Nuclear Information System (INIS)

    Biagi, E.; Fontani, S.; Masotti, L.; Pieraccini, M.

    1997-01-01

    Innovative photoacoustic sources for ultrasonic NDE, smart structures, and clinical diagnosis are proposed. The working principle is based on thermal conversion of laser pulses in a metallic film evaporated directly onto the tip of a fiber optic. Unique features of the proposed transducers are very high miniaturization and potentially easy embedding in smart structures. Additional advantages are high ultrasonic frequency and a large, flat bandwidth. All these characteristics make the proposed device an ideal ultrasonic source.

  13. General-purpose microprocessor-based control chassis

    International Nuclear Information System (INIS)

    Halbig, J.K.; Klosterbuer, S.F.; Swenson, D.A.

    1979-12-01

    The objective of the Pion Generation for Medical Irradiations (PIGMI) program at the Los Alamos Scientific Laboratory is to develop the technology to build smaller, less expensive, and more reliable proton linear accelerators for medical applications. For this program, a powerful, simple, inexpensive, and reliable control and data acquisition system was developed. The system has a NOVA 3D computer with a real time disk-operating system (RDOS) that communicates with distributed microprocessor-based controllers which directly control data input/output chassis. At the heart of the controller is a microprocessor crate which was conceived at the Fermi National Accelerator Laboratory. This idea was applied to the design of the hardware and software of the controller

  14. A general purpose fiber optic link with radiation resistance

    International Nuclear Information System (INIS)

    Beadle, E.R.

    1995-01-01

    In some applications it is necessary to send wide-band analog data, with good fidelity, between two stations separated by several hundred feet. This is particularly true for instrumentation in an accelerator environment, where the sensing equipment can be inside the tunnel and the processing equipment outside. Aside from the distortion and loss introduced by low-cost coaxial cables, this case is further complicated by the possibility of pick-up from environmental noise and by possible radiation damage to the transmitting electronics. Fiber optics is a viable alternative to the standard coaxial driver, particularly where video bandwidths are concerned. This paper discusses the basic design, trade-offs, and performance of one such link developed primarily for the AGS-to-RHIC (ATR) Transfer line profile monitors.

  15. A General Purpose High Performance Linux Installation Infrastructure

    International Nuclear Information System (INIS)

    Wachsmann, Alf

    2002-01-01

    With more and more, and larger and larger, Linux clusters, the question arises how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well to a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, it is not designed for cluster installations in particular but is, nevertheless, highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.

  16. A systematic approach to platform-independent design based on the service concept

    NARCIS (Netherlands)

    Andrade Almeida, João; van Sinderen, Marten J.; Ferreira Pires, Luis; Quartel, Dick; Duddy, K.

    This paper aims at demonstrating the benefits and importance of the service concept in the model-driven design of distributed applications. A service defines the observable behaviour of a system without constraining the system’s internal structure. We argue that by specifying application-level

  17. Dynamic, Distributed, Platform Independent OR/MS Applications - A Network Perspective

    National Research Council Canada - National Science Library

    Bradley, Gordon H; Buss, Arnold H

    1998-01-01

    .... This concept is closely tied to an evolving view about what a computer is; we now have an exploding number of ubiquitous devices with computational capacity, such as credit cards, cellular phones, and TV set-top boxes...

  18. GENERATING TEST CASES FOR PLATFORM INDEPENDENT MODEL BY USING USE CASE MODEL

    OpenAIRE

    Hesham A. Hassan,; Zahraa. E. Yousif

    2010-01-01

    Model-based testing refers to testing and test case generation based on a model that describes the behavior of the system. Extensive use of models throughout all the phases of software development starting from the requirement engineering phase has led to increased importance of Model Based Testing. The OMG initiative MDA has revolutionized the way models would be used for software development. Ensuring that all user requirements are addressed in system design and the design is getting suffic...

  19. A Platform Independent Game Technology Model for Model Driven Serious Games Development

    Science.gov (United States)

    Tang, Stephen; Hanneghan, Martin; Carter, Christopher

    2013-01-01

    Game-based learning (GBL) combines pedagogy and interactive entertainment to create a virtual learning environment in an effort to motivate and regain the interest of a new generation of "digital native" learners. However, this approach is impeded by the limited availability of suitable "serious" games and high-level design…

  20. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    Science.gov (United States)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems are limited in their ability to integrate with modern software and hardware and to leverage parallel computing, which has left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented, high-level language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, are currently underway, and preliminary results will be presented.
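
    The conversion strategy described above can be illustrated with a toy routine (not actual HSPF code), assuming the numba and h5py packages are installed:

        # A per-timestep loop JIT-compiled with Numba, with results written to HDF5.
        import numpy as np
        import h5py
        from numba import njit

        @njit
        def linear_reservoir(inflow, k, dt):
            """Explicit-Euler routing of a linear reservoir: dS/dt = inflow - k*S."""
            storage = np.zeros(inflow.size)
            s = 0.0
            for t in range(inflow.size):          # the tight loop is where Numba pays off
                s = s + dt * (inflow[t] - k * s)
                storage[t] = s
            return storage

        inflow = np.random.default_rng(1).random(1_000_000)
        storage = linear_reservoir(inflow, k=0.05, dt=1.0)

        with h5py.File("results.h5", "w") as f:   # HDF5 replaces the legacy binary output files
            f.create_dataset("storage", data=storage)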

  1. Concept of AHRS Algorithm Designed for Platform Independent Imu Attitude Alignment

    Science.gov (United States)

    Tomaszewski, Dariusz; Rapiński, Jacek; Pelc-Mieczkowska, Renata

    2017-12-01

    Nowadays, along with the advancement of technology, one can notice the rapid development of various types of navigation systems. Satellite navigation, so far the most popular, is now supported by positioning results calculated with the use of other measurement systems. The method and manner of integration depend directly on the intended application of the system being developed. To increase the frequency of readings and improve the operation of outdoor navigation systems, satellite navigation systems (GPS, GLONASS, etc.) are supported with inertial navigation. Such a method of navigation consists of several steps. The first stage is the determination of the initial orientation of the inertial measurement unit, called INS alignment. During this process, on the basis of acceleration and angular velocity readings, the values of the Euler angles (pitch, roll, yaw) are calculated, allowing for unambiguous orientation of the sensor coordinate system relative to the external coordinate system. The following study presents the concept of an AHRS (attitude and heading reference system) algorithm for determining the Euler angles. The study was conducted with the use of readings from low-cost MEMS cell phone sensors. Subsequently, the results of the study were analyzed to determine the accuracy of the featured algorithm. On the basis of the performed experiments, the validity of the developed algorithm was confirmed.
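
    The static part of such an alignment can be sketched with the usual accelerometer formulas (one common sign convention; a complete AHRS would also fuse gyroscope and magnetometer readings):

        # Pitch and roll of a stationary IMU from a single accelerometer sample.
        import numpy as np

        def pitch_roll_from_accel(ax, ay, az):
            """Euler angles (rad) assuming the accelerometer measures only gravity."""
            roll = np.arctan2(ay, az)
            pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
            return pitch, roll

        # Example reading in m/s^2: sensor tilted slightly, gravity mostly along +z.
        pitch, roll = pitch_roll_from_accel(0.5, -0.3, 9.78)
        print(np.degrees(pitch), np.degrees(roll))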

  2. A platform independent prototype for data and information exchange between decision support systems

    International Nuclear Information System (INIS)

    Carle, B.; Baig, S.

    2003-01-01

    Full text: A survey amongst participants in the Decision Support System network (DSSNET) community showed that the organization dealing with the information exchange between participants and stakeholders in a nuclear emergency is too disparate to be captured in one well-defined procedure or analysis. Looking at the organization of the national emergency response organizations, and especially when modelling the information flow, diversity is the most striking finding: the originators of the information differ, the decision-making organizations can differ, the approval and publishing of information to the press and the wider public is dealt with in different ways, and the responsibilities for the information flow to other authorities differ as well. Moreover, the place of decision support systems (DSS) in the emergency response organization varies between countries. This variation can be found in the way one of the 'big three' (RODOS, ARGOS and RECASS) systems is implemented, and even more in the way other, often country-specific, systems are in use and function in closer integration with the particular emergency response organization of the country. Hence we conclude that there is a need to structure the information exchange system, but it has to be flexible enough to work with the above-described variety of existing organizations and procedures. Though it may not be feasible to agree on all specifications of the information to be exchanged, at least a minimal set can be defined. A prototype for data and information exchange is being developed under the EC project MODEM (Monitoring data and information exchange among decision support systems). It establishes links between the decision support systems RODOS, ARGOS and RECASS. For setting up this data exchange, the use of XML-based data specifications allows flexible integration with existing applications. The ability to include metadata in a structured way allows the use of automated transformation tools and limits the modifications to existing applications to relatively simple, generic import/export functionality, leaving existing data models untouched. For the first implementation, the standard W3C XML Schema definition (.xsd) was chosen. The protocols and formats are open to other decision support systems, allowing any system to become part of the MODEM exchange network. The data and information exchange prototype was tested during the DSSNET exercise in May 2003, and the results will be shown at the symposium. A demonstration of this capability will be set up. (author)

  3. SWPS3 – fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and ×86/SSE2

    Directory of Open Access Journals (Sweden)

    Krähenbühl Philipp

    2008-10-01

    Full Text Available Abstract Background We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and ×86 architectures. The paper describes swps3 and compares its performances with several other implementations. Findings Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. Conclusion The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences.

  4. SWPS3 – fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and ×86/SSE2

    Science.gov (United States)

    Szalkowski, Adam; Ledergerber, Christian; Krähenbühl, Philipp; Dessimoz, Christophe

    2008-01-01

    Background We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and ×86 architectures. The paper describes swps3 and compares its performances with several other implementations. Findings Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. Conclusion The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences. PMID:18959793

  5. SWPS3 - fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and x86/SSE2.

    Science.gov (United States)

    Szalkowski, Adam; Ledergerber, Christian; Krähenbühl, Philipp; Dessimoz, Christophe

    2008-10-29

    We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and x86 architectures. The paper describes swps3 and compares its performances with several other implementations. Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences.
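
    For reference, the recurrence that swps3 vectorizes can be written in a few lines of plain (unvectorized) code; the scoring values below are illustrative, and this sketch returns only the best local score, not the alignment itself:

        # Smith-Waterman local alignment score with a linear gap penalty.
        import numpy as np

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
            return int(H.max())

        print(smith_waterman("ACACACTA", "AGCACACA"))   # small test pair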

  6. Structural connectivity allows for multi-threading during rest: the structure of the cortex leads to efficient alternation between resting state exploratory behavior and default mode processing.

    Science.gov (United States)

    Senden, Mario; Goebel, Rainer; Deco, Gustavo

    2012-05-01

    Despite the absence of stimulation or task conditions, the cortex exhibits highly structured spatio-temporal activity patterns. These patterns are known as resting state networks (RSNs) and emerge as low-frequency fluctuations during rest. We are interested in the relationship between the structural connectivity of the cortex and the fluctuations exhibited during resting conditions. We are especially interested in the effect of the degree of connectivity on resting state dynamics, as the default mode network (DMN) is highly connected. We find in experimental resting fMRI data that the DMN is the functional network that is most frequently active and for the longest time. In large-scale computational simulations of the cortex based on the corresponding underlying DTI/DSI-based neuroanatomical connectivity matrix, we additionally find a strong correlation between the mean degree of functional networks and the proportion of time they are active. By artificially modifying different types of neuroanatomical connectivity matrices in the model, we were able to demonstrate that only models based on structural connectivity containing hubs give rise to this relationship. We conclude that, during rest, the cortex alternates efficiently between explorations of its externally oriented functional repertoire and internally oriented processing as a consequence of the DMN's high degree of connectivity. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Simty: generalized SIMT execution on RISC-V

    OpenAIRE

    Collange, Sylvain

    2017-01-01

    We present Simty, a massively multi-threaded RISC-V processor core that acts as a proof of concept for dynamic inter-thread vectorization at the micro-architecture level. Simty runs groups of scalar threads executing SPMD code in lockstep, and assembles SIMD instructions dynamically across threads. Unlike existing SIMD or SIMT processors like GPUs or vector processors, Simty vectorizes scalar general-purpose binaries. It does not involve any instruction set extension...

  8. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    Science.gov (United States)

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open-source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  9. Evaluation of data discretization methods to derive platform independent isoform expression signatures for multi-class tumor subtyping.

    Science.gov (United States)

    Jung, Segun; Bi, Yingtao; Davuluri, Ramana V

    2015-01-01

    Many supervised learning algorithms have been applied in deriving gene signatures for patient stratification from gene expression data. However, transferring the multi-gene signatures from one analytical platform to another without loss of classification accuracy is a major challenge. Here, we compared three unsupervised data discretization methods--Equal-width binning, Equal-frequency binning, and k-means clustering--in accurately classifying the four known subtypes of glioblastoma multiforme (GBM) when the classification algorithms were trained on the isoform-level gene expression profiles from exon-array platform and tested on the corresponding profiles from RNA-seq data. We applied an integrated machine learning framework that involves three sequential steps; feature selection, data discretization, and classification. For models trained and tested on exon-array data, the addition of data discretization step led to robust and accurate predictive models with fewer number of variables in the final models. For models trained on exon-array data and tested on RNA-seq data, the addition of data discretization step dramatically improved the classification accuracies with Equal-frequency binning showing the highest improvement with more than 90% accuracies for all the models with features chosen by Random Forest based feature selection. Overall, SVM classifier coupled with Equal-frequency binning achieved the best accuracy (> 95%). Without data discretization, however, only 73.6% accuracy was achieved at most. The classification algorithms, trained and tested on data from the same platform, yielded similar accuracies in predicting the four GBM subgroups. However, when dealing with cross-platform data, from exon-array to RNA-seq, the classifiers yielded stable models with highest classification accuracies on data transformed by Equal frequency binning. The approach presented here is generally applicable to other cancer types for classification and identification of molecular subgroups by integrating data across different gene expression platforms.
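
    The two binning schemes compared above can be sketched as follows (illustrative data and bin count, not the paper's exact pipeline); equal-frequency binning yields balanced levels even for skewed expression values, which may be one reason it transferred better across platforms in the study above:

        # Discretize one feature into k ordinal levels.
        import numpy as np

        def equal_width_bins(x, k=3):
            """k equally wide intervals spanning [min, max]."""
            edges = np.linspace(x.min(), x.max(), k + 1)
            return np.digitize(x, edges[1:-1])

        def equal_frequency_bins(x, k=3):
            """k bins holding roughly equal numbers of samples."""
            edges = np.quantile(x, np.linspace(0.0, 1.0, k + 1))
            return np.digitize(x, edges[1:-1])

        expr = np.random.default_rng(0).lognormal(size=1000)   # one isoform's expression values
        print(np.bincount(equal_width_bins(expr)))       # heavily skewed counts
        print(np.bincount(equal_frequency_bins(expr)))   # roughly equal counts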

  10. prfectBLAST: a platform-independent portable front end for the command terminal BLAST+ stand-alone suite.

    Science.gov (United States)

    Santiago-Sotelo, Perfecto; Ramirez-Prado, Jorge Humberto

    2012-11-01

    prfectBLAST is a multiplatform graphical user interface (GUI) for the stand-alone BLAST+ suite of applications. It allows researchers to do nucleotide or amino acid sequence similarity searches against public (or user-customized) databases that are locally stored. It does not require any dependencies or installation and can be used from a portable flash drive. prfectBLAST is implemented in Java version 6 (SUN) and runs on all platforms that support Java and for which National Center for Biotechnology Information has made available stand-alone BLAST executables, including MS Windows, Mac OS X, and Linux. It is free and open source software, made available under the GNU General Public License version 3 (GPLv3) and can be downloaded at www.cicy.mx/sitios/jramirez or http://code.google.com/p/prfectblast/.

  11. Analysis and optimization techniques for real-time streaming image processing software on general purpose systems

    NARCIS (Netherlands)

    Westmijze, Mark

    2018-01-01

    Commercial Off The Shelf (COTS) Chip Multi-Processor (CMP) systems are for cost reasons often used in industry for soft real-time stream processing. COTS CMP systems typically have a low timing predictability, which makes it difficult to develop software applications for these systems with tight

  12. The Fishbone diagram to identify, systematize and analyze the sources of general purpose technologies

    OpenAIRE

    COCCIA, Mario

    2017-01-01

    Abstract. This study suggests the fishbone diagram for technological analysis. Fishbone diagram (also called Ishikawa diagrams or cause-and-effect diagrams) is a graphical technique to show the several causes of a specific event or phenomenon. In particular, a fishbone diagram (the shape is similar to a fish skeleton) is a common tool used for a cause and effect analysis to identify a complex interplay of causes for a specific problem or event. The fishbone diagram can be a comprehensive theo...

  13. The Invisible Hand of Innovation showing in the General Purpose Technology of Electricity

    NARCIS (Netherlands)

    van der Kooij, B.J.G.

    2017-01-01

    The unintended economic effect on society as result of individual behaviour —Adam Smith’s ‘Invisible Hand’ of economic progress in the eighteenth century — had its equivalent in technological progress. In the nineteenth century, again individual behaviour with its Acts of Innovation and Acts of

  14. KDAS: General-Purpose Data Acquisition System Developed for KAIST-Tokamak

    International Nuclear Information System (INIS)

    Seo, Seong-Heon; Choe, Wonho; Chang, Hong-Young; Jeong, Seung-Ho

    2000-01-01

    The Korea Advanced Institute of Science and Technology (KAIST)-Tokamak Data Acquisition System (KDAS) was originally developed for the KAIST-Tokamak (R/a = 0.53 m/0.14 m). It operates on a distributed system based on personal computers and has a driver-based hierarchical structure. Since KDAS can be dynamically composed of any number of available computers, and the hardware-dependent codes can be thoroughly separated into external drivers, it exhibits excellent flexibility and extensibility in system performance and can be optimized for various user needs. It collectively controls the VXI, CAMAC, GPIB, and RS232 instrument hybrids. With these useful and convenient features, it can be applied to any computerized experiment, especially to fusion-related research. The system design and features are discussed in detail.

  15. Recent advances toward a general purpose linear-scaling quantum force field.

    Science.gov (United States)

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to simultaneously achieve very high accuracy and efficiency. The efficiency of the QMFF is made possible by partitioning the system into fragments and self-consistently solving for the fragment-localized molecular orbitals in the presence of the other fragments' electron densities. Unlike an LSQM, the QMFF introduces empirical parameters that are tuned to obtain very accurate intermolecular forces. The speed and accuracy of our QMFF are demonstrated through a series of examples ranging from small molecule clusters to condensed phase simulation, and applications to drug docking and protein-protein interactions. In these examples, comparisons are made to conventional molecular mechanical models, semiempirical methods, ab initio Hamiltonians, and a hybrid QM/MM method. The comparisons demonstrate the superior accuracy of our QMFF relative to the other models; nonetheless, we stress that the overarching role of QMFFs is not to supplant these established computational methods for problems where their use is appropriate. The role of QMFFs within the toolbox of multiscale modeling methods is to extend the range of applications to include problems that demand a fully quantum mechanical treatment of a large system with extensive configurational sampling.

  16. Prototype performance studies of a Full Mesh ATCA-based General Purpose Data Processing Board

    CERN Document Server

    Okumura, Yasuyuki; Liu, Tiehui Ted; Yin, Hang

    2013-01-01

    High luminosity conditions at the LHC pose many unique challenges for potential silicon based track trigger systems. One of the major challenges is data formatting, where hits from thousands of silicon modules must first be shared and organized into overlapping eta-phi trigger towers. Communication between nodes requires high bandwidth, low latency, and flexible real time data sharing, for which a full mesh backplane is a natural solution. A custom Advanced Telecommunications Computing Architecture data processing board is designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth board to board communication channels while keeping the design as simple as possible. We have performed the first prototype board testing and our first attempt at designing the prototype system has proven to be successful. Leveraging the experience we gained through designing, building and testing the prototype board system we are in the final stages of laying out the next generatio...

  17. General purpose - expert system for the analysis and design of base plates

    International Nuclear Information System (INIS)

    Al-Shawaf, T.D.; Hahn, W.F.; Ho, A.D.

    1987-01-01

    As an expert system, the IMPLATE program uses plant specific information to make decisions in modeling and analysis of baseplates. The user supplies a minimum of information which is checked for validity and reasonableness. Once this data is supplied, the program automatically generates a compatible mesh and finite element model from its data base accounting for the attachments, stiffeners, anchor bolts and plate/concrete interface. Based on the loading direction, the program deletes certain degrees of freedom and performs a linear or a nonlinear solution, whichever is appropriate. Load step sizes and equilibrium iteration are automatically selected by the program to ensure a convergent solution. Once the analysis is completed, a code check is then performed and a summary of results is produced. Plots of the plate deformation pattern and stress contours are also generated. (orig.)

  18. Design and validation of a general purpose robotic testing system for musculoskeletal applications.

    Science.gov (United States)

    Noble, Lawrence D; Colbrunn, Robb W; Lee, Dong-Gil; van den Bogert, Antonie J; Davis, Brian L

    2010-02-01

    Orthopaedic research on in vitro forces applied to bones, tendons, and ligaments during joint loading has been difficult to perform because of limitations with existing robotic simulators in applying full-physiological loading to the joint under investigation in real time. The objectives of the current work are as follows: (1) describe the design of a musculoskeletal simulator developed to support in vitro testing of cadaveric joint systems, (2) provide component and system-level validation results, and (3) demonstrate the simulator's usefulness for specific applications of the foot-ankle complex and knee. The musculoskeletal simulator allows researchers to simulate a variety of loading conditions on cadaver joints via motorized actuators that simulate muscle forces while simultaneously contacting the joint with an external load applied by a specialized robot. Multiple foot and knee studies have been completed at the Cleveland Clinic to demonstrate the simulator's capabilities. Using a variety of general-use components, experiments can be designed to test other musculoskeletal joints as well (e.g., hip, shoulder, facet joints of the spine). The accuracy of the tendon actuators to generate a target force profile during simulated walking was found to be highly variable and dependent on stance position. Repeatability (the ability of the system to generate the same tendon forces when the same experimental conditions are repeated) results showed that repeat forces were within the measurement accuracy of the system. It was determined that synchronization system accuracy was 6.7+/-2.0 ms and was based on timing measurements from the robot and tendon actuators. The positioning error of the robot ranged from 10 microm to 359 microm, depending on measurement condition (e.g., loaded or unloaded, quasistatic or dynamic motion, centralized movements or extremes of travel, maximum value, or root-mean-square, and x-, y- or z-axis motion). Algorithms and methods for controlling specimen interactions with the robot (with and without muscle forces) to duplicate physiological loading of the joints through iterative pseudo-fuzzy logic and real-time hybrid control are described. Results from the tests of the musculoskeletal simulator have demonstrated that the speed and accuracy of the components, the synchronization timing, the force and position control methods, and the system software can adequately replicate the biomechanics of human motion required to conduct meaningful cadaveric joint investigations.
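
    The trial-by-trial tuning of tendon forces described above can be illustrated with a toy iterative correction loop: the commanded force profile is updated between simulated walking trials until the measured profile tracks the target. The plant model, gain, and tolerance below are illustrative assumptions and do not represent the simulator's actual pseudo-fuzzy or hybrid control algorithms.

```python
import numpy as np

# Toy iterative correction loop: update the commanded tendon-force profile
# between simulated "walking trials" until the measured profile matches the
# target. The fake plant and the 0.5 gain are assumptions for illustration.
t = np.linspace(0.0, 1.0, 101)                 # one stance phase (normalized)
target = 100.0 * np.sin(np.pi * t) ** 2        # desired tendon force (N)

def plant(command):
    """Fake actuator: scaled, slightly lagged response plus noise."""
    lag = np.roll(command, 2)
    lag[:2] = command[:2]
    return 0.8 * lag + np.random.normal(0.0, 0.5, command.size)

command = target.copy()
for trial in range(20):
    measured = plant(command)
    error = target - measured
    command += 0.5 * error                     # proportional trial-to-trial update
    rms = np.sqrt(np.mean(error ** 2))
    if rms < 1.0:                              # stop when within ~1 N RMS
        break

print(f"trial {trial + 1}: RMS force error = {rms:.2f} N")
```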

  19. Man machine interaction for operator information systems : a general purpose display package on PC/AT

    International Nuclear Information System (INIS)

    Chandra, A.K.; Dubey, B.P.; Deshpande, S.V.; Vaidya, U.W.; Khandekar, A.B.

    1991-01-01

    Several operator information systems for nuclear plants have been developed at Reactor Control Division of BARC and these have involved extensive operator interaction to extract the maximum information from the systems. Each of these systems used a different scheme for operator interaction. A composite package has now been developed on PC/AT with EGA/VGA for use with any system to obviate the necessity to develop new software for each project. This permits information to be displayed in various formats viz. trend and history curves, tabular data, bar graphs and core matrix (both for 235 and 500 MWe cores). It also allows data to be printed and plotted using multi colour plotter. This package thus integrates all the features of the earlier systems. It also integrates the operator interaction scheme. It uses window based pull down menus to select parameters to be fed into a particular display format. Within any display format the operator has significant flexibility to modify the selected parameters using context dependent soft keys. The package also allows data to be retrieved in machine readable form. This report describes the various user friendly functions implemented and also the design of the system software. (author). 1 tab., 10 fig., 3 refs

  20. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
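
    As a minimal illustration of keyword search over stored semi-structured documents, the sketch below uses SQLite's FTS5 full-text extension (assumed to be available in the local Python build). It is not the NETMARK/Oracle schema; the table name and documents are invented.

```python
import sqlite3

# Minimal keyword search over stored documents using SQLite FTS5. This is an
# illustration only, not NETMARK's Oracle object-relational schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
con.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("flight report", "<section>anomaly in telemetry stream</section>"),
        ("audit notes", "<p>no anomaly found during review</p>"),
        ("design memo", "<p>schema for semi-structured XML records</p>"),
    ],
)

# Rank matches for a keyword across both title and body columns.
for (title,) in con.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank", ("anomaly",)
):
    print(title)
```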

  1. Auxiliary subsystems of a General-Purpose IGBT Stack for high ...

    Indian Academy of Sciences (India)

    Anil Kumar Adapa

    back signal to DSC [18, 19]. A simple method ... A push-button or toggle switch feature to manually power on or ... Item 5 in this list allows the operation of multiple con- verters in .... with its own ground reference while ensuring signal integrity.

  2. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
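
    A simulator-independent PyNN description like the ones NeuroFlow compiles might look like the sketch below. It assumes the reference pyNN.nest backend rather than NeuroFlow's FPGA backend, and the population sizes, rates, and weights are illustrative only.

```python
# Sketch of a simulator-independent PyNN network description. NeuroFlow's own
# backend is not shown; this assumes the reference pyNN.nest backend and uses
# illustrative parameters.
import pyNN.nest as sim

sim.setup(timestep=0.1)                               # ms

excitatory = sim.Population(800, sim.IF_cond_exp())   # leaky integrate-and-fire cells
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))

sim.Projection(noise, excitatory,
               sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))
sim.Projection(excitatory, excitatory,
               sim.FixedProbabilityConnector(0.02),
               synapse_type=sim.StaticSynapse(weight=0.002, delay=1.0))

excitatory.record("spikes")
sim.run(1000.0)                                       # ms of biological time

spikes = excitatory.get_data().segments[0].spiketrains
print(f"{sum(len(st) for st in spikes)} spikes recorded")
sim.end()
```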

  3. Signal processing and general purpose data acquisition system for on-line tomographic measurements

    Science.gov (United States)

    Murari, A.; Martin, P.; Hemming, O.; Manduchi, G.; Marrelli, L.; Taliercio, C.; Hoffmann, A.

    1997-01-01

    New analog signal conditioning electronics and data acquisition systems have been developed for the soft x-ray and bolometric tomography diagnostics in the reversed field pinch experiment (RFX). For the soft x-ray detectors the analog signal processing includes a fully differential current-to-voltage conversion with up to a 200 kHz bandwidth. For the bolometers, a 50 kHz carrier frequency amplifier allows a maximum bandwidth of 10 kHz. In both cases the analog signals are digitized with a 1 MHz sampling rate close to the diagnostic and are transmitted via a transparent asynchronous xmitter/receiver interface (TAXI) link to purpose-built Versa Module Europa (VME) modules which perform data acquisition. A software library has been developed for data preprocessing and tomographic reconstruction. It has been written in the C language and is self-contained, i.e., no additional mathematical library is required. The package is therefore platform-independent: in particular, it can perform online analysis in a real-time application, such as continuous display and feedback, and is portable for long-duration fusion or other physical experiments. Due to the modular organization of the library, new preprocessing and analysis modules can be easily integrated into the environment. This software is implemented in RFX over three different platforms: OpenVMS, Digital Unix, and a VME 68040 CPU.

  4. rFerns: An Implementation of the Random Ferns Method for General-Purpose Machine Learning

    Directory of Open Access Journals (Sweden)

    Miron B. Kursa

    2014-11-01

    Full Text Available Random ferns is a very simple yet powerful classification method originally introduced for specific computer vision tasks. In this paper, I show that this algorithm may be considered as a constrained decision tree ensemble and use this interpretation to introduce a series of modifications which enable the use of random ferns in general machine learning problems. Moreover, I extend the method with an internal error approximation and an attribute importance measure based on corresponding features of the random forest algorithm. I also present the R package rFerns containing an efficient implementation of this modified version of random ferns.
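
    The random-ferns idea summarized above (random feature/threshold comparisons indexing a table of class log-probabilities, summed over many ferns) can be sketched in a few lines. This is an illustration of the algorithm, not the rFerns R API; the fern count, depth, and test data are arbitrary.

```python
import numpy as np

# Minimal random-ferns classifier: each fern compares D randomly chosen
# features against random thresholds, the D bits index a bucket, and class
# log-probabilities are accumulated over all ferns.
rng = np.random.default_rng(0)

def train_ferns(X, y, n_ferns=30, depth=5):
    n_classes = y.max() + 1
    ferns = []
    for _ in range(n_ferns):
        feats = rng.integers(0, X.shape[1], size=depth)
        thresh = rng.uniform(X.min(axis=0)[feats], X.max(axis=0)[feats])
        codes = ((X[:, feats] > thresh).astype(int) << np.arange(depth)).sum(axis=1)
        counts = np.ones((2 ** depth, n_classes))          # add-one smoothing
        np.add.at(counts, (codes, y), 1.0)
        logp = np.log(counts / counts.sum(axis=1, keepdims=True))
        ferns.append((feats, thresh, logp))
    return ferns

def predict_ferns(ferns, X):
    depth = len(ferns[0][0])
    score = np.zeros((X.shape[0], ferns[0][2].shape[1]))
    for feats, thresh, logp in ferns:
        codes = ((X[:, feats] > thresh).astype(int) << np.arange(depth)).sum(axis=1)
        score += logp[codes]
    return score.argmax(axis=1)

# Tiny synthetic two-class problem as a smoke test.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = train_ferns(X, y)
print("training accuracy:", (predict_ferns(model, X) == y).mean())
```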

  5. Ionic Liquid-Liquid Chromatography: A New General Purpose Separation Methodology.

    Science.gov (United States)

    Brown, Leslie; Earle, Martyn J; Gîlea, Manuela A; Plechkova, Natalia V; Seddon, Kenneth R

    2017-08-10

    Ionic liquids can form biphasic solvent systems with many organic solvents and water, and these solvent systems can be used in liquid-liquid separations and countercurrent chromatography. The wide range of ionic liquids that can be synthesised, with specifically tailored properties, represents a new philosophy for the separation of organic, inorganic and bio-based materials. A customised countercurrent chromatograph has been designed and constructed specifically to allow the more viscous character of ionic liquid-based solvent systems to be used in a wide variety of separations (including transition metal salts, arenes, alkenes, alkanes, bio-oils and sugars).

  6. The SGHWR programme together with memoranda submitted to the General Purposes sub-committee

    International Nuclear Information System (INIS)

    1976-01-01

    The first part of the report sets out the arguments of those against, and those in favour of cancelling the SGHWR. Cost, as affected by more stringent safety standards and by reduced domestic power programme and export expectation, was the main centre of disagreement. The arguments of employee representatives, including sociological considerations, are also reported. Other possible strategies are then discussed: (a) no nuclear power; (b) a nuclear programme based on the FBR; (c) AGR; and (d) PWR. The PWR safety review, and the costs and size of

  7. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    Science.gov (United States)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50 times speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
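
    The core finite-volume update that such GPU codes accelerate can be shown on a one-dimensional model problem. The sketch below solves the Sod shock tube for the compressible Euler equations with a Rusanov (local Lax-Friedrichs) flux in plain NumPy on the CPU; it is not the authors' unstructured-mesh CUDA implementation, and the grid size and CFL number are arbitrary.

```python
import numpy as np

# 1-D finite-volume solver for the compressible Euler equations (Sod shock
# tube) using the Rusanov / local Lax-Friedrichs flux.
gamma = 1.4
n = 400
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Conserved variables U = [rho, rho*u, E]; Sod initial condition.
rho = np.where(x < 0.5, 1.0, 0.125)
u = np.zeros(n)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.stack([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    F = np.stack([mom, mom * u + p, u * (E + p)])
    return F, np.abs(u) + np.sqrt(gamma * p / rho)   # flux and max wave speed

t, t_end, cfl = 0.0, 0.2, 0.4
while t < t_end:
    F, smax = flux(U)
    dt = min(cfl * dx / smax.max(), t_end - t)
    UL, UR = U[:, :-1], U[:, 1:]
    FL, FR = F[:, :-1], F[:, 1:]
    a = np.maximum(smax[:-1], smax[1:])
    Fhat = 0.5 * (FL + FR) - 0.5 * a * (UR - UL)     # Rusanov face flux
    U[:, 1:-1] -= dt / dx * (Fhat[:, 1:] - Fhat[:, :-1])  # update interior cells
    t += dt

print("density range at t = 0.2:", U[0].min(), U[0].max())
```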

  8. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Science.gov (United States)

    2010-11-17

    Similarly, the family of stochastic gradient descent (SGD) algorithms [15,16] and Gauss-Seidel relaxation [17,18] have runtimes that are directly...then either (1) computing the centroids of each segment, (2) computing the curvature of each segment, or (3) iteratively computing a locally-weighted...mean position until it converges. Our approach replaces these three mechanisms with a single method. Zlot and Bosse additionally investigate

  9. Reassessment of Resuspension Factor Following Radionuclide Dispersal: Toward a General-purpose Rate Constant

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, Shaun [Worcester Polytechnic Inst., Worcester, MA (United States). Dept. of Physics; Potter, Charles [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Medich, David [Worcester Polytechnic Inst., Worcester, MA (United States). Dept. of Physics

    2018-05-01

    A recent analysis of historical radionuclide resuspension datasets confirmed the general applicability of the Anspaugh and modified Anspaugh models of resuspension factors following both controlled and disastrous releases. The observations appear to increase in variance earlier in time; however, all points were equally weighted in the statistical fit calculations, inducing a positive skewing of resuspension coefficients. Such data are extracted from the available deposition experiments spanning 2900 days. Measurements within a 3-day window are grouped into singular sample sets to construct standard deviations. A refitting is performed using a relative instrumental weighting of the observations. The resulting best-fit equations produce tamer exponentials which give decreased integrated resuspension factor values relative to those reported by Anspaugh. As expected, the fits attenuate greater error amongst the data at earlier time. The reevaluation provides a sharper contrast between the empirical models, and reaffirms their deficiencies in the short-lived timeframe wherein the dynamics of particulate dispersion dominate the resuspension process.
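
    The relative (instrumental) weighting used in the refit can be illustrated with SciPy's curve_fit, where per-point sigmas weight the residuals. The model form, parameters, and synthetic data below are placeholders, not the Anspaugh coefficients or the historical deposition measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustration of relative ("instrumental") weighting in a least-squares fit.
# The model form and data are synthetic placeholders.
def resuspension(t, a, b, c):
    """Generic decaying resuspension-factor model, t in days."""
    return a * np.exp(-b * np.sqrt(t)) + c

rng = np.random.default_rng(1)
t = np.geomspace(1.0, 2900.0, 40)
truth = resuspension(t, 1e-4, 0.15, 1e-9)
sigma = truth * (0.1 + 1.0 / np.sqrt(t))     # larger relative scatter at early times
y = truth + rng.normal(0.0, sigma)

# sigma supplies per-point weights 1/sigma**2; absolute_sigma=False keeps the
# weighting relative, as in the reanalysis described above.
popt, pcov = curve_fit(resuspension, t, y, p0=(1e-4, 0.1, 1e-9),
                       sigma=sigma, absolute_sigma=False)
print("fitted parameters:", popt)
```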

  10. Performance analysis of general purpose and digital signal processor kernels for heterogeneous systems-on-chip

    Directory of Open Access Journals (Sweden)

    T. von Sydow

    2003-01-01

    Full Text Available Various reasons like technology progress, flexibility demands, shortened product cycle time and shortened time to market have brought up the possibility and necessity to integrate different architecture blocks on one heterogeneous System-on-Chip (SoC. Architecture blocks like programmable processor cores (DSP- and GPP-kernels, embedded FPGAs as well as dedicated macros will be integral parts of such a SoC. Especially programmable architecture blocks and associated optimization techniques are discussed in this contribution. Design space exploration and thus the choice which architecture blocks should be integrated in a SoC is a challenging task. Crucial to this exploration is the evaluation of the application domain characteristics and the costs caused by individual architecture blocks integrated on a SoC. An ATE-cost function has been applied to examine the performance of the aforementioned programmable architecture blocks. Therefore, representative discrete devices have been analyzed. Furthermore, several architecture dependent optimization steps and their effects on the cost ratios are presented.

  11. PANIC: A General-purpose Panoramic Near-infrared Camera for the Calar Alto Observatory

    Science.gov (United States)

    Cárdenas Vázquez, M.-C.; Dorner, B.; Huber, A.; Sánchez-Blanco, E.; Alter, M.; Rodríguez Gómez, J. F.; Bizenberger, P.; Naranjo, V.; Ibáñez Mengual, J.-M.; Panduro, J.; García Segura, A. J.; Mall, U.; Fernández, M.; Laun, W.; Ferro Rodríguez, I. M.; Helmling, J.; Terrón, V.; Meisenheimer, K.; Fried, J. W.; Mathar, R. J.; Baumeister, H.; Rohloff, R.-R.; Storz, C.; Verdes-Montenegro, L.; Bouy, H.; Ubierna, M.; Fopp, P.; Funke, B.

    2018-02-01

    PANIC is the new PAnoramic Near-Infrared Camera for Calar Alto and is a project jointly developed by the MPIA in Heidelberg, Germany, and the IAA in Granada, Spain, for the German-Spanish Astronomical Center at Calar Alto Observatory (CAHA; Almería, Spain). This new instrument works with the 2.2 m and 3.5 m CAHA telescopes, covering a field of view of 30 × 30 arcmin and 15 × 15 arcmin, respectively, with a sampling of 4096 × 4096 pixels. It is designed for the spectral bands from Z to Ks and can also be equipped with narrowband filters. The instrument was delivered to the observatory in 2014 October and was commissioned at both telescopes between 2014 November and 2015 June. Science verification at the 2.2 m telescope was carried out during the second semester of 2015 and the instrument is now in full operation. We describe the design, assembly, integration, and verification process, the final laboratory tests and the PANIC instrument performance. We also present first-light data obtained during the commissioning and preliminary results of the scientific verification. The final optical model and the theoretical performance of the camera were updated according to the as-built data. The laboratory tests were made with a star simulator. Finally, the commissioning phase was done at both telescopes to validate the camera's real performance on sky. The final laboratory tests confirmed the expected camera performance, complying with the scientific requirements. The commissioning phase on sky has been accomplished.

  12. cMsg - A general purpose, publish-subscribe, interprocess communication implementation and framework

    International Nuclear Information System (INIS)

    Timmer, C; Abbott, D; Gyurjyan, V; Heyes, G; Jastrzembski, E; Wolin, E

    2008-01-01

    cMsg is software used to send and receive messages in the Jefferson Lab online and runcontrol systems. It was created to replace the several IPC software packages in use with a single API. cMsg is asynchronous in nature, running a callback for each message received. However, it also includes synchronous routines for convenience. On the framework level, cMsg is a thin API layer in Java, C, or C++ that can be used to wrap most message-based interprocess communication protocols. The top layer of cMsg uses this same API and multiplexes user calls to one of many such wrapped protocols (or domains) based on a URL-like string which we call a Uniform Domain Locator or UDL. One such domain is a complete implementation of a publish-subscribe messaging system using network communications and written in Java (user APIs in C and C++ too). This domain is built in a way which allows it to be used as a proxy server to other domains (protocols). Performance is excellent allowing the system not only to be used for messaging but also as a data distribution system
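
    The callback-per-message pattern that cMsg exposes can be sketched with a toy in-process broker. This is not the cMsg API or its UDL/domain layer; the subjects, wildcard syntax, and payloads below are invented for illustration.

```python
import fnmatch
from collections import defaultdict

# Toy in-process publish-subscribe broker illustrating the callback-per-message
# pattern. Subjects and wildcarding are illustrative, not cMsg's own semantics.
class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, subject_pattern, callback):
        self._subs[subject_pattern].append(callback)

    def publish(self, subject, payload):
        for pattern, callbacks in self._subs.items():
            if fnmatch.fnmatch(subject, pattern):
                for cb in callbacks:        # run each subscriber's callback
                    cb(subject, payload)

broker = Broker()
broker.subscribe("daq.*", lambda s, p: print(f"[daq handler] {s}: {p}"))
broker.subscribe("runcontrol.state", lambda s, p: print(f"[rc handler] {p}"))

broker.publish("daq.rates", {"events_per_s": 1250})
broker.publish("runcontrol.state", "PAUSED")
```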

  13. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  14. Reassessment of Resuspension Factor Following Radionuclide Dispersal: Toward a General-purpose Rate Constant.

    Science.gov (United States)

    Marshall, Shaun; Potter, Charles; Medich, David

    2018-05-01

    A recent analysis of historical radionuclide resuspension datasets confirmed the general applicability of the Anspaugh and modified Anspaugh models of resuspension factors following both controlled and disastrous releases. While observations appear to have larger variance earlier in time, previous studies equally weighted the data for statistical fit calculations; this could induce a positive skewing of resuspension coefficients in the early time-period. A refitting is performed using a relative instrumental weighting of the observations. Measurements within a 3-d window are grouped into singular sample sets to construct standard deviations. The resulting best-fit equations produce tamer exponentials, which give decreased integrated resuspension factor values relative to those reported by Anspaugh. As expected, the fits attenuate greater error among the data at earlier time. The reevaluation provides a sharper contrast between the empirical models and reaffirms their deficiencies in the short-lived timeframe wherein the dynamics of particulate dispersion dominate the resuspension process.

  15. The Dynamics of a General Purpose Technology in a Research and Assimilation Model

    NARCIS (Netherlands)

    Nahuis, R.

    1998-01-01

    Where is the productivity growth from the IT revolution? Why did the skill premium rise sharply in the early eighties? Were these phenomena related? This paper examines these questions in a general equilibrium model of growth. Technological progress in firms is driven by research aimed at improving

  16. Accuracy of Surface Plate Measurements - General Purpose Software for Flatness Measurement

    NARCIS (Netherlands)

    Meijer, J.; Heuvelman, C.J.

    1990-01-01

    Flatness departures of surface plates are generally obtained from straightness measurements of lines on the surface. A computer program has been developed for on-line measurement and evaluation, based on the simultaneous coupling of measurements in all grid points. Statistical methods are used to

  17. Factors Affecting Preservice Teachers' Computer Use for General Purposes: Implications for Computer Training Courses

    Science.gov (United States)

    Zogheib, Salah

    2014-01-01

    As the majority of educational research has focused on preservice teachers' computer use for "educational purposes," the question remains: Do preservice teachers use computer technology for daily life activities and encounters? And do preservice teachers' personality traits and motivational beliefs related to computer training provided…

  18. Feasibility study of a novel general purpose CZT-based digital SPECT camera: initial clinical results.

    Science.gov (United States)

    Goshen, Elinor; Beilin, Leonid; Stern, Eli; Kenig, Tal; Goldkorn, Ronen; Ben-Haim, Simona

    2018-03-14

    The performance of a prototype novel digital single-photon emission computed tomography (SPECT) camera with multiple pixelated CZT detectors and high sensitivity collimators (Digital SPECT; Valiance X12 prototype, Molecular Dynamics) was evaluated in various clinical settings. Images obtained in the prototype system were compared to images from an analog camera fitted with high-resolution collimators. Clinical feasibility, image quality, and diagnostic performance of the prototype were evaluated in 36 SPECT studies in 35 patients including bone (n = 21), brain (n = 5), lung perfusion (n = 3), and parathyroid (n = 3) and one study each of sentinel node and labeled white blood cells. Images were graded on a scale of 1-4 for sharpness, contrast, overall quality, and diagnostic confidence. Digital CZT SPECT provided a statistically significant improvement in sharpness and contrast in clinical cases (mean score of 3.79 ± 0.61 vs. 3.26 ± 0.50 and 3.92 ± 0.29 vs. 3.34 ± 0.47 respectively, p < 0.001 for both). Overall image quality was slightly higher for the digital SPECT but not statistically significant (3.74 vs. 3.66). CZT SPECT provided significantly improved image sharpness and contrast compared to the analog system in the clinical settings evaluated. Further studies will evaluate the diagnostic performance of the system in large patient cohorts in additional clinical settings.

  19. Applications for General Purpose Command Buffers: The Emergency Conjunction Avoidance Maneuver

    Science.gov (United States)

    Scheid, Robert J; England, Martin

    2016-01-01

    A case study is presented for the use of Relative Operation Sequence (ROS) command buffers to quickly execute a propulsive maneuver to avoid a collision with space debris. In this process, a ROS is custom-built with a burn time and magnitude, uplinked to the spacecraft, and executed in 15 percent of the time of the previous method. This new process provides three primary benefits. First, the planning cycle can be delayed until it is certain a burn must be performed, reducing team workload. Second, changes can be made to the burn parameters almost up to the point of execution while still allowing the normal uplink product review process, reducing the risk of leaving the operational orbit because of outdated burn parameters, and minimizing the chance of accidents from human error, such as missed commands, in a high-stress situation. Third, the science impacts can be customized and minimized around the burn, and in the event of an abort can be eliminated entirely in some circumstances. The result is a compact burn process that can be executed in as few as four hours and can be aborted seconds before execution. Operational, engineering, planning, and flight dynamics perspectives are presented, as well as a functional overview of the code and workflow required to implement the process. Future expansions and capabilities are also discussed.

  20. Minus 3: a general purpose data acquisition system at LBL's 88''-cyclotron and superhilac

    International Nuclear Information System (INIS)

    Maples, C.; Sivak, J.

    1979-05-01

    MINUS 3 is a general, multi-tasked data acquisition package operating on the ModComp IV/25 computers at both the 88''-Cyclotron and SuperHILAC. It currently can acquire data via three different channels: interrupt; serial DMA link; and remote slave units for histogram type data. Two additional acquisition paths, CAMAC (with programmable differential branch drivers) and MODACS (for multiple CPU linkages and control) are scheduled to be added in the near future. The package operates in a prioritized, time-available mode which permits it to dynamically adapt to microscopic data rate structures due to beam characteristics at different accelerators. Special hardware has been added to the graphics system to provide enhanced high-speed interactive capability. The program framework is also designed as a parasitic environment in which users may, in parallel, attach their own specialized and independent code

  1. Radioactivity decontamination of materials commonly used as surfaces in general-purpose radioisotope laboratories.

    Science.gov (United States)

    Leonardi, Natalia M; Tesán, Fiorella C; Zubillaga, Marcela B; Salgueiro, María J

    2014-12-01

    In accord with as-low-as-reasonably-achievable and good-manufacturing-practice concepts, the present study evaluated the efficiency of radioactivity decontamination of materials commonly used in laboratory surfaces and whether solvent spills on these materials affect the findings. Four materials were evaluated: stainless steel, a surface comprising one-third acrylic resin and two-thirds natural minerals, an epoxy cover, and vinyl-based multipurpose flooring. Radioactive material was eluted from a (99)Mo/(99m)Tc generator, and samples of the surfaces were control-contaminated with 37 MBq (100 μL) of this eluate. The same procedure was repeated with samples of surfaces previously treated with 4 solvents: methanol, methyl ethyl ketone, acetone, and ethanol. The wet radioactive contamination was allowed to dry and then was removed with cotton swabs soaked in soapy water. The effectiveness of decontamination was defined as the percentage of activity removed per cotton swab, and the efficacy of decontamination was defined as the total percentage of activity removed, which was obtained by summing the percentages of activity in all the swabs required to complete the decontamination. Decontamination using our protocol was most effective and most efficacious for stainless steel and multipurpose flooring. Moreover, treatment with common organic solvents seemed not to affect the decontamination of these surfaces. Decontamination of the other two materials was less efficient and was interfered with by the organic solvents; there was also great variability in the overall results obtained for these other two materials. In expanding our laboratory, it is possible for us to select those surface materials on which our decontamination protocol works best. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  2. Working memory training mostly engages general-purpose large-scale networks for learning.

    Science.gov (United States)

    Salmi, Juha; Nyberg, Lars; Laine, Matti

    2018-03-21

    The present meta-analytic study examined brain activation changes following working memory (WM) training, a form of cognitive training that has attracted considerable interest. Comparisons with perceptual-motor (PM) learning revealed that WM training engages domain-general large-scale networks for learning encompassing the dorsal attention and salience networks, sensory areas, and striatum. Also the dynamics of the training-induced brain activation changes within these networks showed a high overlap between WM and PM training. The distinguishing feature for WM training was the consistent modulation of the dorso- and ventrolateral prefrontal cortex (DLPFC/VLPFC) activity. The strongest candidate for mediating transfer to similar untrained WM tasks was the frontostriatal system, showing higher striatal and VLPFC activations, and lower DLPFC activations after training. Modulation of transfer-related areas occurred mostly with longer training periods. Overall, our findings place WM training effects into a general perception-action cycle, where some modulations may depend on the specific cognitive demands of a training task. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Interactive general-purpose function minimization for the analysis of neutron scattering data

    International Nuclear Information System (INIS)

    Abel, W.

    1981-12-01

    An on-line graphic display facility has been employed mainly for the peak analysis of time-of-flight spectra measured by inelastic scattering of thermal neutrons, but it is also useful for the analysis of spectra measured with triple-axis spectrometers and of diffraction patterns. The spectral lines may be fitted by the following analytical shape functions: (i) a Gaussian, (ii) a Lorentzian, or (iii) a convolution of a Lorentzian with a Gaussian, plus a background continuum. Data reduction or correction may be invoked optionally. For more general applications in the analysis of numerical data, the user can also define the analytical shape functions. Three different minimization methods are available which may be used alone or in combination. The parameters of the shape functions may be kept fixed or variable during the minimization steps, and the range of variation may be restricted. Global correlation coefficients, parameter errors and the chi-squared value are displayed to inform the user about the quality of the fit. A detailed description of the program operations is given. The programs are written in FORTRAN IV and use an IBM/2250-1 graphic display unit. (orig.) [de]
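
    A present-day equivalent of such a peak fit is easy to sketch with SciPy: a Gaussian line on a constant background fitted by nonlinear least squares, with parameter errors taken from the covariance matrix. The data below are synthetic, and a Lorentzian or a Gaussian-Lorentzian convolution (Voigt profile, scipy.special.voigt_profile) could replace the shape function.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a Gaussian peak plus constant background to a synthetic spectrum and
# report parameter uncertainties and a chi-squared per degree of freedom.
def gaussian_peak(x, area, centre, sigma, background):
    return (area * np.exp(-0.5 * ((x - centre) / sigma) ** 2)
            / (sigma * np.sqrt(2 * np.pi)) + background)

rng = np.random.default_rng(3)
channel = np.arange(0, 256, dtype=float)
truth = gaussian_peak(channel, 5000.0, 130.0, 6.0, 20.0)
counts = rng.poisson(truth).astype(float)

popt, pcov = curve_fit(gaussian_peak, channel, counts,
                       p0=(4000.0, 125.0, 5.0, 15.0))
perr = np.sqrt(np.diag(pcov))                      # parameter uncertainties
chi2 = np.sum((counts - gaussian_peak(channel, *popt)) ** 2
              / np.maximum(counts, 1.0))
print("fit:", popt)
print("errors:", perr)
print("chi-squared/dof:", chi2 / (channel.size - 4))
```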

  4. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 TFlop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per watt at a price of approximately 3.69 MFlop/s per dollar. The authors demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
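
    The Lennard-Jones pair interaction used in the benchmark can be written compactly; the sketch below is a plain O(N^2) NumPy kernel in reduced units and stands in only for the physics, not for SPaSM's cell-list, MPI-plus-Cell implementation.

```python
import numpy as np

# Minimal O(N^2) Lennard-Jones force/energy kernel in reduced units with a
# cutoff and minimum-image periodic boundaries.
def lj_forces(pos, box, rcut=2.5):
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r2 = (d ** 2).sum(axis=1)
        mask = r2 < rcut ** 2
        inv6 = 1.0 / r2[mask] ** 3
        energy += np.sum(4.0 * inv6 * (inv6 - 1.0))
        fij = (24.0 * inv6 * (2.0 * inv6 - 1.0) / r2[mask])[:, None] * d[mask]
        forces[i] -= fij.sum(axis=0)              # Newton's third law
        forces[i + 1:][mask] += fij
    return forces, energy

rng = np.random.default_rng(7)
box = 10.0
positions = rng.uniform(0.0, box, size=(200, 3))
f, e = lj_forces(positions, box)
print("potential energy:", e, " net force (should be ~0):", f.sum(axis=0))
```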

  5. General-Purpose Genotype or How Epigenetics Extend the Flexibility of a Genotype

    Directory of Open Access Journals (Sweden)

    Rachel Massicotte

    2012-01-01

    Full Text Available This project aims at investigating the link between individual epigenetic variability (not related to genetic variability and the variation of natural environmental conditions. We studied DNA methylation polymorphisms of individuals belonging to a single genetic lineage of the clonal diploid fish Chrosomus eos-neogaeus sampled in seven geographically distant lakes. In spite of a low number of informative fragments obtained from an MSAP analysis, individuals of a given lake are epigenetically similar, and methylation profiles allow the clustering of individuals in two distinct groups of populations among lakes. More importantly, we observed a significant pH variation that is consistent with the two epigenetic groups. It thus seems that the genotype studied has the potential to respond differentially via epigenetic modifications under variable environmental conditions, making epigenetic processes a relevant molecular mechanism contributing to phenotypic plasticity over variable environments in accordance with the GPG model.

  6. A Full Mesh ATCA-based General Purpose Data Processing Board (Pulsar II)

    Energy Technology Data Exchange (ETDEWEB)

    Ajuha, S. [Univ. of Sao Paulo (Brazil); et al.

    2017-06-29

    The Pulsar II is a custom ATCA full mesh enabled FPGA-based processor board which has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth interconnections. The design has been motivated by silicon-based tracking trigger needs for LHC experiments. In this technical memo we describe the Pulsar II hardware and its performance, such as the performance test results with full mesh backplanes from different vendors, how the backplane is used for the development of low-latency time-multiplexed data transfer schemes and how the inter-shelf and intra-shelf synchronization works.

  7. ABAQUS/EPGEN - a general purpose finite element code with emphasis on nonlinear applications

    International Nuclear Information System (INIS)

    Hibbitt, H.D.

    1984-01-01

    The article contains a summary description of ABAQUS, a finite element program designed for general use in nonlinear as well as linear structural problems, in the context of its application to nuclear structural integrity analysis. The article begins with a discussion of the design criteria and methods upon which the code development has been based. The engineering modelling capabilities, currently implemented in the program - elements, constitutive models and analysis procedures - are then described. Finally, a few demonstration examples are presented, to illustrate some of the program's features that are of interest in structural integrity analysis associated with nuclear power plants. (orig.)

  8. ABAQUS-EPGEN: a general-purpose finite element code. Volume 3. Example problems manual

    International Nuclear Information System (INIS)

    Hibbitt, H.D.; Karlsson, B.I.; Sorensen, E.P.

    1983-03-01

    This volume is the Example and Verification Problems Manual for ABAQUS/EPGEN. Companion volumes are the User's, Theory and Systems Manuals. This volume contains two major parts. The bulk of the manual (Sections 1-8) contains worked examples that are discussed in detail, while Appendix A documents a large set of basic verification cases that provide the fundamental check of the elements in the code. The examples in Sections 1-8 illustrate and verify significant aspects of the program's capability. Most of these problems provide verification, but they have also been chosen to allow discussion of modeling and analysis techniques. Appendix A contains basic verification cases. Each of these cases verifies one element in the program's library. The verification consists of applying all possible load or flux types (including thermal loading of stress elements), and all possible foundation or film/radiation conditions, and checking the resulting force and stress solutions or flux and temperature results. This manual provides program verification. All of the problems described in the manual are run and the results checked, for each release of the program, and these verification results are made available

  9. 78 FR 77662 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-12-24

    ...,500 square feet and would include a 360,000 square feet active bulk warehouse and a 5,500 square feet... parking lot (approximately 295,000 square feet) and new laydown area (approximately 240,000 square feet.... The laydown area would be constructed in an unimproved, irregularly shaped open lot that is...

  10. A Comprehensive Toolset for General-Purpose Private Computing and Outsourcing

    Science.gov (United States)

    2016-12-08

    contexts businesses are also hesitant to make their proprietary available to the cloud [1]. While in general sensitive data can be protected by the...data sources, gathering and maintaining the data needed , and completing and reviewing the collection of information. Send comments regarding this...project and scientific advances made towards each of the research thrusts throughout the project duration. 1 Project Objectives Cloud computing enables

  11. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
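
    The real-time figure of merit discussed above can be measured with a few lines of code: advance a toy network with a fixed time step and compare wall-clock time with simulated biological time. The sketch below uses a dense current-based integrate-and-fire network in NumPy; the sizes and parameters are arbitrary and it is not Auryn, Brian, NEST, or NEURON.

```python
import time
import numpy as np

# Crude measurement of the real-time factor: a dense current-based LIF network
# is advanced with a fixed 0.1 ms step and the wall-clock cost per second of
# biological time is reported.
n, dt, t_sim = 1000, 1e-4, 1.0                  # neurons, step (s), biological time (s)
tau, v_th, v_reset = 20e-3, 1.0, 0.0
w = np.random.default_rng(0).normal(0.0, 0.02, (n, n)) / np.sqrt(n)

v = np.zeros(n)
drive = 1.05                                    # constant suprathreshold input
start = time.perf_counter()
for _ in range(int(t_sim / dt)):
    spiked = (v >= v_th).astype(float)
    v[v >= v_th] = v_reset
    v += dt / tau * (drive - v) + w @ spiked    # leaky integration + recurrent kicks
wall = time.perf_counter() - start

print(f"wall-clock {wall:.2f} s for {t_sim:.1f} s of biological time "
      f"-> {t_sim / wall:.2f}x real time")
```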

  12. General-Purpose Data Acquisition Cards Based on FPGAs and High Speed Serial Protocols

    OpenAIRE

    Giannuzzi, Fabio

    2016-01-01

    This thesis exhibits the results of my PhD Apprenticeship Program, carried out at the “Marposs S.p.a.” firm, in the electronic research division, and at the Department of Physics and Astronomy of the Bologna University, in the INFN's electronics laboratories of the ATLAS group. During these three years of research, I worked on the development and realization of electronic boards dedicated to flexible data acquisition, designed to be applied in several contexts, that need to share high per...

  13. A NEW APROACH OF CONCEPTUAL FRAMEWORK FOR GENERAL PURPOSE FINANCIAL REPORTING BY PUBLIC SECTOR ENTITIES

    Directory of Open Access Journals (Sweden)

    Nistor Cristina

    2011-12-01

    Full Text Available The importance of accounting in the modern economy is obvious, which is why bodies of the European Union and elsewhere increasingly deal with the organization and functioning of accounting as a fundamental component of business (Nistor C., 2009). The mission of the International Federation of Accountants (IFAC) is to serve the public interest, strengthen the worldwide accountancy profession and contribute to the development of strong international economies by initiating and encouraging high-quality professional standards, promoting the convergence of these international standards, and discussing issues of public interest for which international experience is highly relevant (IFAC, 2011). Currently, the concepts related to financial reporting in the public sector are developed in the IPSAS references. Many of today's IPSAS are based on international accounting standards (IAS/IFRS), to the extent that these are relevant to the requirements of the public sector. Today's IPSAS are therefore based on the concepts and definitions of the IASB's conceptual framework, with changes where necessary for the public sector's specific approach. This study presents the draft statement under discussion by the leadership of IFAC in collaboration with other organizations and groups that develop financial reporting requirements for the public sector, and then highlights the importance and the degree of acceptance of the project as reflected in the comments received. By combining qualitative and quantitative research, it seeks to demonstrate the necessity and usefulness of a common conceptual framework for international accounting standards (in this case for the public sector), starting from their emergence and covering the bodies involved in their foundation, the content of the standards, and the experience of different countries. The results have direct implications for the Romanian public accounting system, given that implementation of and reporting against the international references is a current goal. The study is primarily addressed to graduate and doctoral students, professors and researchers working in public sector accounting. It also aims at presenting the degree of acceptance of the topic under discussion by the IPSASB, and is addressed to all those interested in the current development of International Public Sector Accounting.

  14. A NEW APROACH OF CONCEPTUAL FRAMEWORK FOR GENERAL PURPOSE FINANCIAL REPORTING BY PUBLIC SECTOR ENTITIES

    OpenAIRE

    Nistor Cristina

    2011-01-01

    The importance of accounting in the modern economy is obvious. That is more elevated bodies of the European Union and elsewhere dealing with the organization and functioning of accounting as a fundamental component of business (Nistor C., 2009). The mission of the International Federation of Accountants (IFAC) is to serve the public interest, strengthen the worldwide accountancy profession and contribute to the development of strong international economies by initiating and encouraging the pr...

  15. Preparing General Purpose Forces in the United States and British Armies for Counterinsurgent Operations

    Science.gov (United States)

    2010-12-10

    Operations In Iraq: Planning, Combat, And Occupation,” Thomas Ricks’ Fiasco, and reports by Army historian Major Isaiah Wilson, and former CENTCOM J-4...established Multi-National Forces-Iraq, and Lieutenant General Thomas Metz, commander of the Army’s III Corps, assumed the mantle of Multi-National Corps...Donald P. Wright and Colonel Timothy R. Reese or Thomas Ricks’ Fiasco among other books. 18Dr. Carter Malkasian, “Counterinsurgency in Iraq: May 2003

  16. Development of a general-purpose mobile robot for use in nuclear power plants

    International Nuclear Information System (INIS)

    Martinez, A.; Yague, M.A.; Linares, F.

    1993-01-01

    In recent years, the Space Division of CONSTRUCCIONES AERONAUTICAS (CASA) and EQUIPOS NUCLEARES (ENSA) have participated in several national and international robotics programs in the space and nuclear areas, respectively. In mid-1992, they decided to jointly undertake the development of a mobile inspection and maintenance robot for nuclear power plants. The success of such a multidisciplinary project was ensured by the complementary capabilities of the two companies and by their previous developments. Work began on the feasibility study and specifications, for which technical meetings were held with personnel from the Medical and Health Physics Association of the utilities (AMYS) and several nuclear power plants. The result of these conversations was a preliminary system design along with the specifications with which the system must comply. With these results, a report and job plan were prepared for the construction of two prototypes and submitted to the INI (National Institute of Industry, shareholder of both CASA and ENSA), which decided to finance this second phase of development, charging it to the Group's Research Development Fund.

  17. The Spiral Discovery Network as an Automated General-Purpose Optimization Tool

    Directory of Open Access Journals (Sweden)

    Adam B. Csapo

    2018-01-01

    Full Text Available The Spiral Discovery Method (SDM was originally proposed as a cognitive artifact for dealing with black-box models that are dependent on multiple inputs with nonlinear and/or multiplicative interaction effects. Besides directly helping to identify functional patterns in such systems, SDM also simplifies their control through its characteristic spiral structure. In this paper, a neural network-based formulation of SDM is proposed together with a set of automatic update rules that makes it suitable for both semiautomated and automated forms of optimization. The behavior of the generalized SDM model, referred to as the Spiral Discovery Network (SDN, and its applicability to nondifferentiable nonconvex optimization problems are elucidated through simulation. Based on the simulation, the case is made that its applicability would be worth investigating in all areas where the default approach of gradient-based backpropagation is used today.

  18. Plasduino: An inexpensive, general-purpose data acquisition framework for educational experiments

    International Nuclear Information System (INIS)

    Baldini, L.

    2014-01-01

    Based on the Arduino development platform, plasduino is an open source data acquisition framework specifically designed for educational physics experiments. The source code, schematics and documentation are in the public domain under a GPL license and the system, streamlined for low cost and ease of use, can be replicated on the scale of a typical didactic lab with minimal effort. We describe the basic architecture of the system and illustrate its potential with some real-life examples.
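
    On the host side, reading such an Arduino-based acquisition stream typically amounts to parsing lines from a serial port and logging them. The sketch below uses the pyserial package; the port name, baud rate, and "time_ms,adc_value" line format are assumptions for illustration, not plasduino's actual wire protocol.

```python
import csv
import serial  # pyserial

# Host-side sketch: read comma-separated samples from an Arduino-style serial
# stream and log them to CSV. Port, baud rate and line format are assumptions.
PORT, BAUD = "/dev/ttyACM0", 115200

with serial.Serial(PORT, BAUD, timeout=2) as link, \
        open("run.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["time_ms", "adc_value"])
    for _ in range(1000):                      # fixed number of samples
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                           # timeout or empty line
        try:
            t_ms, adc = line.split(",")
            writer.writerow([int(t_ms), int(adc)])
        except ValueError:
            continue                           # skip malformed lines
```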

  19. A Full Mesh ATCA-based General Purpose Data Processing Board (Pulsar II)

    CERN Document Server

    Ajuha, S; Costa de Paiva, Thiago; Das, Souvik; Eusebi, Ricardo; Finotti Ferreira, Vitor; Hahn, Kristian; Hu, Zhen; Jindariani, Sergo; Konigsberg, Jacobo; Liu, Tiehui Ted; Low, Jia Fu; Okumura, Yasuyuki; Olsen, Jamieson; Arruda Ramalho, Lucas; Rossin, Roberto; Ristori, Luciano; Akira Shinoda, Ailton; Tran, Nhan; Trovato, Marco; Ulmer, Keith; Vaz, Mario; Wen, Xianshan; Wu, Jin-Yuan; Xu, Zijun; Yin, Han; Zorzetti, Silvia

    2017-01-01

    The Pulsar II is a custom ATCA full mesh enabled FPGA-based processor board which has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth interconnections. The design has been motivated by silicon-based tracking trigger needs for LHC experiments. In this technical memo we describe the Pulsar II hardware and its performance, such as the performance test results with full mesh backplanes from different vendors, how the backplane is used for the development of low-latency time-multiplexed data transfer schemes and how the inter-shelf and intra-shelf synchronization works.

  20. 41 CFR 60-2.10 - General purpose and contents of affirmative action programs.

    Science.gov (United States)

    2010-07-01

    ... central premise underlying affirmative action is that, absent discrimination, over time a contractor's workforce, generally, will reflect the gender, racial and ethnic profile of the labor pools from which the... include action-oriented programs. If women and minorities are not being employed at a rate to be expected...

  1. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions for Poiseuille flow, Couette flow, and flow in a driven cavity.
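
    One of the benchmarks mentioned, fully developed plane Poiseuille flow, reduces to a one-dimensional diffusion equation that a finite-volume discretization solves almost exactly. The sketch below is an independent illustration of that comparison in NumPy, not GFSSP's solver; the geometry and fluid properties are arbitrary.

```python
import numpy as np

# One-dimensional finite-volume solve of fully developed plane Poiseuille flow
# (mu * d2u/dy2 = dp/dx, no-slip walls), compared with the analytic parabola.
mu, dpdx, H, n = 1.0e-3, -1.0, 0.02, 40        # viscosity, pressure gradient, gap, cells
dy = H / n
y = (np.arange(n) + 0.5) * dy                  # cell-centre coordinates

# Assemble the standard three-point diffusion stencil with Dirichlet walls.
A = np.zeros((n, n))
b = np.full(n, dpdx * dy / mu)                 # integrated source per cell
for i in range(n):
    aw = 1.0 / dy if i > 0 else 2.0 / dy       # half-cell distance to the wall
    ae = 1.0 / dy if i < n - 1 else 2.0 / dy
    A[i, i] = -(aw + ae)
    if i > 0:
        A[i, i - 1] = aw
    if i < n - 1:
        A[i, i + 1] = ae
u = np.linalg.solve(A, b)

u_exact = (-dpdx) / (2.0 * mu) * y * (H - y)
print("max relative error:", np.max(np.abs(u - u_exact)) / u_exact.max())
```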

  2. Cross-Cultural Competency in the General Purpose Force: Training Strategies and Implications for Future Operations

    Science.gov (United States)

    2013-04-09

    the Behavioral and Social Sciences Research Report 1951, October 2011. 29 Zimbardo, Phillip. The Lucifer Effect : Understanding How...12. 13 Phillip Zimbardo, The Lucifer Effect : Understanding How Good People Turn Evil, (New York, NY: Random House, Inc., 2007), p. 353-355. Dr...culture and language will enhance the effectiveness of the GPF. Using the same body of research, this paper will delineate the key However, the

  3. Design of a general purpose data collection module for the NuTel telescope

    International Nuclear Information System (INIS)

    Velikzhanin, Y.S.; Chi, Y.; Hou, W.S.; Hsu, C.C.; Shiu, J.G.; Ueno, K.; Wang, M.Z.; Yeh, P.

    2005-01-01

    We have developed a Data Collection Module (DCM) to digitize, store and select data from the NuTel telescope, which observes Cherenkov photons from near-horizontal air showers. Multi-anode photomultiplier tubes (MAPMTs) are used as the photon-sensitive devices. The DCM processes 32 input signals from the charge-sensitive preamplifiers located close to the MAPMT. The module design uses 40-MHz 10-bit pipeline ADCs and medium-size FPGAs. A programmable gain/attenuation control (x0.5 to x2) is applied to each channel before the ADC, which eases operation of a multi-channel MAPMT-based system because the MAPMT gain varies from channel to channel by as much as a factor of three. The DCM has a flexible on-board trigger implemented in the FPGA firmware. The system design is based on 32-bit 33-MHz cPCI. Thirty-two DCMs housed in two crates process the signals from the two telescopes of 512 channels each, which look in the same direction for coincidence.

  4. TOUGH2: A general-purpose numerical simulator for multiphase nonisothermal flows

    Energy Technology Data Exchange (ETDEWEB)

    Pruess, K. [Lawrence Berkeley Lab., CA (United States)

    1991-06-01

    Numerical simulators for multiphase fluid and heat flows in permeable media have been under development at Lawrence Berkeley Laboratory for more than 10 yr. Real geofluids contain noncondensible gases and dissolved solids in addition to water, and the desire to model such "compositional" systems led to the development of a flexible multicomponent, multiphase simulation architecture known as MULKOM. The design of MULKOM was based on the recognition that the mass- and energy-balance equations for multiphase fluid and heat flows in multicomponent systems have the same mathematical form, regardless of the number and nature of fluid components and phases present. Application of MULKOM to different fluid mixtures, such as water and air, or water, oil, and gas, is possible by means of appropriate "equation-of-state" (EOS) modules, which provide all thermophysical and transport parameters of the fluid mixture and the permeable medium as a function of a suitable set of primary thermodynamic variables. Investigations of thermal and hydrologic effects from emplacement of heat-generating nuclear wastes into partially water-saturated formations prompted the development and release of a specialized version of MULKOM for nonisothermal flow of water and air, named TOUGH. TOUGH is an acronym for "transport of unsaturated groundwater and heat" and is also an allusion to the tuff formations at Yucca Mountain, Nevada. The TOUGH2 code is intended to supersede TOUGH. It offers all the capabilities of TOUGH and includes a considerably more general subset of MULKOM modules with added capabilities. The paper briefly describes the simulation methodology and user features.

  5. A new General Purpose Decontamination System for Chemical and Biological Warfare and Terrorism Agents

    National Research Council Canada - National Science Library

    Khetan, Sushil; Banerjee, Deboshri; Chanda, Arani; Collins, Terry

    2003-01-01

    Partial contents: Fe-TAML Activator of Peroxide, Activators of Hydrogen Peroxide, Biological Warfare Agents, Bacterial Endospore, Bacterial Spore Deactivation, Modeling Studies, Deactivation Studies with Bacillus spores...

  6. Ionic Liquid-Liquid Chromatography: A New General Purpose Separation Methodology

    OpenAIRE

    Brown, Leslie; Earle, Martyn J; Gilea, Manuela; Plechkova, Natalia V; Seddon, Kenneth R

    2017-01-01

    Ionic liquids can form biphasic solvent systems with many organic solvents and water, and these solvent systems can be used in liquid-liquid separations and countercurrent chromatography. The wide range of ionic liquids that can be synthesised, with specifically tailored properties, represents a new philosophy for the separation of organic, inorganic and bio-based materials. A customised countercurrent chromatograph has been designed and constructed specifically to allow the more viscous char...

  7. A Full Mesh ATCA-based General Purpose Data Processing Board: Pulsar II

    CERN Document Server

    Olsen, J; Okumura, Y

    2014-01-01

    High luminosity conditions at the LHC pose many unique challenges for potential silicon-based track trigger systems. Among those challenges is data formatting, where hits from thousands of silicon modules must first be shared and organized into overlapping trigger towers. Other challenges exist for Level-1 track triggers, where many parallel data paths may be used for high-speed time-multiplexed data transfers. Communication between processing nodes requires high bandwidth, low latency, and flexible real-time data sharing, for which a full mesh backplane is a natural fit. A custom full-mesh-enabled ATCA board called the Pulsar II has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth board-to-board communication channels while keeping the design as simple as possible.

  8. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
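
    The core pulse-compression step referred to above is a matched filter evaluated in the frequency domain. The sketch below shows that step on the CPU with NumPy and a synthetic linear-FM pulse; it is only a stand-in for the paper's implementation, which replaces the FFT calls with the CUDA fast Fourier transform library, and all waveform parameters are illustrative assumptions.

    # Minimal CPU sketch of frequency-domain matched-filter pulse compression
    # for a linear FM (chirp) waveform. The GPU implementation described in the
    # abstract would perform the same FFTs with cuFFT; this NumPy version only
    # illustrates the signal-processing steps, not the CUDA kernels.
    import numpy as np

    fs = 10e6          # sample rate, Hz (illustrative)
    pulse_len = 20e-6  # pulse duration, s
    bandwidth = 2e6    # chirp bandwidth, Hz

    t = np.arange(0, pulse_len, 1 / fs)
    chirp = np.exp(1j * np.pi * (bandwidth / pulse_len) * t**2)   # LFM pulse

    # Fake received signal: a delayed, attenuated echo buried in noise.
    n_samples = 4096
    delay = 1000
    rx = 0.05 * (np.random.randn(n_samples) + 1j * np.random.randn(n_samples))
    rx[delay:delay + len(chirp)] += 0.5 * chirp

    # Matched filtering in the frequency domain: multiply by the conjugate of
    # the reference waveform spectrum and transform back.
    nfft = n_samples + len(chirp) - 1
    rx_f = np.fft.fft(rx, nfft)
    ref_f = np.fft.fft(chirp, nfft)
    compressed = np.fft.ifft(rx_f * np.conj(ref_f))

    print("peak at sample", int(np.argmax(np.abs(compressed))))  # ~ delay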

  9. A Framework for a General Purpose Intelligent Control System for Particle Accelerators. Phase II Final Report

    International Nuclear Information System (INIS)

    Westervelt, Robert; Klein, William; Kroupa, Michael; Olsson, Eric; Rothrock, Rick

    1999-01-01

    Vista Control Systems, Inc. has developed a portable system for intelligent accelerator control. The design is general in scope and is thus configurable to a wide range of accelerator facilities and control problems. The control system employs a multi-layer organization in which knowledge-based decision making is used to dynamically configure lower level optimization and control algorithms

  10. Modelling the distribution of 222Rn concentration in a multi level, general purpose building

    International Nuclear Information System (INIS)

    Toro, Laszlo; Noditi, Mihaela; Gheorghe, Raluca; Gheorghe, Dan

    2008-01-01

    The importance of 222 Rn (radon) in indoor air for the exposure from natural sources is relatively well documented. About 30% of the individual effective dose from natural sources comes from the inhalation of 222 Rn and its short-lived daughters. Under unfavorable conditions set by the soil porosity and the existence of upward air movement in the soil, unusually high radon concentrations can occur in houses even on soil with a 'normal' 226 Ra content. Some construction solutions (high indoor spaces) can generate significant indoor-outdoor negative pressure differences and consequently upward air currents (stack effect), which facilitate the entry of radon into the building and multiply the possibilities for its migration within the building. The difficulty of predicting radon migration in the soil-building system increases the importance of mathematical modelling of radon behavior - soil emission, infiltration and migration within the building - in areas with high radon potential. For simple one-level buildings there are several models in the literature, but information on multilevel building models is relatively scarce. Two different approaches to describing the behavior of radon gas in large (mainly high) buildings have been analyzed. The first is a direct approach: computational fluid dynamics, solving the transport equations for the whole building (the domain of the solution of the transport and flow equations is delimited by the building envelope - the external walls), with the openings (internal and external) and ventilation defined by the boundary conditions. This approach is quite complex, since the equations must be solved numerically for a highly inhomogeneous medium, but it is based on the fundamental processes governing the transport and yields a concentration pattern in every part of the building. The second is a multi-zone approach, treating the building as interconnected 'zones' of constant concentration; the transfer between zones is realized through walls and openings, and ventilation and openings permit the movement of air (and radon) through the building. These openings are treated as linear transfers between zones and to the exterior. This approach is much simpler mathematically and involves a lower computational effort. The multi-zone approach has been used to analyze the radon behavior in a hypothetical building with 12 above-grade and three underground levels. Numerical results of radon concentration will be presented for different external conditions. (author)
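
    A minimal sketch of the multi-zone approach described above, under assumed values: each zone is treated as well mixed, with linear inter-zone transfer, exhaust to the outdoors and radioactive decay, so that V_i dC_i/dt = S_i + sum_j q_ji C_j - (sum_j q_ij + q_exh,i + lambda V_i) C_i. The three-zone geometry, flows and radon entry rate below are hypothetical illustration values, not those of the 15-level building studied in the paper.

    # Toy three-zone radon balance (basement plus two floors), solved as a
    # system of linear ODEs. All volumes, flows and entry rates are assumed.
    import numpy as np
    from scipy.integrate import solve_ivp

    LAMBDA_RN = 2.1e-6          # radon decay constant, 1/s

    V = np.array([200.0, 600.0, 600.0])   # zone volumes, m^3
    S = np.array([5.0, 0.0, 0.0])         # radon entry rate, Bq/s (soil -> basement)
    # q[i, j]: airflow from zone i to zone j, m^3/s (stack effect drives air upward)
    q = np.array([[0.0, 0.02, 0.0],
                  [0.0, 0.0, 0.02],
                  [0.0, 0.0, 0.0]])
    q_exh = np.array([0.005, 0.01, 0.03])  # exhaust to outdoors, m^3/s

    def dcdt(_t, c):
        inflow = q.T @ c                        # radon carried in from other zones
        outflow = (q.sum(axis=1) + q_exh) * c   # carried to other zones and outdoors
        return (S + inflow - outflow) / V - LAMBDA_RN * c

    sol = solve_ivp(dcdt, (0.0, 48 * 3600.0), y0=np.zeros(3), max_step=600.0)
    print("quasi-steady concentrations (Bq/m^3):", sol.y[:, -1].round(1))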

  11. Plasduino: An inexpensive, general-purpose data acquisition framework for educational experiments

    Energy Technology Data Exchange (ETDEWEB)

    Baldini, L. [Universita' di Pisa and INFN Sez. di Pisa, Pisa (Italy)

    2014-07-15

    Based on the Arduino development platform, plasduino is an open source data acquisition framework specifically designed for educational physics experiments. The source code, schematics and documentation are in the public domain under a GPL license and the system, streamlined for low cost and ease of use, can be replicated on the scale of a typical didactic lab with minimal effort. We describe the basic architecture of the system and illustrate its potential with some real-life examples.
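
    The abstract does not give plasduino's wire protocol, so the following host-side sketch only illustrates the generic pattern of logging samples from an Arduino-class board over a serial link with pyserial; the port name, baud rate and "timestamp_ms,adc_value" line format are assumptions, not the real plasduino interface.

    # Hypothetical host-side acquisition loop for an Arduino-based DAQ board.
    # Port, baud rate and line format are assumed for illustration only.
    import csv
    import serial  # pyserial

    PORT = "/dev/ttyACM0"   # assumed device node
    BAUD = 115200

    with serial.Serial(PORT, BAUD, timeout=1.0) as link, \
            open("run001.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["timestamp_ms", "adc_value"])
        for _ in range(1000):                     # collect 1000 samples
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue                          # timeout or empty line
            try:
                timestamp_ms, adc_value = line.split(",")
                writer.writerow([int(timestamp_ms), int(adc_value)])
            except ValueError:
                continue                          # skip malformed lines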

  12. The Transcriptome Analysis and Comparison Explorer--T-ACE: a platform-independent, graphical tool to process large RNAseq datasets of non-model organisms.

    Science.gov (United States)

    Philipp, E E R; Kraemer, L; Mountfort, D; Schilhabel, M; Schreiber, S; Rosenstiel, P

    2012-03-15

    Next generation sequencing (NGS) technologies allow a rapid and cost-effective compilation of large RNA sequence datasets in model and non-model organisms. However, the storage and analysis of transcriptome information from different NGS platforms is still a significant bottleneck, leading to a delay in data dissemination and subsequent biological understanding. Especially database interfaces with transcriptome analysis modules going beyond mere read counts are missing. Here, we present the Transcriptome Analysis and Comparison Explorer (T-ACE), a tool designed for the organization and analysis of large sequence datasets, and especially suited for transcriptome projects of non-model organisms with little or no a priori sequence information. T-ACE offers a TCL-based interface, which accesses a PostgreSQL database via a php-script. Within T-ACE, information belonging to single sequences or contigs, such as annotation or read coverage, is linked to the respective sequence and immediately accessible. Sequences and assigned information can be searched via keyword- or BLAST-search. Additionally, T-ACE provides within and between transcriptome analysis modules on the level of expression, GO terms, KEGG pathways and protein domains. Results are visualized and can be easily exported for external analysis. We developed T-ACE for laboratory environments, which have only a limited amount of bioinformatics support, and for collaborative projects in which different partners work on the same dataset from different locations or platforms (Windows/Linux/MacOS). For laboratories with some experience in bioinformatics and programming, the low complexity of the database structure and open-source code provides a framework that can be customized according to the different needs of the user and transcriptome project.

  13. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    Science.gov (United States)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
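
    The Voigt line shape at the heart of such line-by-line codes can be written in terms of the complex probability function, K(x, y) = Re[w(x + iy)], which SciPy exposes as scipy.special.wofz. The sketch below uses that standard evaluation route; it is not GARLIC's optimized two-grid algorithm, and the line parameters are illustrative.

    # Voigt profile via the complex probability function w(z):
    #   K(x, y) = Re[w(x + iy)],  x = sqrt(ln 2) (nu - nu0)/gamma_D,
    #   y = sqrt(ln 2) gamma_L/gamma_D.
    import numpy as np
    from scipy.special import wofz

    def voigt(nu, nu0, gamma_doppler, gamma_lorentz):
        """Area-normalized Voigt line shape on a wavenumber grid."""
        sqrt_ln2 = np.sqrt(np.log(2.0))
        x = sqrt_ln2 * (nu - nu0) / gamma_doppler
        y = sqrt_ln2 * gamma_lorentz / gamma_doppler
        k = wofz(x + 1j * y).real
        return sqrt_ln2 / (np.sqrt(np.pi) * gamma_doppler) * k

    nu = np.linspace(999.0, 1001.0, 2001)             # wavenumber grid, cm^-1
    profile = voigt(nu, 1000.0, 0.01, 0.05)           # illustrative half-widths
    print("normalization ~", profile.sum() * (nu[1] - nu[0]))  # close to 1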

  14. GARLIC — A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    International Nuclear Information System (INIS)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-01-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code — GARLIC — is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus. - Highlights: • High resolution infrared-microwave radiative transfer model. • Discussion of algorithmic and computational aspects. • Jacobians by automatic/algorithmic differentiation. • Performance evaluation by intercomparisons, verification, validation

  15. SHRIF, a General-Purpose System for Heuristic Retrieval of Information and Facts, Applied to Medical Knowledge Processing.

    Science.gov (United States)

    Findler, Nicholas V.; And Others

    1992-01-01

    Describes SHRIF, a System for Heuristic Retrieval of Information and Facts, and the medical knowledge base that was used in its development. Highlights include design decisions; the user-machine interface, including the language processor; and the organization of the knowledge base in an artificial intelligence (AI) project like this one. (57…

  16. The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD

    OpenAIRE

    Cox, Mitchell Arij; Reed, Robert; Mellado Garcia, Bruce Rafael

    2014-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a clus...

  17. The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD

    Science.gov (United States)

    Cox, M. A.; Reed, R.; Mellado, B.

    2015-01-01

    After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented.

  18. Cafe Variome : General-Purpose Software for Making Genotype-Phenotype Data Discoverable in Restricted or Open Access Contexts

    NARCIS (Netherlands)

    Lancaster, Owen; Beck, Tim; Atlan, David; Swertz, Morris; Thangavelu, Dhiwagaran; Veal, Colin; Dalgleish, Raymond; Brookes, Anthony J.

    2015-01-01

    Biomedical data sharing is desirable, but problematic. Data "discovery" approaches-which establish the existence rather than the substance of data-precisely connect data owners with data seekers, and thereby promote data sharing. Cafe Variome (http://www.cafevariome.org) was therefore designed to

  19. Online retrieval of patient information by asynchronous communication between general purpose computer and stand-alone personal computer

    International Nuclear Information System (INIS)

    Tsutsumi, Reiko; Takahashi, Kazuei; Sato, Toshiko; Komatani, Akio; Yamaguchi, Koichi

    1988-01-01

    Asynchronous communication was established between a host computer (FACOM M-340) and a personal computer (OLIBETTIE S-2250) to obtain the patient information required for RIA test registration. The retrieval system consists of keyboard input of a six-digit numeric code (the patient's ID) and a real-time reply containing six parameters for the patient: the patient's name, sex, date of birth (including area), department, and out- or inpatient status. By linking this program to the RIA registration program for an individual patient, the operator can then input the name of the requested RIA test. This simple retrieval program created a useful data network between different types of host and stand-alone personal computers and enabled accurate, labor-saving registration for RIA tests. (author)

  20. A General-Purpose Spatial Survey Design for Collaborative Science and Monitoring of Global Environmental Change: The Global Grid

    Directory of Open Access Journals (Sweden)

    David M. Theobald

    2016-09-01

    Recent guidance on environmental modeling and global land-cover validation stresses the need for a probability-based design. Additionally, spatial balance has also been recommended as it ensures more efficient sampling, which is particularly relevant for understanding land use change. In this paper I describe a global sample design and database called the Global Grid (GG) that has both of these statistical characteristics, as well as being flexible, multi-scale, and globally comprehensive. The GG is intended to facilitate collaborative science and monitoring of land changes among local, regional, and national groups of scientists and citizens, and it is provided in a variety of open source formats to promote collaborative and citizen science. Since the GG sample grid is provided at multiple scales and is globally comprehensive, it provides a universal, readily available sample. It also supports uneven probability sample designs through filtering sample locations by user-defined strata. The GG is not appropriate for use at locations above ±85° because the shape and topological distortion of quadrants becomes extreme near the poles. Additionally, the file sizes of the GG datasets are very large at fine scale (resolution ~600 m × 600 m) and require a 64-bit integer representation.

  1. TAKING THE LONG VIEW TOWARDS THE LONG WAR. Equipping General Purpose Force Leaders with Soft Power Tools for Irregular Warfare

    Science.gov (United States)

    2009-02-12

    equivalent to usual printing or typescript. Can read either representations of familiar formulaic verbal exchanges or simple language containing only... read simple, authentic written material in a form equivalent to usual printing or typescript on subjects within a familiar context. Able to read with

  2. SafeNet: a methodology for integrating general-purpose unsafe devices in safe-robot rehabilitation systems.

    Science.gov (United States)

    Vicentini, Federico; Pedrocchi, Nicola; Malosio, Matteo; Molinari Tosatti, Lorenzo

    2014-09-01

    Robot-assisted neurorehabilitation often involves networked systems of sensors ("sensory rooms") and powerful devices in physical interaction with weak users. Safety is unquestionably a primary concern. Some lightweight robot platforms and devices designed on purpose include safety properties using redundant sensors or intrinsic safety design (e.g. compliance and backdrivability, limited exchange of energy). Nonetheless, the entire "sensory room" is required to be fail-safe and safely monitored as a system at large. Yet the sensor capabilities and control algorithms used in functional therapies require, in general, frequent updates or re-configurations, making a safety-grade release of such devices hardly sustainable in cost-effectiveness and development time. As a result, promising integrated platforms for human-in-the-loop therapies have not found clinical application and manufacturing support, because global fail-safe properties could not be maintained. In the general context of cross-machinery safety standards, the paper presents a methodology called SafeNet for helping to extend the safety rating of Human Robot Interaction (HRI) systems that use unsafe components, including sensors and controllers. SafeNet considers the robotic system as a device at large and applies the principles of functional safety (as in ISO 13849-1) through a set of architectural procedures and implementation rules. The capability of monitoring a network of unsafe devices through redundant computational nodes allows the use of any custom sensors and algorithms, usually planned and assembled at therapy planning time rather than at platform design time. A case study is presented with an actual implementation of the proposed methodology: a specific architectural solution is applied to an example of robot-assisted upper-limb rehabilitation with online motion tracking. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. A general-purpose framework to simulate musculoskeletal system of human body: using a motion tracking approach.

    Science.gov (United States)

    Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad

    2016-02-01

    Computation of the muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. The problem is underdetermined, so a proper optimization is required to calculate the muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. To this end, the forward-dynamics equations of the multibody system representing the skeletal system of the human body model are derived using the Lagrange-Euler formulation. Next, muscle contraction dynamics is added to this model, and the forward dynamics of an arbitrary musculoskeletal system is obtained. For optimization purposes, the obtained model is used in a computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sport exercise, the biceps curl, is simulated using this algorithm, and the validity of the obtained results is evaluated via EMG signals.
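
    The muscle-redundancy idea can be illustrated with a much simpler stand-in than the computed muscle control algorithm used in the paper: a static optimization that distributes a required joint torque over several muscles by minimizing the sum of squared activations. The moment arms and maximum isometric forces below are assumed values for a toy single-joint (elbow) example.

    # Toy static optimization for muscle redundancy: find activations a_i in
    # [0, 1] minimizing sum(a_i^2) subject to sum_i a_i * F_max_i * r_i = tau.
    # This is a simplified stand-in, not the paper's computed muscle control.
    import numpy as np
    from scipy.optimize import minimize

    f_max = np.array([620.0, 990.0, 430.0])   # max isometric forces, N (assumed)
    r = np.array([0.045, 0.035, 0.020])       # elbow flexion moment arms, m (assumed)
    tau_required = 25.0                       # joint torque to reproduce, N m

    def effort(a):
        return np.sum(a**2)                   # minimize squared activations

    constraints = ({"type": "eq",
                    "fun": lambda a: np.dot(a * f_max, r) - tau_required},)
    bounds = [(0.0, 1.0)] * len(f_max)

    res = minimize(effort, x0=np.full(3, 0.3), bounds=bounds,
                   constraints=constraints, method="SLSQP")
    activations = res.x
    print("activations:", activations.round(3))
    print("muscle forces (N):", (activations * f_max).round(1))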

  4. Area-based cell colony surviving fraction evaluation: A novel fully automatic approach using general-purpose acquisition hardware.

    Science.gov (United States)

    Militello, Carmelo; Rundo, Leonardo; Conti, Vincenzo; Minafra, Luigi; Cammarata, Francesco Paolo; Mauri, Giancarlo; Gilardi, Maria Carla; Porcino, Nunziatina

    2017-10-01

    The current methodology for Surviving Fraction (SF) measurement in the clonogenic assay, a technique to study the anti-proliferative effect of treatments on cell cultures, involves manual counting of cell colony forming units. This procedure is operator-dependent and error-prone. Moreover, identification of the exact colony number is often not feasible because the high growth rate leads to adjacent colonies merging. As a matter of fact, conventional assessment does not deal with the colony size, which is generally correlated with the delivered radiation dose or the administered cytotoxic agent. Considering that the Area Covered by Colony (ACC) is proportional to the colony number and size as well as to the growth rate, we propose a novel, fully automatic approach that exploits the Circle Hough Transform to detect the wells in the plate and local adaptive thresholding to calculate the percentage of ACC for SF quantification. This measurement relies solely on the coverage percentage and does not consider the colony number, preventing inconsistencies due to intra- and inter-operator variability. To evaluate the accuracy of the proposed approach, we compared the SFs obtained by our automatic ACC-based method against the conventional counting procedure. The achieved results (r = 0.9791 and r = 0.9682 on MCF7 and MCF10A cells, respectively) were highly correlated with the measurements of the traditional approach based on colony number alone. The proposed computer-assisted methodology could be integrated into laboratory practice as an expert system for SF evaluation in clonogenic assays. Copyright © 2017 Elsevier Ltd. All rights reserved.
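
    A sketch of the two image-processing steps named above, the Circle Hough Transform for well detection and local adaptive thresholding for the Area Covered by Colony (ACC), using OpenCV on a synthetic plate image; all parameter values are illustrative guesses, not the settings used in the paper.

    # ACC-style measurement: detect wells with the Circle Hough Transform, then
    # estimate the colony-covered fraction inside each well with local adaptive
    # thresholding. A synthetic image stands in for a real plate photograph.
    import cv2
    import numpy as np

    # Synthetic plate: dark background, one bright well, a few dark "colonies".
    plate = np.full((400, 400), 100, dtype=np.uint8)
    cv2.circle(plate, (200, 200), 120, 220, thickness=-1)
    for cx, cy in [(160, 180), (230, 220), (200, 260)]:
        cv2.circle(plate, (cx, cy), 12, 50, thickness=-1)

    blurred = cv2.medianBlur(plate, 5)
    wells = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1.5, 200,
                             param1=100, param2=40, minRadius=80, maxRadius=140)

    # Colonies appear darker than their local surroundings in this setup.
    binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 51, 5)

    if wells is not None:
        for cx, cy, radius in np.round(wells[0]).astype(int):
            mask = np.zeros_like(binary)
            cv2.circle(mask, (cx, cy), radius, 255, thickness=-1)
            well_area = cv2.countNonZero(mask)
            covered = cv2.countNonZero(cv2.bitwise_and(binary, mask))
            print(f"well at ({cx}, {cy}): ACC = {100.0 * covered / well_area:.1f}%")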

  5. Comparison of General Purpose Heat Source testing with the ANSI N43.6-1977 (R 1989) sealed source standard

    International Nuclear Information System (INIS)

    Grigsby, C.O.

    1998-01-01

    This analysis provides a comparison of the testing of Radioisotope Thermoelectric Generators (RTGs) and RTG components with the testing requirements of ANSI N43.6-1977 (R1989), 'Sealed Radioactive Sources, Categorization'. The purpose of this comparison is to demonstrate that the RTGs meet or exceed the requirements of the ANSI standard, and thus can be excluded from the radioactive inventory of the Chemistry and Metallurgy Research (CMR) building in Los Alamos per Attachment 1 of DOE STD 1027-92. The approach used in this analysis is as follows: (1) describe the ANSI sealed source classification methodology; (2) develop sealed source performance requirements for the RTG and/or RTG components based on criteria from the accident analysis for CMR; (3) compare the existing RTG or RTG component test data to the CMR requirements; and (4) determine the appropriate ANSI classification for the RTG and/or RTG components based on CMR performance requirements. The CMR requirements for treating RTGs as sealed sources are derived from the radiotoxicity of the isotope ( 238 Pu) and the amount (13 kg) of radioactive material contained in the RTG. The accident analysis for the CMR BIO identifies the bounding accidents as wing-wide fire, explosion and earthquake. These accident scenarios set the requirements for RTGs or RTG components stored within the CMR

  6. Using Low-Level Architectural Features for Configuration InfoSec in a General-Purpose Self-Configurable System

    OpenAIRE

    Nicholas J. Macias; Peter M. Athanas

    2009-01-01

    Unique characteristics of biological systems are described, and similarities are made to certain computing architectures. The security challenges posed by these characteristics are discussed. A method of securely isolating portions of a design using introspective capabilities of a fine-grain self-configurable device is presented. Experimental results are discussed, and plans for future work are given.

  7. Semiautomated Segmentation and Measurement of Cytoplasmic Vacuoles in a Neutrophil With General-Purpose Image Analysis Software.

    Science.gov (United States)

    Mizukami, Maki; Yamada, Misaki; Fukui, Sayaka; Fujimoto, Nao; Yoshida, Shigeru; Kaga, Sanae; Obata, Keiko; Jin, Shigeki; Miwa, Keiko; Masauzi, Nobuo

    2016-11-01

    Morphological observation of blood or marrow films is still described nonquantitatively. We developed a semiautomatic method for segmenting vacuoles from the cytoplasm using Photoshop (PS) and Image-J (IJ), called PS-IJ, and measured the relative entire cell area (rECA) and relative areas of vacuoles (rAV) in the cytoplasm of neutrophils with PS-IJ. Whole-blood samples were stored at 4°C with ethylenediaminetetraacetate under two different preservation conditions (P1 and P2). Color-tone intensity levels of the neutrophil images were semiautomatically compensated using PS, and the vacuole portions were then automatically segmented by IJ. The rAV and rECA were measured by counting pixels with IJ. To evaluate the accuracy of vacuole segmentation with PS-IJ, the rAV/rECA ratios calculated from the PS-IJ results were compared with those calculated by human eye and IJ (HE-IJ). The rECA and rAV in P1 were significantly enlarged and increased (P < 0.05, P < 0.05), but did not change significantly in P2 (P = 0.46, P = 0.21). The rAV/rECA ratios obtained by PS-IJ were significantly correlated (r = 0.90, P < 0.01) with those obtained by HE-IJ. The PS-IJ method can successfully segment vacuoles and measure the rAV and rECA, making it a useful tool for the quantitative description of morphological observations of blood and marrow films. © 2016 Wiley Periodicals, Inc.

  8. Safety-analysis report for packaging (SARP) general-purpose heat-source module 750-Watt shipping container

    International Nuclear Information System (INIS)

    Whitney, M.A.; Burgan, C.E.; Blauvelt, R.K.; Zocher, R.W.; Bronisz, S.E.

    1981-01-01

    The SARP includes discussions of structural integrity, thermal resistance, radiation shielding and radiological safety, nuclear criticality safety, and quality control. Extensive tests and evaluations were performed to show that the container will function effectively with respect to all required standards and when subjected to normal transportation conditions and the sequence of four hypothetical accident conditions (free drop, puncture, thermal, and water immersion). In addition, a steady state temperature profile and radiation profile were measured using two heat sources that very closely resemble the GPHS. This gave an excellent representation of the GPHS temperature and radiation profile. A nuclear criticality safety analysis determined that all safety requirements are met

  9. CRSEC: a general purpose Hauser--Feshbach code for the calculation of nuclear cross-sections and thermonuclear reaction rates

    International Nuclear Information System (INIS)

    Woosley, S.; Fowler, W.A.

    1977-09-01

    CRSEC is a FORTRAN IV computer code designed for the efficient calculation of average nuclear cross sections in situations where a statistical theory of nuclear reactions is applicable and where compound nucleus formation is the dominant reaction mechanism. The code generates cross sections accurate to roughly a factor of 2 for incident particle energies in the range of 10 keV to 10 MeV for most target nuclei from magnesium to bismuth. Exceptions usually involve reactions that enter the compound nucleus at such a low energy that fewer than 10 levels are present in the 'energy window of interest'. The incident particle must be a neutron, proton, or alpha particle, and only binary reactions resulting in the emission of a single n, p, α, or γ (cascade) are calculated. CRSEC is quite fast: a complete calculation of 12 different reactions over a grid of roughly 150 energy points, including the generation of Maxwellian-averaged rates, takes about 30 seconds of CDC7600 time. The semi-empirical parameterization of nuclear properties contained in CRSEC is also very general. Greater accuracy may be obtained, however, by furnishing specific low-lying excited states, level density parameterizations, and nuclear strength functions. A more general version of CRSEC, called CRSECI, is available that conserves isospin properly in all reactions and allows the user to specify a given degree of isospin mixing in the highly excited states of the compound nucleus. Besides the cross section as a function of center-of-mass energy, CRSEC also generates the Maxwell-Boltzmann averaged thermonuclear reaction rate and the temperature-dependent nuclear partition function for a grid of temperatures from 10 8 to 10 10 K. Sections of this report describe in greater detail the physics employed in CRSEC and how to use the code. 2 tables
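
    The Maxwell-Boltzmann average that such codes tabulate is <sigma v> = sqrt(8/(pi mu)) (kT)^(-3/2) Integral sigma(E) E exp(-E/kT) dE. The sketch below evaluates that integral numerically for a toy charged-particle cross section with a constant S-factor; the reaction parameters are illustrative assumptions and the result is not CRSEC output.

    # Numerical sketch of the Maxwellian-averaged reaction rate:
    #   <sigma*v> = sqrt(8/(pi*mu)) * (kT)^(-3/2) * Int sigma(E) E exp(-E/kT) dE
    # using a toy constant-S-factor cross section (illustrative only).
    import numpy as np

    K_BOLTZ = 8.617e-11            # MeV / K
    AMU_MEV = 931.494              # MeV / c^2
    C_CM_S = 2.998e10              # cm / s

    def sigma_toy(e_mev, s_factor_mev_b=1.0e-3, z1=1, z2=13, mu_amu=0.973):
        """Charged-particle cross section (barn) with a constant S-factor."""
        eta = 0.1575 * z1 * z2 * np.sqrt(mu_amu / e_mev)   # Sommerfeld parameter
        return s_factor_mev_b / e_mev * np.exp(-2.0 * np.pi * eta)

    def maxwellian_rate(temperature_k, mu_amu=0.973):
        """<sigma*v> in cm^3/s for the toy cross section."""
        kt = K_BOLTZ * temperature_k                       # MeV
        e = np.linspace(1e-3, 10.0, 20000)                 # MeV grid
        integrand = sigma_toy(e, mu_amu=mu_amu) * 1e-24 * e * np.exp(-e / kt)
        integral = np.sum(integrand) * (e[1] - e[0])       # cm^2 MeV^2
        mu_mev = mu_amu * AMU_MEV
        prefac = np.sqrt(8.0 / (np.pi * mu_mev)) * kt**-1.5 * C_CM_S
        return prefac * integral

    for t9 in (0.1, 1.0, 10.0):                            # 10^8 to 10^10 K
        print(f"T = {t9:5.1f} GK  <sigma*v> = {maxwellian_rate(t9 * 1e9):.3e} cm^3/s")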

  10. The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD

    CERN Document Server

    Cox, Mitchell Arij; The ATLAS collaboration; Mellado Garcia, Bruce Rafael

    2015-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface ...

  11. The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD

    International Nuclear Information System (INIS)

    Cox, M A; Reed, R; Mellado, B

    2015-01-01

    After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented

  12. Joint Service General Purpose Mask (JSGPM) Human Systems Integration (HSI) Evaluation: Comfort and Vision Correction Insert Stability Evaluation

    National Research Council Canada - National Science Library

    Garrett, Lamar; Harper, William H; Ortega, Samson V; White, Timothy L

    2006-01-01

    .... The JSGPM, together with personal protective equipment, allows the operators the flexibility to tailor their protection, based on mission threat, thereby minimizing weight, bulk, and heat stress...

  13. General Purpose Force Capability; the Challenge of Versatility and Achieving Balance Along the Widest Possible Spectrum of Conflict

    Science.gov (United States)

    2010-04-01

    Only table-of-contents and bibliographic fragments were indexed: Strategy; Balance; The Imbalance between Traditional and Irregular...; Finding the Proper Balance; Defining...; Introduction; ...wider struggle for control and support of the contested...; Irregular Warfare - Historical Context and Current...; Congressional Research Service, Washington, D.C., July 20, 2009; Thomas E. ..., The Gamble: General David Petraeus and the American Military

  14. General-purpose parallel algorithm based on CUDA for source pencils' deployment of large γ irradiator

    International Nuclear Information System (INIS)

    Yang Lei; Gong Xueyu; Wang Ling

    2013-01-01

    Combined with a standard mathematical model for evaluating the quality of deployment results, a new high-performance parallel algorithm for source pencil deployment was obtained by using a parallel plant growth simulation algorithm, fully parallelized with the CUDA execution model so that the corresponding code can run on a GPU. Based on this work, several instances of various scales were used to test the new version of the algorithm. The results show that, building on the advantages of the older versions, the performance of the new one is improved by more than 500 times compared with the CPU version, and by about 30 times compared with the CPU-plus-GPU hybrid version. The computation time of the new version is less than ten minutes for an irradiator whose activity is less than 111 PBq. For a single GTX275 GPU, the new version can handle source activities of up to 167 PBq with a computation time of no more than 25 minutes, and with multiple GPUs this capacity can be increased further. Overall, the new version of the algorithm running on a GPU can satisfy the source pencil deployment requirements of any domestic irradiator, and it is highly competitive. (authors)

  15. The Development of a General Purpose ARM-based Processing Unit for the TileCal sROD

    CERN Multimedia

    Cox, Mitchell A

    2014-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface t...

  16. PolarBRDF: A general purpose Python package for visualization and quantitative analysis of multi-angular remote sensing measurements

    Science.gov (United States)

    Poudyal, R.; Singh, M.; Gautam, R.; Gatebe, C. K.

    2016-12-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionalities for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR)- http://car.gsfc.nasa.gov/. Our visualization program also provides functionalities for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wildfire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information of the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.
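
    The underlying visualization idea, reflectance mapped onto a (view zenith, relative azimuth) polar grid, can be reproduced with a few lines of matplotlib. The sketch below uses a synthetic reflectance field with a backscatter "hotspot" in place of real CAR measurements and is not PolarBRDF's own API.

    # Minimal polar BRDF-style plot: radial axis = view zenith angle, angular
    # axis = relative azimuth. The reflectance field is synthetic.
    import numpy as np
    import matplotlib.pyplot as plt

    view_zenith = np.linspace(0.0, 80.0, 81)            # degrees (radial axis)
    rel_azimuth = np.linspace(0.0, 360.0, 181)          # degrees (angular axis)
    az_grid, vz_grid = np.meshgrid(np.radians(rel_azimuth), view_zenith)

    sun_zenith = np.radians(35.0)
    # Synthetic BRF: modest backscatter enhancement near the antisolar direction
    # (relative azimuth 180 deg, view zenith equal to the sun zenith).
    angular_dist = np.sqrt((np.radians(vz_grid) - sun_zenith) ** 2 +
                           (az_grid - np.pi) ** 2)
    brf = 0.25 + 0.15 * np.exp(-(angular_dist / 0.35) ** 2)

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    mesh = ax.pcolormesh(az_grid, vz_grid, brf, shading="auto", cmap="viridis")
    ax.set_theta_zero_location("N")
    ax.set_theta_direction(-1)
    fig.colorbar(mesh, ax=ax, label="BRF (synthetic)")
    ax.set_title("Polar BRDF plot (synthetic hotspot example)")
    plt.savefig("polar_brdf_example.png", dpi=150)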

  17. Expert systems for the analysis of transients on nuclear reactors: crisis analysis, sextant, a general purpose physical analyser

    International Nuclear Information System (INIS)

    Barbet, N.; Dumas, M.; Mihelich, G.; Souchet, Y.; Thomas, J.B.

    1987-04-01

    Two developments of expert systems intended to work on line in the analysis of nuclear reactor transients are reported. During a hypothetical crisis occurring in a nuclear facility, a staff of the Institute for Protection and Nuclear Safety (IPSN) has to assess the risk to the local population. The expert system is intended to work as an assistant to this staff. At the present time, it deals with the availability of the safety systems of the plant (e.g. ECCS), depending on the functional state of the support systems. A next step is to take into account the physical transient of the reactor (mass and energy balance, pressure, flows). To reach this goal, as in the development of other similar expert systems, a physical analyser is required. This is the aim of SEXTANT, which combines several knowledge bases concerning measurements, models and the qualitative behaviour of the plant with a mechanism of conjecture-refutation and a set of simplified models matching the current physical state. A prototype is being assessed on integral test facility transients. Both expert systems require powerful shells for their development. SPIRAL is such a toolkit for the development of expert systems devoted to the computer-aided management of complex processes

  18. A Complete Design Flow of a General Purpose Wireless GPS/Inertial Platform for Motion Data Monitoring

    Directory of Open Access Journals (Sweden)

    Gianluca Borgese

    2015-07-01

    This work illustrates the complete design flow of an electronic system developed to support applications in which there is a need to measure motion parameters and transmit them to a remote unit for real-time teleprocessing. In order to be useful in many operating contexts, the system is flexible, compact, and lightweight. It integrates a tri-axial inertial sensor, a GPS module and a wireless transceiver, and can drive a pocket camera. Data acquisition and packetization are handled so as to increase the data throughput over the radio bridge and to minimize power consumption. A trajectory reconstruction algorithm implementing the Kalman filter technique enables real-time body tracking using only inertial sensors. Thanks to a graphical user interface, it is possible to remotely control the system operations and to display the motion data.
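
    The trajectory-reconstruction step can be illustrated with a toy one-dimensional Kalman filter in which the accelerometer drives the prediction and an occasional GPS position fix drives the correction; the state, noise levels and sampling rates below are assumptions for illustration, not the filter actually implemented on the platform.

    # Toy 1-D GPS/inertial Kalman filter: state is [position, velocity], the
    # accelerometer is the control input, GPS position fixes are measurements.
    import numpy as np

    dt = 0.01                                   # inertial sample period, s
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
    B = np.array([[0.5 * dt**2], [dt]])         # acceleration input matrix
    H = np.array([[1.0, 0.0]])                  # GPS measures position only
    Q = np.diag([1e-4, 1e-2])                   # process noise (accel errors)
    R = np.array([[4.0]])                       # GPS position variance, m^2

    x = np.zeros((2, 1))                        # [position, velocity]
    P = np.eye(2)

    rng = np.random.default_rng(0)
    true_accel = 0.5                            # constant acceleration, m/s^2

    for step in range(1, 1001):
        accel_meas = true_accel + rng.normal(0.0, 0.1)       # noisy accelerometer
        # Prediction driven by the inertial measurement
        x = F @ x + B * accel_meas
        P = F @ P @ F.T + Q
        # GPS fix arrives at 1 Hz (every 100 inertial samples)
        if step % 100 == 0:
            t = step * dt
            gps_pos = 0.5 * true_accel * t**2 + rng.normal(0.0, 2.0)
            y = np.array([[gps_pos]]) - H @ x                # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P

    print("estimated position/velocity:", x.ravel().round(2))
    print("true position/velocity:     ",
          round(0.5 * true_accel * (1000 * dt)**2, 2), round(true_accel * 1000 * dt, 2))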

  19. Polarbrdf: A General Purpose Python Package for Visualization Quantitative Analysis of Multi-Angular Remote Sensing Measurements

    Science.gov (United States)

    Singh, Manoj K.; Gautam, Ritesh; Gatebe, Charles K.; Poudyal, Rajesh

    2016-01-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionalities for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR). Our visualization program also provides functionalities for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wild fire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information of the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.

  20. A Fault Detection Mechanism in a Data-flow Scheduled Multithreaded Processor

    NARCIS (Netherlands)

    Fu, J.; Yang, Q.; Poss, R.; Jesshope, C.R.; Zhang, C.

    2014-01-01

    This paper designs and implements Redundant Multi-Threading (RMT) in a Data-flow scheduled MultiThreaded (DMT) multicore processor, called Data-flow scheduled Redundant Multi-Threading (DRMT). It also presents Asynchronous Output Comparison (AOC) for RMT techniques to avoid fault detection