WorldWideScience

Sample records for platform-independent multi-threaded general-purpose

  1. Evaluating the multi-threading countermeasure

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-12-01

    Full Text Available This research investigates the resistance of the multi-threaded countermeasure to side channel analysis (SCA) attacks. The multi-threaded countermeasure is attacked using the Correlation Power Analysis (CPA) and template attacks. Additionally...

  2. Multi-threaded Object Streaming

    CERN Document Server

    Pfeiffer, Andreas; Govi, Giacomo; Ojeda, Miguel; Sipos, Roland

    2015-01-01

    The CMS experiment at CERN's Large Hadron Collider in Geneva has redesigned the code handling its conditions data in recent years, aiming to increase performance and enhance maintainability. The new design includes a move to serialise all payloads before storing them in the database, allowing the payloads to be handled in external tools independently of a given software release. In this talk we present the results of performance studies done using the serialisation package from the Boost suite as well as serialisation done with the ROOT (v5) tools. Furthermore, as the Boost tools allow parallel (de-)serialisation, we show the performance gains achieved with parallel threads when de-serialising a realistic set of conditions in CMS. Without specific optimisations, an overall speed-up of a factor of 3-4 was achieved using multi-threaded loading and de-serialisation of our conditions.
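
    As a rough illustration of the parallel de-serialisation idea mentioned above (a hedged C++ sketch, not the actual CMS conditions code; the payload type and the deserialize step are placeholders), serialised payloads can be distributed across worker threads like this:

        #include <algorithm>
        #include <string>
        #include <thread>
        #include <vector>

        // Hypothetical stand-ins for a serialised conditions payload and its decoded form.
        struct DecodedPayload { std::string content; };

        static DecodedPayload deserialize(const std::string& blob) {
            // Placeholder for e.g. a Boost.Serialization or ROOT decode step.
            return DecodedPayload{blob};
        }

        int main() {
            std::vector<std::string> blobs(1000, "serialised-bytes");   // inputs
            std::vector<DecodedPayload> out(blobs.size());              // outputs

            const unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
            std::vector<std::thread> workers;
            for (unsigned t = 0; t < nThreads; ++t) {
                workers.emplace_back([&, t] {
                    // Each worker decodes a disjoint stride of the payload list,
                    // so no locking is needed on the output vector.
                    for (std::size_t i = t; i < blobs.size(); i += nThreads)
                        out[i] = deserialize(blobs[i]);
                });
            }
            for (auto& w : workers) w.join();
        }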

  3. Confidentiality for Probabilistic Multi-Threaded Programs and Its Verification

    NARCIS (Netherlands)

    Ngo, Minh Tri; Stoelinga, Mariëlle Ida Antoinette; Huisman, Marieke

    2012-01-01

    Confidentiality is an important concern in today's information society: electronic payments and personal data should be protected appropriately. This holds in particular for multi-threaded applications, which are generally seen as the future of high-performance computing. Multi-threading poses new

  4. A Multi-Threaded Semantic Focused Crawler

    Institute of Scientific and Technical Information of China (English)

    Punam Bedi; Anjali Thukral; Hema Banati; Abhishek Behl; Varun Mendiratta

    2012-01-01

    The Web comprises voluminous, rich learning content. The volume of ever-growing learning resources, however, leads to the problem of information overload. A large number of irrelevant search results generated by search engines based on keyword matching techniques further augments the problem. A learner in such a scenario needs semantically matched learning resources as the search results. Keeping in view the volume of content and the significance of semantic knowledge, our paper proposes a multi-threaded semantic focused crawler (SFC) specially designed and implemented to crawl the WWW for educational learning content. The proposed SFC utilizes domain ontology to expand a topic term and a set of seed URLs to initiate the crawl. The results obtained by multiple iterations of the crawl on various topics are shown and compared with the results obtained by executing an open-source crawler on a similar dataset. The results are evaluated using Semantic Similarity, a vector space model based metric, and the harvest ratio.
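
    The abstract does not give implementation details; as a minimal sketch of the multi-threaded crawling pattern it describes, the C++ fragment below shows worker threads pulling URLs from a shared frontier. The fetchPage and isRelevant helpers are hypothetical stand-ins for the download and ontology-based relevance steps.

        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>
        #include <vector>

        std::queue<std::string> frontier;   // shared URL frontier, seeded in main()
        std::mutex frontierMutex;

        // Hypothetical helpers: download a page and score it against the domain ontology.
        std::string fetchPage(const std::string& url) { return "<html>...</html>"; }
        bool isRelevant(const std::string& page)      { return !page.empty(); }

        void crawlerWorker() {
            for (;;) {
                std::string url;
                {
                    std::lock_guard<std::mutex> lock(frontierMutex);
                    if (frontier.empty()) return;   // frontier exhausted
                    url = frontier.front();
                    frontier.pop();
                }
                std::string page = fetchPage(url);
                if (isRelevant(page)) {
                    // A real crawler would store the page and enqueue extracted links here.
                }
            }
        }

        int main() {
            frontier.push("http://example.org/seed1");
            frontier.push("http://example.org/seed2");

            std::vector<std::thread> workers;
            for (int i = 0; i < 4; ++i) workers.emplace_back(crawlerWorker);
            for (auto& w : workers) w.join();
        }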

  5. A multi-threading approach to secure VERIFYPIN

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-10-01

    Full Text Available This research investigates the use of a multi-threaded framework as a software countermeasure mechanism to prevent attacks on the verifypin process in a pin-acceptance program. The implementation comprises various mathematical operations...

  6. Complexity Information Flow in a Multi-threaded Imperative Language

    CERN Document Server

    Marion, Jean-Yves

    2012-01-01

    We propose a type system to analyze the time consumed by multi-threaded imperative programs with a shared global memory, which delineates a class of safe multi-threaded programs. We demonstrate that a safe multi-threaded program runs in polynomial time if (i) it is strongly terminating with respect to a non-deterministic scheduling policy or (ii) it terminates with respect to a deterministic and quiet scheduling policy. As a consequence, we also characterize the set of polynomial time functions. The type system presented is based on the fundamental notion of data tiering, which is central in implicit computational complexity. It regulates the information flow in a computation. This aspect is interesting in that the type system bears a resemblance to type-based information flow analysis and notions of non-interference. As far as we know, this is the first characterization by a type system of polynomial time multi-threaded programs.

  7. Effective verification of confidentiality for multi-threaded programs

    NARCIS (Netherlands)

    Ngo, Minh Tri; Stoelinga, Mariëlle; Huisman, Marieke

    2014-01-01

    This paper studies how confidentiality properties of multi-threaded programs can be verified efficiently by a combination of newly developed and existing model checking algorithms. In particular, we study the verification of scheduler-specific observational determinism (SSOD), a property that charac

  8. Complexity and information flow analysis for multi-threaded programs

    Science.gov (United States)

    Ngo, Tri Minh; Huisman, Marieke

    2017-01-01

    This paper studies the security of multi-threaded programs. We combine two methods, i.e., qualitative and quantitative security analysis, to check whether a multi-threaded program is secure or not. In this paper, besides reviewing classical analysis models, we present a novel model of quantitative analysis where the attacker is able to select the scheduling policy. This model does not follow the traditional information-theoretic channel setting. Our analysis first studies what extra information an attacker can get if he knows the scheduler's choices, and then integrates this information into the transition system modeling the program execution. Via a case study, we compare this approach with the traditional information-theoretic models, and show that this approach gives more intuitive-matching results.

  9. Complexity and information flow analysis for multi-threaded programs

    Science.gov (United States)

    Ngo, Tri Minh; Huisman, Marieke

    2017-07-01

    This paper studies the security of multi-threaded programs. We combine two methods, i.e., qualitative and quantitative security analysis, to check whether a multi-threaded program is secure or not. In this paper, besides reviewing classical analysis models, we present a novel model of quantitative analysis where the attacker is able to select the scheduling policy. This model does not follow the traditional information-theoretic channel setting. Our analysis first studies what extra information an attacker can get if he knows the scheduler's choices, and then integrates this information into the transition system modeling the program execution. Via a case study, we compare this approach with the traditional information-theoretic models, and show that this approach gives more intuitive-matching results.

  10. Multi-thread Parallel Speech Recognition for Mobile Applications

    Directory of Open Access Journals (Sweden)

    LOJKA Martin

    2014-05-01

    Full Text Available In this paper, a server-based solution for a multi-thread large-vocabulary automatic speech recognition engine is described, along with practical application examples for Android OS and HTML5. The basic idea was to make speech recognition available to a full variety of applications for computers and especially for mobile devices. The speech recognition engine should be independent of commercial products and services (where the dictionary cannot be modified). Use of third-party services can also be a security and privacy problem in specific applications, where unsecured audio data must not be sent to uncontrolled environments (voice data transferred to servers around the globe). Using our experience with speech recognition applications, we have been able to construct a multi-thread, server-based speech recognition solution with a simple application programming interface (API) to a speech recognition engine that can be modified to the specific needs of a particular application.

  11. Model Checking with Multi-Threaded IC3 Portfolios

    Science.gov (United States)

    2015-01-15

    Sagar Chaki and Derrick Karimi, Software Engineering Institute, Carnegie Mellon University. ...different runs varies randomly depending on the thread interleaving. The use of a portfolio of solvers to maximize the likelihood of a quick solution is investigated. Using the Extreme Value theorem, the runtime of each variant, as well as of their portfolios, is analysed statistically. A formula for the...

  12. A PREDICTABLE MULTI-THREADED MAIN-MEMORY STORAGE MANAGER

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper introduces the design and implementation of a predictable multi-threaded main-memory storage manager (CS20), and emphasizes the database service mediator (DSM), an operation prediction model using exponential averaging. The memory manager, indexing, and lock manager in CS20 are also presented briefly. CS20 has been embedded in a mobile telecommunication service system. Practice has shown that DSM effectively controls system load and hence improves the real-time characteristics of data accessing.
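
    The paper defines the DSM prediction model itself; purely as a generic illustration of exponential averaging (the smoothing factor and sample values below are arbitrary), each new observation is blended with the previous estimate:

        #include <iostream>

        // Generic exponential averaging: estimate <- alpha * observation + (1 - alpha) * estimate.
        double exponentialAverage(double previousEstimate, double observation, double alpha) {
            return alpha * observation + (1.0 - alpha) * previousEstimate;
        }

        int main() {
            double estimate = 10.0;                       // initial estimate (e.g. milliseconds)
            const double samples[] = {12.0, 9.0, 15.0};   // new observations
            for (double s : samples) {
                estimate = exponentialAverage(estimate, s, 0.3);
                std::cout << estimate << '\n';
            }
        }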

  13. General purpose MDE tools

    Directory of Open Access Journals (Sweden)

    Juan Manuel Cueva Lovelle

    2008-12-01

    Full Text Available The MDE paradigm promises to release developers from writing code. The basis of this paradigm consists in working at a level of abstraction that makes it easier for analysts to detail the project to be undertaken. Using the model described by the analysts, software tools do the rest of the task, generating software that complies with the customer's defined requirements. The purpose of this study is to compare currently available general-purpose tools that make it possible to put the principles of this paradigm into practice, aimed at generating a wide variety of applications composed of interactive multimedia and artificial intelligence components.

  14. General purpose operator interface

    Energy Technology Data Exchange (ETDEWEB)

    Bennion, S. I.

    1979-07-01

    The Hanford Engineering Development Laboratory in Richland, Washington is developing a general-purpose operator interface for controlling set-point driven processes. The interface concept is being developed around graphics display devices with touch-sensitive screens for direct interaction with the displays. Additional devices such as trackballs and keyboards are incorporated for the operator's convenience, but are not necessary for operation. The hardware and software are modular; only those capabilities needed for a particular application need to be used. The software is written in FORTRAN IV with minimal use of operating system calls to increase portability. Several ASCII files generated by the user define displays and correlate the display variables with the process parameters. It is also necessary for the user to build an interface routine which translates the internal graphics commands into device-specific commands. The interface is suited for both continuous flow processes and unit operations. An especially useful feature for controlling unit operations is the ability to generate and execute complex command sequences from ASCII files. This feature relieves operators of many repetitive tasks. 2 figures.

  15. Multi-threaded software framework development for the ATLAS experiment

    Science.gov (United States)

    Stewart, G. A.; Baines, J.; Bold, T.; Calafiura, P.; Dotti, A.; Farrell, S. A.; Leggett, C.; Malon, D.; Ritsch, E.; Snyder, S.; Tsulaia, V.; Van Gemmeren, P.; Wynne, B. M.; ATLAS Experiment,the

    2016-10-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for High Level Trigger use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, allowing different levels of thread safety in algorithmic code. Substantial advances have also been made in implementing a data flow centric design, as well as on the development of the new ‘event views’ infrastructure. These event views support partial event processing and are an essential component to support the High Level Trigger's processing of certain regions of interest. A major effort has also been invested to have an early version of AthenaMT that can run simulation on many core architectures, which has augmented the understanding gained from work on earlier ATLAS demonstrators.

  16. Multi-threaded Software Framework Development for the ATLAS Experiment

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration; Baines, John; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread-safe copyable code, to reentrant c...

  17. Multi-threaded software framework development for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; Baines, John; Bold, Tomasz; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant co...

  18. Multi-threaded ATLAS Simulation on Intel Knights Landing Processors

    CERN Document Server

    Farrell, Steven; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles

    2016-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), will be delivered to its users in two phases with the first phase online now and the second phase expected in mid-2016. Cori Phase 2 will be based on the KNL architecture and will contain over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a great use-case for the KNL architecture and supercomputers like Cori. Simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this presentation we will give an overview of the ATLAS simulation application with details on its multi-thr...

  19. Multi-threaded ATLAS Simulation on Intel Knights Landing Processors

    CERN Document Server

    Farrell, Steven; The ATLAS collaboration

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  20. On the multi-threaded nature of solar spicules

    CERN Document Server

    Skogsrud, H; De Pontieu, B

    2014-01-01

    Spicules are a dominant constituent of the dynamic chromosphere. At the limb, spicules appear as relatively small and dynamic jets that are observed to stick out everywhere. Many papers emphasize the important role spicules might play in the energy and mass balance of the chromosphere and corona. However, many aspects of spicules remain a mystery. In this Letter we shed more light on the multi-threaded nature of spicules and their torsional component. We use high spatial, spectral and temporal resolution observations from the Swedish 1-m Solar Telescope in the Hα spectral line. The data target the limb, and we extract spectra from spicules far out from the limb to reduce the line-of-sight superposition effect. We discover that many spicules display very asymmetric spectra, with some even showing multiple peaks. To quantify this asymmetry we use a double Gaussian fitting procedure and find an average velocity difference between the single Gaussian components to be between 20 and 30 km s^-1 for a sample of...

  1. Multi-Thread Hydrodynamic Modeling of a Solar Flare

    CERN Document Server

    Warren, H P

    2006-01-01

    Past hydrodynamic simulations have been able to reproduce the high temperatures and densities characteristic of solar flares. These simulations, however, have not been able to account for the slow decay of the observed flare emission or the absence of blueshifts in high spectral resolution line profiles. Recent work has suggested that modeling a flare as a sequence of independently heated threads instead of as a single loop may resolve the discrepancies between the simulations and observations. In this paper we present a method for computing multi-thread, time-dependent hydrodynamic simulations of solar flares and apply it to observations of the Masuda flare of 1992 January 13. We show that it is possible to reproduce the temporal evolution of high temperature thermal flare plasma observed with the instruments on the GOES and Yohkoh satellites. The results from these simulations suggest that the heating time-scale for an individual thread is on the order of 200 s. Significantly shorter heati...

  2. A Comprehensive Experimental Comparison of Event Driven and Multi-Threaded Sensor Node Operating Systems

    Directory of Open Access Journals (Sweden)

    Cormac Duffy

    2008-03-01

    Full Text Available The capabilities of a sensor network are strongly influenced by the operating system used on the sensor nodes. In general, two different sensor network operating system types are currently considered: event driven and multi-threaded. It is commonly assumed that event driven operating systems are more suited to sensor networks as they use less memory and processing resources. However, if factors other than resource usage are considered important, a multi-threaded system might be preferred. This paper compares the resource needs of multi-threaded and event driven sensor network operating systems. The resources considered are memory usage and power consumption. Additionally, the event handling capabilities of event driven and multi-threaded operating systems are analyzed and compared. The results presented in this paper show that for a number of application areas a thread-based sensor network operating system is feasible and preferable.

  3. Designing platform independent mobile apps and services

    CERN Document Server

    Heckman, Rocky

    2016-01-01

    This book explains how to help create an innovative and future-proof architecture for mobile apps by introducing practical approaches to increase the value and flexibility of their service layers and reduce their delivery time. Designing Platform Independent Mobile Apps and Services begins by describing the mobile computing landscape and previous attempts at cross-platform development. Platform-independent mobile technologies and development strategies are described in chapters two and three. Communication protocols, details of a recommended five-layer architecture, service layers, and the data abstraction layer are also introduced in these chapters. Cross-platform languages and multi-client development tools for the User Interface (UI) layer, as well as message processing patterns and message routing of the Service Interface (SI) layer, are explained in chapters four and five. Ways to design the service layer for mobile computing, using Command Query Responsibility Segregation (CQRS) and the Data Abstraction La...

  4. General Purpose (office) Network reorganisation

    CERN Multimedia

    IT Department

    2016-01-01

    On Saturday 27 August, the IT Department’s Communication Systems group will perform a major reorganisation of CERN’s General Purpose Network.   This reorganisation will cause network interruptions on Saturday 27 August (and possibly Sunday 28 August) and will be followed by a change to the IP addresses of connected systems that will come into effect on Monday 3 October. For further details and information about the actions you may need to take, please see: https://information-technology.web.cern.ch/news/general-purpose-office-network-reorganisation.

  5. Performance improvement of developed program by using multi-thread technique

    Directory of Open Access Journals (Sweden)

    Surasak Jabal

    2015-03-01

    Full Text Available This research presented how to use a multi-thread programming technique to improve the performance of a program written with Windows Presentation Foundation (WPF). The Computer Assisted Instruction (CAI) software named GAME24 was selected as a case study. This study was composed of two main parts. The first part concerned the design and modification of the program structure following the Object-Oriented Programming (OOP) approach. The second part concerned coding the program using the multi-thread technique, in which the number of threads was based on the calculated Catalan number. The result showed that the multi-thread programming technique increased the performance of the program by 44%-88% compared to the single-thread technique. In addition, it was found that the number of cores in the CPU also increases the performance of the multi-threaded program proportionally.
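
    As a hedged sketch of the thread-sizing idea mentioned in the abstract (the mapping of GAME24's four numbers to C(3) = 5 expression-tree shapes is an assumption, not taken from the paper), the following C++ fragment computes a Catalan number and spawns that many worker threads:

        #include <cstdint>
        #include <iostream>
        #include <thread>
        #include <vector>

        // n-th Catalan number via the recurrence C(0) = 1, C(k+1) = C(k) * 2(2k+1) / (k+2).
        std::uint64_t catalan(unsigned n) {
            std::uint64_t c = 1;
            for (unsigned k = 0; k < n; ++k)
                c = c * 2 * (2 * k + 1) / (k + 2);
            return c;
        }

        int main() {
            const unsigned n = 3;                     // e.g. 4 operands -> C(3) = 5 tree shapes
            const std::uint64_t nThreads = catalan(n);
            std::cout << "spawning " << nThreads << " worker threads\n";

            std::vector<std::thread> workers;
            for (std::uint64_t i = 0; i < nThreads; ++i)
                workers.emplace_back([i] { (void)i; /* evaluate one expression-tree shape */ });
            for (auto& w : workers) w.join();
        }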

  6. A Multi-thread Data Flow Solution Applying to Java Extension

    Science.gov (United States)

    Chen, Li

    Multi-core processor environments are increasingly common, and parallelizing application programs for this architecture has become a focus of research. In object-oriented programming languages, threads are commonly used to make an application program parallel; however, synchronization and communication between threads are often very complicated to implement and cannot take full advantage of multiple cores. This paper proposes a multi-thread solution based on data flow and Java extensions, presenting a new multi-thread programming method. The results show that this method is not only easy to implement but also takes better advantage of multi-core processors.

  7. General purpose steam table library :

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, John H.; Belcourt, Kenneth Noel; Nourgaliev, Robert

    2013-08-01

    Completion of the CASL L3 milestone THM.CFD.P7.04 provides a general purpose tabular interpolation library for material properties to support, in particular, standardized models for steam properties. The software consists of three parts: implementations of analytic steam models, a code to generate tables from those models, and an interpolation package to interface the tables to CFD codes such as Hydra-TH. Verification of the standard model is maintained through the entire train of routines. The performance of the interpolation package exceeds that of a freely available analytic implementation of the steam properties by over an order of magnitude.

  8. Dynamically Translating Binary Code for Multi-Threaded Programs Using Shared Code Cache

    Institute of Scientific and Technical Information of China (English)

    Chia-Lun Liu; Jiunn-Yeu Chen; Wuu Yang; Wei-Chung Hsu

    2014-01-01

    mc2llvm is a process-level ARM-to-x86 binary translator developed in our lab in the past several years. Currently, it is able to emulate single-threaded programs. We extend mc2llvm to emulate multi-threaded programs. Our main task is to reconstruct its architecture for multi-threaded programs. Register mapping, code cache management, and address mapping in mc2llvm have all been modified. In addition, to further speed up the emulation, we collect hot paths, aggressively optimize and generate code for them at run time. Additional threads are used to alleviate the overhead. Thus, when the same hot path is walked through again, the corresponding optimized native code will be executed instead. In our experiments, our system is 8.8X faster than QEMU (quick emulator) on average when emulating the specified benchmarks with 8 guest threads.

  9. UTLEON3 Exploring Fine-Grain Multi-Threading in FPGAs

    CERN Document Server

    Daněk, Martin; Kohout, Lukáš; Sýkora, Jaroslav; Bartosinski, Roman

    2013-01-01

    This book describes a specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs. Describes and documents a working SPARC v8, with fine-grain multithreading and fast context switch; Provides VHDL sources for the described processor; Describes a latency-tolerant framework for coupling hardware accelerators to microthreaded processor pipelines; Includes programming by example in the micro-threaded assembly language.

  10. Multi-threaded acceleration of ORBIT code on CPU and GPU with minimal modifications

    Science.gov (United States)

    Qu, Ante; Ethier, Stephane; Feibush, Eliot; White, Roscoe

    2013-10-01

    The guiding center code ORBIT was originally developed 30 years ago to study the drift orbit effects of charged particles in the strong equilibrium magnetic fields of tokamaks. Today, ORBIT remains a very active tool in magnetic confinement fusion research and continues to adapt to the latest toroidal devices, such as the NSTX-Upgrade, for which it plays a very important role in the study of energetic particle effects. Although the capabilities of ORBIT have improved throughout the years, the code still remains a serial application, which has now become an impediment to the lengthy simulations required for the NSTX-U project. In this work, multi-threaded parallelism is introduced in the core of the code with the goal of achieving the largest performance improvement while minimizing changes made to the source code. To that end, we introduce preprocessor directives in the most compute-intensive parts of the code, which constitute the stable core that seldom changes. Standard OpenMP directives are used for shared-memory CPU multi-threading while newly developed OpenACC (www.openacc.org) directives are used for GPU (Graphical Processing Unit) multi-threading. Implementation details and performance results are presented.
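
    As an illustration of the directive-based approach described above (a generic C++ sketch, not the actual ORBIT source; the particle push is a toy kernel), a compute-intensive loop can be multi-threaded with a standard OpenMP directive:

        #include <cstddef>
        #include <vector>

        // Toy particle state; the real guiding-center push involves field evaluations.
        struct Particle { double x, y, z, vx, vy, vz; };

        void pushParticles(std::vector<Particle>& particles, double dt) {
            const std::ptrdiff_t n = static_cast<std::ptrdiff_t>(particles.size());
            // Each particle is advanced independently, so the loop parallelises trivially.
            #pragma omp parallel for
            for (std::ptrdiff_t i = 0; i < n; ++i) {
                Particle& p = particles[i];
                p.x += p.vx * dt;
                p.y += p.vy * dt;
                p.z += p.vz * dt;
            }
        }

        int main() {
            std::vector<Particle> particles(1 << 20, Particle{0, 0, 0, 1, 1, 1});
            for (int step = 0; step < 100; ++step)
                pushParticles(particles, 1e-3);
        }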

  11. Radiative transfer in cylindrical threads with incident radiation VII. Multi-thread models

    CERN Document Server

    Labrosse, N

    2016-01-01

    We solved the radiative transfer and statistical equilibrium equations in a two-dimensional cross-section of a cylindrical structure oriented horizontally and lying above the solar surface. The cylinder is filled with a mixture of hydrogen and helium and is illuminated at a given altitude from the solar disc. We constructed simple models made from a single thread or from an ensemble of several threads along the line of sight. This first use of two-dimensional, multi-thread fine structure modelling combining hydrogen and helium radiative transfer allowed us to compute synthetic emergent spectra from cylindrical structures and to study the effect of line-of-sight integration of an ensemble of threads under a range of physical conditions. We analysed the effects of variations in temperature distribution and in gas pressure. We considered the effect of multi-thread structures within a given field of view and the effect of peculiar velocities between the structures in a multi-thread model. We compared these new mo...

  12. A Light-Weight Approach for Verifying Multi-Threaded Programs with CPAchecker

    Directory of Open Access Journals (Sweden)

    Dirk Beyer

    2016-12-01

    Full Text Available Verifying multi-threaded programs is becoming more and more important, because of the strong trend to increase the number of processing units per CPU socket. We introduce a new configurable program analysis for verifying multi-threaded programs with a bounded number of threads. We present a simple and yet efficient implementation as a component of the existing program-verification framework CPAchecker. While CPAchecker is already competitive on a large benchmark set of sequential verification tasks, our extension enhances the overall applicability of the framework. Our implementation of handling multiple threads is orthogonal to the abstract domain of the data-flow analysis, and thus can be combined with several existing analyses in CPAchecker, like value analysis, interval analysis, and BDD analysis. The new analysis is modular and can be used, for example, to verify reachability properties as well as to detect deadlocks in the program. This paper includes an evaluation of the benefit of some optimization steps (e.g., changing the iteration order of the reachability algorithm or applying partial-order reduction) as well as a comparison with other state-of-the-art tools for verifying multi-threaded programs.

  13. A platform independent communication library for distributed computing

    NARCIS (Netherlands)

    Groen, D.; Rieder, S.; Grosso, P.; de Laat, C.; Portegies Zwart, S.

    2010-01-01

    We present MPWide, a platform independent communication library for performing message passing between supercomputers. Our library couples several local MPI applications through a long distance network using, for example, optical links. The implementation is deliberately kept light-weight, platform

  14. Optimising GPR modelling: A practical, multi-threaded approach to 3D FDTD numerical modelling

    Science.gov (United States)

    Millington, T. M.; Cassidy, N. J.

    2010-09-01

    The demand for advanced interpretational tools has led to the development of highly sophisticated, computationally demanding, 3D GPR processing and modelling techniques. Many of these methods solve very large problems with stepwise methods that utilise numerically similar functions within iterative computational loops. Problems of this nature are readily parallelised by splitting the computational domain into smaller, independent chunks for direct use on cluster-style, multi-processor supercomputers. Unfortunately, the implications of running such facilities, as well as the time investment needed to develop the parallel codes, mean that for most researchers the use of these advanced methods is impractical. In this paper, we propose an alternative method of parallelisation which exploits the capabilities of modern multi-core processors (upon which today's desktop PCs are built) by multi-threading the calculation of a problem's individual sub-solutions. To illustrate the approach, we have applied it to an advanced, 3D, finite-difference time-domain (FDTD) GPR modelling tool in which the calculation of the individual vector field components is multi-threaded. To be of practical use, the FDTD scheme must be able to deliver accurate results with short execution times, and we therefore show that the performance benefits of our approach can deliver runtimes less than half those of more conventional, serial programming techniques. We evaluate implementations of the technique using different programming languages (e.g., Matlab, Java, C++), which will facilitate the construction of a flexible modelling tool for use in future GPR research. The implementations are compared on a variety of typical hardware platforms, having between one and eight processing cores available, and also on a modern Graphical Processing Unit (GPU)-based computer. Our results show that a multi-threaded xyz modelling approach is easy to implement and delivers excellent results when implemented
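
    A schematic sketch of the component-level multi-threading described above (not the authors' code; the update kernels are placeholders rather than real curl-based FDTD stencils) dispatches the three field-component updates to separate threads each time step:

        #include <cstddef>
        #include <thread>
        #include <vector>

        // Flattened field arrays for a toy grid; a real scheme holds both E and H components.
        const std::size_t N = 100 * 100 * 100;
        std::vector<double> Ex(N), Ey(N), Ez(N);

        // Placeholder update kernels; a real FDTD step applies curl-based stencil updates.
        void updateEx() { for (auto& v : Ex) v += 1.0; }
        void updateEy() { for (auto& v : Ey) v += 1.0; }
        void updateEz() { for (auto& v : Ez) v += 1.0; }

        int main() {
            for (int step = 0; step < 10; ++step) {
                // The three component updates are independent within a half-step,
                // so each can run on its own thread.
                std::thread tx(updateEx), ty(updateEy), tz(updateEz);
                tx.join(); ty.join(); tz.join();
            }
        }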

  15. MT-ADRES: multi-threading on coarse-grained reconfigurable architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan;

    2008-01-01

    -ILP architectures achieve only low parallelism when executing partially sequential code segments, which is also known as Amdahl's law, this article proposes to extend ADRES to MT-ADRES (multi-threaded ADRES) to also exploit thread-level parallelism. On MT-ADRES architectures, the array can be partitioned...... in multiple smaller arrays that can execute threads in parallel. Because the partition can be changed dynamically, this extension provides more flexibility than a multi-core approach. This article presents details of the enhanced architecture and results obtained from an MPEG-2 decoder implementation...... that exploits a mix of thread-level parallelism and instruction-level parallelism....

  16. Multi-threaded Query Agent and Engine for a Very Large Astronomical Database

    Science.gov (United States)

    Thakkar, A. R.; Kunszt, P. Z.; Szalay, A. S.; Szokoly, G. P.

    We describe the Query Agent and Query Engine for the Science Archive of the Sloan Digital Sky Survey. In our client-server model, a GUI client communicates with a Query Agent that retrieves the requested data from the repository (a commercial ODBMS). The multi-threaded Agent is able to maintain multiple concurrent user sessions as well as multiple concurrent queries within each session. We describe the parallel, distributed design of the Query Agent and present the results of performance benchmarks that we have run using typical queries on our test data. We also report on our experiences with loading large amounts of data into Objectivity.

  17. Verificación modular de atomicidad en bytecode Java Multi-Thread

    OpenAIRE

    Bavera, Francisco

    2007-01-01

    This work presents a technique for modularly verifying atomicity of multi-threaded Java bytecode programs. To carry out the modular verification, programs must include a specification of their locks and of their accesses to shared resources. We present the proposed compilation of Java source programs annotated with atomicity specifications to Java bytecode, with those specifications included in the compiled code. Guaranteeing atomicity in multi-threaded programs per...

  18. Una versión paralela del NSGA II utilizando multi-threads

    OpenAIRE

    2004-01-01

    This work presents a parallel version, based on multi-threading strategies, of the NSGA-II evolutionary algorithm for multi-objective optimization. Details of the design and implementation of the parallel version are shown, in which a neighbourhood structure is defined that governs the interaction between the different sub-populations. The quality of the results and the computational efficiency are analysed by comparing them with the results and execution times of the sequential version of the N...

  19. GEANT4-MT : bringing multi-threading into GEANT4 production

    Science.gov (United States)

    Ahn, Sunil; Apostolakis, John; Asai, Makoto; Brandt, Daniel; Cooperman, Gene; Cosmo, Gabriele; Dotti, Andrea; Dong, Xin; Jun, Soon Yung; Nowak, Andrzej

    2014-06-01

    GEANT4-MT is the multi-threaded version of the GEANT4 particle transport code (1, 2). The key goals for the design of GEANT4-MT have been a) the need to reduce the memory footprint of the multi-threaded application compared to the use of separate jobs and processes; b) to allow an easy migration of existing applications; and c) to use many threads or cores efficiently, by scaling up to tens and potentially hundreds of workers. The first public release of a GEANT4-MT prototype was made in 2011. We report on the revision of GEANT4-MT for inclusion in the production-level release scheduled for the end of 2013. This has involved significant re-engineering of the prototype in order to incorporate it into the main GEANT4 development line, and the porting of GEANT4-MT threading code to additional platforms. In order to make the porting of applications as simple as possible, refinements addressed the needs of standalone applications. Further adaptations were created to improve the fit with the frameworks of High Energy Physics (HEP) experiments. We report on performance measurements on Intel Xeon™ and AMD Opteron™ processors, and on the first trials of GEANT4-MT on the Intel Many Integrated Core (MIC) architecture, in the form of the Xeon Phi™ co-processor (3). These indicate near-linear scaling through about 200 threads on 60 cores, when holding fixed the number of events per thread.

  20. Low-latency multi-threaded processing of neuronal signals for brain-computer interfaces

    Directory of Open Access Journals (Sweden)

    Jörg Fischer

    2014-01-01

    Full Text Available Brain-computer interfaces (BCIs) require demanding numerical computations to transfer brain signals into control signals driving an external actuator. Increasing the computational performance of the BCI algorithms carrying out these calculations enables faster reaction to user inputs and allows the use of more demanding decoding algorithms. Here we introduce a modular and extensible software architecture with a multi-threaded signal processing pipeline suitable for BCI applications. The computational load and latency (the time that the system needs to react to user input) are measured for different pipeline implementations in typical BCI applications with realistic parameter settings. We show that BCIs can benefit substantially from the proposed parallelization: firstly, by reducing the latency and, secondly, by increasing the number of recording channels and signal features that can be used for decoding beyond what can be handled by a single thread. The proposed software architecture provides a simple, yet flexible solution for BCI applications.
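
    A minimal producer/consumer sketch of the kind of multi-threaded pipeline described (generic C++, with hypothetical acquisition and decoding stages standing in for the real signal-processing modules):

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        std::queue<std::vector<double>> sampleQueue;   // blocks of neuronal samples
        std::mutex queueMutex;
        std::condition_variable queueCv;
        bool done = false;

        void acquisitionThread() {
            for (int block = 0; block < 100; ++block) {
                std::vector<double> samples(256, 0.0);   // placeholder acquisition
                {
                    std::lock_guard<std::mutex> lock(queueMutex);
                    sampleQueue.push(std::move(samples));
                }
                queueCv.notify_one();
            }
            { std::lock_guard<std::mutex> lock(queueMutex); done = true; }
            queueCv.notify_all();
        }

        void decodingThread() {
            for (;;) {
                std::unique_lock<std::mutex> lock(queueMutex);
                queueCv.wait(lock, [] { return !sampleQueue.empty() || done; });
                if (sampleQueue.empty()) return;
                std::vector<double> block = std::move(sampleQueue.front());
                sampleQueue.pop();
                lock.unlock();
                // Placeholder for feature extraction and decoding into a control signal.
                double sum = 0;
                for (double s : block) sum += s;
                (void)sum;
            }
        }

        int main() {
            std::thread producer(acquisitionThread), consumer(decodingThread);
            producer.join();
            consumer.join();
        }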

  1. An HLA/RTI Architecture Based on Multi-thread Processing

    Institute of Scientific and Technical Information of China (English)

    GUAN Li; ZOU Ru-ping; ZHU Bin; HAO Chong-yang

    2010-01-01

    In order to improve the real-time performance of HLA (high level architecture) in applications with massive data communication volumes, multi-thread processing was adopted: a thread pool structure was introduced into the system, and different threads were used to handle the corresponding message queues and respond to different message requests. Furthermore, an allocation strategy of semi-complete deprivation of priority was adopted, which reduces thread-switching cost and processing burden in the system, provided that message requests with high priority can be responded to in time, and thus improves the system's overall performance. The design and experimental results indicate that the method proposed in this paper can greatly improve the real-time performance of HLA in distributed system applications.
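
    The following C++ sketch shows the basic pattern suggested by the abstract, a pool of threads draining per-priority message queues so that high-priority requests are always served first; the queue layout and the message handling are assumptions, not taken from the paper:

        #include <mutex>
        #include <optional>
        #include <queue>
        #include <string>
        #include <thread>
        #include <vector>

        std::queue<std::string> highPriority, lowPriority;   // per-priority message queues
        std::mutex queuesMutex;

        // Return the next pending message, always preferring the high-priority queue.
        std::optional<std::string> nextMessage() {
            std::lock_guard<std::mutex> lock(queuesMutex);
            if (!highPriority.empty()) { auto m = highPriority.front(); highPriority.pop(); return m; }
            if (!lowPriority.empty())  { auto m = lowPriority.front();  lowPriority.pop();  return m; }
            return std::nullopt;
        }

        void poolWorker() {
            while (auto msg = nextMessage()) {
                // Placeholder for handling the message request.
                (void)*msg;
            }
        }

        int main() {
            for (int i = 0; i < 100; ++i) {
                highPriority.push("time-critical update " + std::to_string(i));
                lowPriority.push("bulk attribute update " + std::to_string(i));
            }
            std::vector<std::thread> pool;
            for (int i = 0; i < 4; ++i) pool.emplace_back(poolWorker);
            for (auto& t : pool) t.join();
        }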

  2. Servicing a globally broadcast interrupt signal in a multi-threaded computer

    Energy Technology Data Exchange (ETDEWEB)

    Attinella, John E.; Davis, Kristan D.; Musselman, Roy G.; Satterfield, David L.

    2015-12-29

    Methods, apparatuses, and computer program products for servicing a globally broadcast interrupt signal in a multi-threaded computer comprising a plurality of processor threads. Embodiments include an interrupt controller indicating in a plurality of local interrupt status locations that a globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include a thread determining that a local interrupt status location corresponding to the thread indicates that the globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include the thread processing one or more entries in a global interrupt status bit queue based on whether global interrupt status bits associated with the globally broadcast interrupt signal are locked. Each entry in the global interrupt status bit queue corresponds to a queued global interrupt.
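
    A very loose, illustrative sketch of the per-thread status-flag idea (the actual mechanism is hardware- and firmware-specific; the flags, controller, and worker loop below are assumptions made for the example):

        #include <array>
        #include <atomic>
        #include <thread>
        #include <vector>

        constexpr int kThreads = 4;
        // One local interrupt status location per processor thread (value-initialised to false).
        std::array<std::atomic<bool>, kThreads> localInterruptStatus{};

        void interruptController() {
            // A globally broadcast interrupt is reflected into every local status location.
            for (auto& flag : localInterruptStatus) flag.store(true);
        }

        void workerThread(int id) {
            // Each thread services the interrupt only when its own local flag is set.
            while (!localInterruptStatus[id].load()) std::this_thread::yield();
            // Placeholder: process queued global-interrupt entries here.
            localInterruptStatus[id].store(false);
        }

        int main() {
            std::vector<std::thread> workers;
            for (int i = 0; i < kThreads; ++i) workers.emplace_back(workerThread, i);
            interruptController();
            for (auto& w : workers) w.join();
        }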

  3. 12 CFR 1703.31 - General purposes.

    Science.gov (United States)

    2010-01-01

    12 CFR 1703.31 (Banks and Banking): General purposes. OFHEO Organization and Functions; Release of Information; Testimony and Production of Documents in... time of employees for their official duties, maintain the impartial position of OFHEO in litigation...

  4. A Platform-Independent Plugin for Navigating Online Radiology Cases.

    Science.gov (United States)

    Balkman, Jason D; Awan, Omer A

    2016-06-01

    Software methods that enable navigation of radiology cases on various digital platforms differ between handheld devices and desktop computers. This has resulted in poor compatibility of online radiology teaching files across mobile smartphones, tablets, and desktop computers. A standardized, platform-independent, or "agnostic" approach for presenting online radiology content was produced in this work by leveraging modern hypertext markup language (HTML) and JavaScript web software technology. We describe the design and evaluation of this software, demonstrate its use across multiple viewing platforms, and make it publicly available as a model for future development efforts.

  5. A Multi-Threaded Fast Convolver for Dynamically Parallel Image Filtering

    CERN Document Server

    Kepner, J V

    2001-01-01

    2D convolution is a staple of digital image processing. The advent of large-format imagers makes it possible to literally "pave" with silicon the focal plane of an optical sensor, which results in very large images that can require a significant amount of computation to process. Filtering of large images via 2D convolutions is often complicated by a variety of effects (e.g., non-uniformities found in wide field-of-view instruments). This paper describes a fast (FFT-based) method for convolving images, which is also well suited to very large images. A parallel version of the method is implemented using a multi-threaded approach, which allows more efficient load balancing and a simpler software architecture. The method has been implemented within a high-level interpreted language (IDL), while also exploiting open standards vector libraries (VSIPL) and open standards parallel directives (OpenMP). The parallel approach and software architecture are generally applicable to a variety of algorithms and have the adv...
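
    The paper's convolver is FFT-based; as a simpler stand-in that still shows the multi-threaded filtering pattern, the following C++ sketch parallelises a direct 2D convolution over output rows with an OpenMP directive:

        #include <vector>

        // Direct (non-FFT) 2D convolution, parallelised over output rows with OpenMP.
        std::vector<float> convolve(const std::vector<float>& img, int w, int h,
                                    const std::vector<float>& ker, int kw, int kh) {
            std::vector<float> out(img.size(), 0.0f);
            #pragma omp parallel for
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x) {
                    float acc = 0.0f;
                    for (int ky = 0; ky < kh; ++ky)
                        for (int kx = 0; kx < kw; ++kx) {
                            int sy = y + ky - kh / 2, sx = x + kx - kw / 2;
                            if (sy >= 0 && sy < h && sx >= 0 && sx < w)
                                acc += img[sy * w + sx] * ker[ky * kw + kx];
                        }
                    out[y * w + x] = acc;
                }
            }
            return out;
        }

        int main() {
            const int w = 512, h = 512;
            std::vector<float> image(w * h, 1.0f);
            std::vector<float> kernel(3 * 3, 1.0f / 9.0f);   // simple box blur
            std::vector<float> filtered = convolve(image, w, h, kernel, 3, 3);
            (void)filtered;
        }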

  6. Towards Fast Reverse Time Migration Kernels using Multi-threaded Wavefront Diamond Tiling

    KAUST Repository

    Malas, T.

    2015-09-13

    Today’s high-end multicore systems are characterized by a deep memory hierarchy, i.e., several levels of local and shared caches, with limited size and bandwidth per core. The ever-increasing gap between processor and memory speed will further exacerbate the problem and has led the scientific community to revisit numerical software implementations to better suit the underlying memory subsystem for performance (data reuse) as well as energy efficiency (data locality). The authors propose a novel multi-threaded wavefront diamond blocking (MWD) implementation in the context of stencil computations, which represent the core operation for seismic imaging in the oil industry. The stencil diamond formulation introduces temporal blocking for high data reuse in the upper cache levels. The wavefront optimization technique ensures data locality by allowing multiple threads to share common adjacent stencil points. Therefore, MWD is able to take up the aforementioned challenges by alleviating the cache size limitation and releasing pressure from the memory bandwidth. Performance comparisons are shown against the optimized 25-point stencil standard seismic imaging scheme using spatial and temporal blocking, and demonstrate the effectiveness of MWD.

  7. General purpose fast decoupled power flow

    Energy Technology Data Exchange (ETDEWEB)

    Nanda, J.; Bijwe, P.R.; Henry, J.; Bapi Raju, V. (Indian Inst. of Tech., New Delhi (IN). Dept. of Electrical Engineering)

    1992-03-01

    A general purpose fast decoupled power flow model (GFDPF) is presented that exhibits more or less the best convergence properties for both well-behaved and ill-conditioned systems. In the proposed model, all network shunts, such as line charging, external shunts at buses, and shunts formed due to the π representation of off-nominal in-phase transformers, etc., are treated as constant impedance loads. The effect of line resistances is considered while forming the (B') matrix and ignored in forming the (B'') matrix. This model is tested on several systems in both well-behaved and ill-conditioned situations. A simple, efficient compensation technique is proposed to deal with Q-limit enforcements associated with bus-type switchings at voltage-controlled buses. The results demonstrate that the proposed GFDPF model exhibits more or less stable convergence behaviour for both well-behaved and ill-conditioned situations. (author).
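
    For context, the classical fast decoupled load flow iteration (standard textbook form, not the specific GFDPF variant) solves the decoupled updates

        \Delta P / V = B' \, \Delta\theta, \qquad \Delta Q / V = B'' \, \Delta V,

    where \Delta P and \Delta Q are the active and reactive power mismatches, and \Delta\theta and \Delta V are the corrections to the bus voltage angles and magnitudes.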

  8. A Platform-independent Programming Environment for Robot Control

    CERN Document Server

    Reckhaus, Michael; Ploeger, Paul G; Kraetzschmar, Gerhard K

    2010-01-01

    The development of robot control programs is a complex task. Many robots differ in their electrical and mechanical structure, which is also reflected in the software. Specific robot software environments support the program development, but are mainly text-based and usually applied by experts in the field with profound knowledge of the target robot. This paper presents a graphical programming environment which aims to ease the development of robot control programs. In contrast to existing graphical robot programming environments, our approach focuses on the composition of parallel action sequences. The developed environment allows independent robot actions to be scheduled on parallel execution lines and provides mechanisms to avoid side-effects of parallel actions. The developed environment is platform-independent and based on the model-driven paradigm. The feasibility of our approach is shown by the application of the sequencer to a simulated service robot and a robot for educational purposes.

  9. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), diffusion calculation modules (TUD, CITATION) and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  10. General purpose optimization software for engineering design

    Science.gov (United States)

    Vanderplaats, G. N.

    1990-01-01

    The author has developed several general purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial-grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.

  11. 47 CFR 32.6124 - General purpose computers expense.

    Science.gov (United States)

    2010-10-01

    47 CFR 32.6124 (Telecommunication): General purpose computers expense. This account shall include the costs of personnel whose principal job is the physical operation of general purpose computers and the maintenance of operating systems. This...

  12. MultiThreaded Algorithms for GPGPUs in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz level 1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of approximately 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a sig...

  13. Research on the Application of Data Security in Java Multi-thread Programming

    Institute of Scientific and Technical Information of China (English)

    韦庆清; 任卫东

    2012-01-01

    Based on an analysis of the basic characteristics of Java's multi-thread concurrency mechanism, this paper studies in depth the data security problems of Java multi-threaded programs. It points out the data security problems that easily arise when programming with Java multi-threading in practice, together with the corresponding solutions, and uses examples to show the concrete implementation of data security in Java multi-thread programming.

  14. 47 CFR 32.2124 - General purpose computers.

    Science.gov (United States)

    2010-10-01

    47 CFR 32.2124 (Telecommunication): General purpose computers. (a) This account shall include the original cost of computers and peripheral... cost of computers and their associated peripheral devices associated with switching, network signaling...

  15. 78 FR 7718 - Review of the General Purpose Costing System

    Science.gov (United States)

    2013-02-04

    Surface Transportation Board, 49 CFR Parts 1247 and 1248: Review of the General Purpose Costing System... general purpose costing system, the Uniform Railroad Costing System (URCS). Specifically, the Board is..., 2013. ADDRESSES: Comments may be submitted either via the Board's e-filing format or in the traditional...

  16. Application of Multi-thread Technology in Measurement and Control Software under Windows NT

    Institute of Scientific and Technical Information of China (English)

    刘九七

    2001-01-01

    The application of multi-threading under Windows NT is discussed, the method of developing multi-threaded programs in Delphi is introduced, and an example of applying multi-thread techniques to write measurement and control software is presented.

  17. AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading

    CERN Document Server

    Leggett, Charles; The ATLAS collaboration

    2017-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we will report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying...

  18. AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading

    CERN Document Server

    Leggett, Charles; The ATLAS collaboration; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; van Gemmeren, Peter

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we will report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying...

  19. Application of Multi-Threading Technology in Television Broadcast Controlling System

    Institute of Scientific and Technical Information of China (English)

    蔡泷鑫

    2011-01-01

    Drawing on practical experience in designing a broadcast control system, this article discusses the application of multi-threading technology in the broadcast control program, clarifies the worker threads hidden behind the program's user interface and how these threads relate to one another, and introduces how multi-threading is applied at the key points of broadcast control system design.

  20. Expanding the Media Mix in Statistics Education through Platform-Independent and Interactive Learning Objects

    Science.gov (United States)

    Mittag, Hans-Joachim

    2015-01-01

    The ubiquity of mobile devices demands the exploitation of their potentials in distance and face-to-face teaching, as well for complementing textbooks in printed or electronic format. There is a strong need to develop innovative resources that open up new dimensions of learning and teaching through interactive and platform-independent content.…

  1. 7 CFR 226.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS CHILD AND ADULT CARE FOOD PROGRAM General § 226.1 General purpose and... Child and Adult Care Food Program. Section 17 of the National School Lunch Act, as amended,...

  2. General-purpose isiZulu speech synthesiser

    CSIR Research Space (South Africa)

    Louw, A

    2005-07-01

    Full Text Available A general-purpose isiZulu text-to-speech (TTS) system was developed, based on the “Multisyn” unit-selection approach supported by the Festival TTS toolkit. The development involved a number of challenges related to the interface between speech...

  3. Study of Multi-thread Parallel Coding Based on X264

    Institute of Scientific and Technical Information of China (English)

    魏妃妃; 梁久祯; 柴志雷

    2011-01-01

    Taking the X264 encoder as the research object, this paper focuses on slice-level and inter-frame macroblock-level multi-threaded parallel coding algorithms. Because the inter-frame macroblock-level algorithm encodes multiple frames in parallel, the reference-frame data associated with several frames must be kept in the system, so it consumes a large amount of memory. Experiments show, however, that at a roughly constant bit rate the inter-frame macroblock-level algorithm achieves a higher encoding speedup than the slice-level algorithm. Based on the characteristics of the two algorithms, a multi-granularity parallel coding algorithm that combines the two approaches is proposed.

  4. High-efficiency Design of Spectrum Analyzer Software in Multi-threading

    Institute of Scientific and Technical Information of China (English)

    马风军; 刘宝东

    2012-01-01

    Multi-threaded design is a key point of modern software engineering, and portable spectrum analyzer software places especially strict demands on it. Targeting the high-performance, low-power requirements of a portable spectrum analyzer and fully exploiting the advantages of multi-threaded programming, this paper proposes an optimized design. The design follows the principle of "hardware first, measurement first", balances the close cooperation among the threads of the spectrum analyzer software, and is elaborated from four aspects: efficient execution, energy saving, multi-thread cooperation, and elimination of side effects.
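    As a minimal Java illustration of the "hardware first, measurement first" principle described above (the paper targets an embedded instrument, so this is only an analogy; all names are invented), the acquisition thread is given a higher scheduling priority than the display thread. Note that Java thread priorities are merely hints to the scheduler.

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        // Acquisition thread: favored so measurement sweeps are not starved.
        Thread measure = new Thread(() -> {
            for (int sweep = 1; sweep <= 3; sweep++) {
                System.out.println("acquired sweep " + sweep);
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
            }
        }, "measurement");
        // Display thread: lower priority, runs when the scheduler allows.
        Thread display = new Thread(() -> {
            for (int frame = 1; frame <= 3; frame++) {
                System.out.println("rendered frame " + frame);
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
            }
        }, "display");

        measure.setPriority(Thread.MAX_PRIORITY);   // scheduling hint only
        display.setPriority(Thread.MIN_PRIORITY);
        measure.start();
        display.start();
        measure.join();
        display.join();
    }
}
```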

  5. Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    Science.gov (United States)

    Lohn, Stefan B.; Dong, Xin; Carminati, Federico

    2012-12-01

    Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Performance improvements can no longer be obtained by increasing the clock frequency, because the technical limits are almost reached; instead, parallel execution must be used to gain performance. Resources such as main memory, the cache hierarchy, the bandwidth of the memory bus, and the links between cores and sockets are not improving as fast. Hence, parallelism can only result in performance gains if memory usage is optimized and communication between threads is minimized. Besides, concurrent programming has become a domain for experts: implementing multi-threading is error-prone and labor-intensive, and a full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation that addresses the problems described above and provides a straightforward way of parallelization with almost no interference between threads. This makes the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety is introduced to bring the original sequential, thread-unaware source code into a position to utilize multi-threading. Further investigation then identifies candidate classes that are useful to share amongst threads. In a second step, the transformation changes the code to share these classes and, finally, verifies that no invalid interference between threads remains.
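    The "unconditional thread-safety" step can be pictured with a small Java analogy (the paper's transformation operates on the C++ AliRoot code, so this is only an illustration of the idea; all names are invented): state that used to be a single shared, mutable object is made thread-local, so each thread works on its own copy and threads no longer interfere.

```java
public class ThreadLocalState {
    // A single shared, mutable "current event" would make concurrent processing
    // unsafe; giving each thread its own instance removes the interference
    // without changing the call sites.
    private static final ThreadLocal<StringBuilder> currentEvent =
            ThreadLocal.withInitial(StringBuilder::new);

    static void process(int eventId) {
        StringBuilder ev = currentEvent.get();   // per-thread instance
        ev.setLength(0);
        ev.append("event-").append(eventId);
        // ... reconstruction steps would append to ev here ...
        System.out.println(Thread.currentThread().getName() + " processed " + ev);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 3; i++) process(i); }, "worker-1");
        Thread t2 = new Thread(() -> { for (int i = 100; i < 103; i++) process(i); }, "worker-2");
        t1.start(); t2.start();
        t1.join();  t2.join();
    }
}
```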

  6. Using general-purpose compression algorithms for music analysis

    DEFF Research Database (Denmark)

    Louboutin, Corentin; Meredith, David

    2016-01-01

    General-purpose compression algorithms encode files as dictionaries of substrings with the positions of these strings’ occurrences. We hypothesized that such algorithms could be used for pattern discovery in music. We compared LZ77, LZ78, Burrows–Wheeler and COSIATEC on classifying folk song...... melodies. A novel method was used, combining multiple viewpoints, the k-nearest-neighbour algorithm and a novel distance metric, corpus compression distance. Using single viewpoints, COSIATEC outperformed the general-purpose compressors, with a classification success rate of 85% on this task. However...... in the input data, COSIATEC outperformed LZ77 with a mean F1 score of 0.123, compared with 0.053 for LZ77. However, when the music was processed a voice at a time, the F1 score for LZ77 more than doubled to 0.124. We also discovered a significant correlation between compression factor and F1 score for all...
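    To illustrate how a general-purpose compressor can be turned into a similarity measure (this sketch uses the classic normalized compression distance, not the corpus compression distance introduced in the paper; the toy "melody" strings are invented), the example below relies on Java's built-in DEFLATE compressor: similar strings compress well together and therefore receive a smaller distance.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class CompressionDistance {
    // Compressed size of a byte string using a general-purpose compressor (DEFLATE).
    static int compressedSize(byte[] data) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) out.write(buf, 0, deflater.deflate(buf));
        deflater.end();
        return out.size();
    }

    // Normalized compression distance: NCD(x,y) = (C(xy) - min(C(x),C(y))) / max(C(x),C(y)).
    static double ncd(String x, String y) {
        int cx = compressedSize(x.getBytes());
        int cy = compressedSize(y.getBytes());
        int cxy = compressedSize((x + y).getBytes());
        return (cxy - Math.min(cx, cy)) / (double) Math.max(cx, cy);
    }

    public static void main(String[] args) {
        String a = "CDEFGABC CDEFGABC CDEFGABC";   // toy pitch sequences
        String b = "CDEFGABC CDEFGABD CDEFGABC";   // nearly identical to a
        String c = "XQZRWTYU PLMKOIJN VGYBHUNJ";   // unrelated material
        System.out.printf("NCD(a,b)=%.3f  NCD(a,c)=%.3f%n", ncd(a, b), ncd(a, c));
    }
}
```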

  7. General-Purpose Serial Interface For Remote Control

    Science.gov (United States)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.

  8. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

    This document describes a general purpose multimedia data-set to be used in cross-media machine learning problems. In more detail we describe the genre taxonomy applied at http://www.garageband.com, from where the data-set was collected, and how the taxonomy has been fused into a more human... understandable taxonomy. Finally, a description of various features extracted from both the audio and text is presented....

  9. POLITO- A new open-source, platform independent software for generating high-quality lithostratigraphic columns

    Directory of Open Access Journals (Sweden)

    Cipran C. Stremtan

    2010-08-01

    Full Text Available POLITO is a free, open-source, and platform-independent software package which can automatically generate lithostratigraphic columns from field data. Its simple and easy-to-use interface allows users to manipulate large datasets and create high-quality graphical outputs, either in editable vector or raster format, or as PDF files. POLITO uses USGS standard lithology patterns and can be downloaded from its Sourceforge project page (http://sourceforge.net/projects/polito/).

  10. Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    CERN Document Server

    Lohn, Stefan B; Carminati, Federico

    2012-01-01

    Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Performance improvements can no longer be obtained by increasing the clock frequency, because the technical limits are almost reached; instead, parallel execution must be used to gain performance. Resources such as main memory, the cache hierarchy, the bandwidth of the memory bus, and the links between cores and sockets are not improving as fast. Hence, parallelism can only result in performance gains if memory usage is optimized and communication between threads is minimized. Besides, concurrent programming has become a domain for experts. Implementing multi-threading is error-prone and labor-intensive. A full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation to address the prob...

  11. How General-Purpose can a GPU be?

    Directory of Open Access Journals (Sweden)

    Philip Machanick

    2015-12-01

    Full Text Available The use of graphics processing units (GPUs) in general-purpose computation (GPGPU) is a growing field. GPU instruction sets, while implementing a graphics pipeline, draw from a range of single instruction multiple datastream (SIMD) architectures characteristic of the heyday of supercomputers. Yet only one of these SIMD instruction sets has been of application on a wide enough range of problems to survive the era when the full range of supercomputer design variants was being explored: vector instructions. This paper proposes a reconceptualization of the GPU as a multicore design with minimal exotic modes of parallelism so as to make GPGPU truly general.

  12. A general-purpose optimization program for engineering design

    Science.gov (United States)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.

  13. GENETIC ALGORITHM ON GENERAL PURPOSE GRAPHICS PROCESSING UNIT: PARALLELISM REVIEW

    Directory of Open Access Journals (Sweden)

    A.J. Umbarkar

    2013-01-01

    Full Text Available The Genetic Algorithm (GA) is an effective and robust method for solving many optimization problems. However, it may take many runs (iterations) and much time to reach an optimal solution. The execution time needed to find the optimal solution also depends upon the niching technique applied to the evolving population. This paper surveys how various authors, researchers and scientists have implemented GAs on GPGPUs (general-purpose graphics processing units), with and without parallelism. Many problems have been solved on GPGPUs using GAs. The GA is easy to parallelize because of its SIMD nature and can therefore be implemented well on a GPGPU. Thus, speedup can definitely be achieved if the bottlenecks in GAs are identified and implemented effectively on the GPGPU. The paper reviews various applications solved using GAs on GPGPUs, together with the future scope of this area of optimization.

  14. Geographical parthenogenesis: General purpose genotypes and frozen niche variation

    DEFF Research Database (Denmark)

    Vrijenhoek, Robert C.; Parker, Dave

    2009-01-01

    marginal environments to escape competition with their sexual relatives. These ideas often fail to consider the early competitive interactions with immediate sexual ancestors, which shape alternative paths that newly formed clonal lineages might follow. Here we review the history and evidence for two...... hypotheses concerning the evolution of niche breadth in asexual species - the "general-purpose genotype" (GPG) and "frozen niche-variation" (FNV) models. The two models are often portrayed as mutually exclusive, respectively viewing clonal lineages as generalists versus specialists. Nonetheless......, they are complex syllogisms that share common assumptions regarding the likely origins of clonal diversity and the strength of interclonal selection in shaping the ecological breadth of asexual populations. Both models find support in ecological and phylogeographic studies of a wide range of organisms...

  15. SNAP: A General Purpose Network Analysis and Graph Mining Library

    CERN Document Server

    Leskovec, Jure

    2016-01-01

    Large networks are becoming a widely used abstraction for studying complex systems in a broad set of disciplines, ranging from social network analysis to molecular biology and neuroscience. Despite an increasing need to analyze and manipulate large networks, only a limited number of tools are available for this task. Here, we describe Stanford Network Analysis Platform (SNAP), a general-purpose, high-performance system that provides easy to use, high-level operations for analysis and manipulation of large networks. We present SNAP functionality, describe its implementational details, and give performance benchmarks. SNAP has been developed for single big-memory machines and it balances the trade-off between maximum performance, compact in-memory graph representation, and the ability to handle dynamic graphs where nodes and edges are being added or removed over time. SNAP can process massive networks with hundreds of millions of nodes and billions of edges. SNAP offers over 140 different graph algorithms that ...

  16. A Chemical Containment Model for the General Purpose Work Station

    Science.gov (United States)

    Flippen, Alexis A.; Schmidt, Gregory K.

    1994-01-01

    Contamination control is a critical safety requirement imposed on experiments flying on board the Spacelab. The General Purpose Work Station, a Spacelab support facility used for life sciences space flight experiments, is designed to remove volatile compounds from its internal airpath and thereby minimize contamination of the Spacelab. This is accomplished through the use of a large, multi-stage filter known as the Trace Contaminant Control System. Many experiments planned for the Spacelab require the use of toxic, volatile fixatives in order to preserve specimens prior to postflight analysis. The NASA-Ames Research Center SLS-2 payload, in particular, necessitated the use of several toxic, volatile compounds in order to accomplish the many inflight experiment objectives of this mission. A model was developed based on earlier theories and calculations which provides conservative predictions of the resultant concentrations of these compounds given various spill scenarios. This paper describes the development and application of this model.

  17. General purpose multiplexing device for cryogenic microwave systems

    Science.gov (United States)

    Chapman, Benjamin J.; Moores, Bradley A.; Rosenthal, Eric I.; Kerckhoff, Joseph; Lehnert, K. W.

    2016-05-01

    We introduce and experimentally characterize a general purpose device for signal processing in circuit quantum electrodynamics systems. The device is a broadband two-port microwave circuit element with three modes of operation: it can transmit, reflect, or invert incident signals between 4 and 8 GHz. This property makes it a versatile tool for lossless signal processing at cryogenic temperatures. In particular, rapid switching (≤ 15 ns ) between these operation modes enables several multiplexing readout protocols for superconducting qubits. We report the device's performance in a two-channel code domain multiplexing demonstration. The multiplexed data are recovered with fast readout times (up to 400 ns ) and infidelities ≤ 10-2 for probe powers ≥ 7 fW , in agreement with the expectation for binary signaling with Gaussian noise.

  18. General-purpose fuzzy controller for dc-dc converters

    Energy Technology Data Exchange (ETDEWEB)

    Mattavelli, P.; Rossetto, L.; Spiazzi, G.; Tenti, P. [Univ. of Padova (Italy)

    1997-01-01

    In this paper, a general-purpose fuzzy controller for dc-dc converters is investigated. Based on a qualitative description of the system to be controlled, fuzzy controllers are capable of good performances, even for those systems where linear control techniques fail, e.g., when a mathematical description is not available or is in the presence of wide parameter variations. The presented approach is general and can be applied to any dc-dc converter topologies. Controller implementation is relatively simple and can guarantee a small-signal response as fast and stable as other standard regulators and an improved large-signal response. Simulation results of Buck-Boost and Sepic converters show control potentialities.

  19. Toward a General-Purpose Heterogeneous Ensemble for Pattern Classification

    Directory of Open Access Journals (Sweden)

    Loris Nanni

    2015-01-01

    Full Text Available We perform an extensive study of the performance of different classification approaches on twenty-five datasets (fourteen image datasets and eleven UCI data mining datasets). The aim is to find General-Purpose (GP) heterogeneous ensembles (requiring little to no parameter tuning) that perform competitively across multiple datasets. The state-of-the-art classifiers examined in this study include the support vector machine, Gaussian process classifiers, random subspace of adaboost, random subspace of rotation boosting, and deep learning classifiers. We demonstrate that a heterogeneous ensemble based on the simple fusion by sum rule of different classifiers performs consistently well across all twenty-five datasets. The most important result of our investigation is demonstrating that some very recent approaches, including the heterogeneous ensemble we propose in this paper, are capable of outperforming an SVM classifier (implemented with LibSVM), even when both kernel selection and SVM parameters are carefully tuned for each dataset.

  20. Toward a General-Purpose Heterogeneous Ensemble for Pattern Classification.

    Science.gov (United States)

    Nanni, Loris; Brahnam, Sheryl; Ghidoni, Stefano; Lumini, Alessandra

    2015-01-01

    We perform an extensive study of the performance of different classification approaches on twenty-five datasets (fourteen image datasets and eleven UCI data mining datasets). The aim is to find General-Purpose (GP) heterogeneous ensembles (requiring little to no parameter tuning) that perform competitively across multiple datasets. The state-of-the-art classifiers examined in this study include the support vector machine, Gaussian process classifiers, random subspace of adaboost, random subspace of rotation boosting, and deep learning classifiers. We demonstrate that a heterogeneous ensemble based on the simple fusion by sum rule of different classifiers performs consistently well across all twenty-five datasets. The most important result of our investigation is demonstrating that some very recent approaches, including the heterogeneous ensemble we propose in this paper, are capable of outperforming an SVM classifier (implemented with LibSVM), even when both kernel selection and SVM parameters are carefully tuned for each dataset.

  1. General-purpose event generators for LHC physics

    CERN Document Server

    Buckley, Andy; Gieseke, Stefan; Grellscheid, David; Hoche, Stefan; Hoeth, Hendrik; Krauss, Frank; Lonnblad, Leif; Nurse, Emily; Richardson, Peter; Schumann, Steffen; Seymour, Michael H; Sjostrand, Torbjorn; Skands, Peter; Webber, Bryan

    2011-01-01

    We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard-scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond-Standard-Model processes. We describe the principal features of the ARIADNE, Herwig++, PYTHIA 8 and SHERPA generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators ...

  2. Using a cognitive architecture for general purpose service robot control

    Science.gov (United States)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

    A humanoid service robot equipped with a set of simple action skills including navigating, grasping, recognising objects or people, among others, is considered in this paper. By using those skills the robot should complete a voice command expressed in natural language encoding a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner has been used to decide skills to be activated, as well as in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner by selecting which action the robot should complete, addressing it towards the goal. Our proposal allows to include new goals for the robot just by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general purpose service robot.

  3. The ATLAS Trigger Algorithms for General Purpose Graphics Processor Units

    CERN Document Server

    Tavares Delgado, Ademar; The ATLAS collaboration

    2016-01-01

    We present the ATLAS Trigger algorithms developed to exploit General-Purpose Graphics Processor Units. ATLAS is a particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system has two levels, a hardware-based Level 1 and the High Level Trigger implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will increase with future LHC upgrades. General-purpose graphics processor units are being evaluated as a potential solution for trigger algorithm acceleration. Key factors determining the potential benefit of this new technology are the relative execution speedup, the number of GPUs required and the relative financial cost of the selected GPU. We have developed a trigger demonstrator which includes algorithms for reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Cal...

  4. Study on the Multi-Thread Mechanism of the Swing Graphical User Interface

    Institute of Scientific and Technical Information of China (English)

    胡家芬

    2012-01-01

    This paper introduces the Swing GUI toolkit and explains, from first principles, the threading rules that apply when Swing is used to develop user interfaces. It analyzes how the event dispatch thread processes events and how long-running tasks should be handled with additional threads, illustrates the solution strategy with an example, and describes how to perform such work using SwingWorker.
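    A minimal sketch of the SwingWorker pattern the abstract refers to (not code from the paper; the window contents are invented): the long-running computation happens off the event dispatch thread in doInBackground(), and done() runs back on the EDT so it may update the GUI.

```java
import javax.swing.*;

public class WorkerDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("SwingWorker demo");
            JLabel label = new JLabel("working...");
            frame.add(label);
            frame.setSize(250, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            // Heavy work off the event dispatch thread; done() returns to the EDT.
            new SwingWorker<Long, Void>() {
                @Override protected Long doInBackground() {
                    long sum = 0;
                    for (long i = 0; i < 100_000_000L; i++) sum += i;  // simulated long task
                    return sum;
                }
                @Override protected void done() {
                    try {
                        label.setText("result: " + get());
                    } catch (Exception e) {
                        label.setText("failed: " + e.getMessage());
                    }
                }
            }.execute();
        });
    }
}
```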

  5. From Graphic Processing Unit to General Purpose Graphic Processing Unit

    Institute of Scientific and Technical Information of China (English)

    刘金硕; 刘天晓; 吴慧; 曾秋梅; 任梦菲; 顾宜淳

    2013-01-01

    This paper defines the GPU (graphics processing unit), general-purpose computation on GPUs (GPGPU), and GPU-based programming models and environments. It divides the development of the GPU into four stages and describes how GPU architecture evolved from the non-unified rendering architecture to the unified rendering architecture and then to the new-generation Fermi architecture. It then compares GPGPU architecture with multi-core CPU architectures and distributed cluster architectures in terms of both hardware and software. The analysis indicates that medium-grained, thread-level, data-intensive parallel computation is best handled with multi-core, multi-threaded parallelism; coarse-grained, network-intensive parallel computation with cluster parallelism; and fine-grained, compute-intensive parallel computation with GPGPU parallelism. Finally, the paper presents research hotspots and future directions of GPGPU, namely automatic parallelization for GPGPU, CUDA support for multiple languages, and CUDA performance optimization, and introduces some typical GPGPU applications.

  6. General-purpose event generators for LHC physics

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, Andy [PPE Group, School of Physics and Astronomy, University of Edinburgh, EH25 9PN (United Kingdom); Butterworth, Jonathan [Department of Physics and Astronomy, University College London, WC1E 6BT (United Kingdom); Gieseke, Stefan [Institute for Theoretical Physics, Karlsruhe Institute of Technology, D-76128 Karlsruhe (Germany); Grellscheid, David [Institute for Particle Physics Phenomenology, Durham University, DH1 3LE (United Kingdom); Hoeche, Stefan [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Hoeth, Hendrik; Krauss, Frank [Institute for Particle Physics Phenomenology, Durham University, DH1 3LE (United Kingdom); Loennblad, Leif [Department of Astronomy and Theoretical Physics, Lund University (Sweden); PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland); Nurse, Emily [Department of Physics and Astronomy, University College London, WC1E 6BT (United Kingdom); Richardson, Peter [Institute for Particle Physics Phenomenology, Durham University, DH1 3LE (United Kingdom); Schumann, Steffen [Institute for Theoretical Physics, University of Heidelberg, 69120 Heidelberg (Germany); Seymour, Michael H. [School of Physics and Astronomy, University of Manchester, M13 9PL (United Kingdom); Sjoestrand, Torbjoern [Department of Astronomy and Theoretical Physics, Lund University (Sweden); Skands, Peter [PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland); Webber, Bryan, E-mail: webber@hep.phy.cam.ac.uk [Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom)

    2011-07-15

    We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond Standard Model processes. We describe the principal features of the ARIADNE, Herwig++, PYTHIA 8 and SHERPA generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists seeking a deeper insight into the tools available for signal and background simulation at the LHC.

  7. High-Speed General Purpose Genetic Algorithm Processor.

    Science.gov (United States)

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GAs procedure. Hence, we designed a digital CMOS implementation of GA in [Formula: see text] process. The proposed processor is not bounded to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through the bit string length extension of individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GAs procedure can be run on several connected processors simultaneously.

  8. General-purpose event generators for LHC physics

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, Andy; /Edinburgh U.; Butterworth, Jonathan; /University Coll. London; Gieseke, Stefan; /Karlsruhe U., ITP; Grellscheid, David; /Durham U., IPPP; Hoche, Stefan; /SLAC; Hoeth, Hendrik; Krauss, Frank; /Durham U., IPPP; Lonnblad, Leif; /Lund U., Dept. Theor. Phys. /CERN; Nurse, Emily; /University Coll. London; Richardson, Peter; /Durham U., IPPP; Schumann, Steffen; /Heidelberg U.; Seymour, Michael H.; /Manchester U.; Sjostrand, Torbjorn; /Lund U., Dept. Theor. Phys.; Skands, Peter; /CERN; Webber, Bryan; /Cambridge U.

    2011-03-03

    We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard-scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond-Standard-Model processes. We describe the principal features of the Ariadne, Herwig++, Pythia 8 and Sherpa generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists wanting a deeper insight into the tools available for signal and background simulation at the LHC.

  9. Foam A General Purpose Cellular Monte Carlo Event Generator

    CERN Document Server

    Jadach, Stanislaw

    2003-01-01

    A general purpose, self-adapting, Monte Carlo (MC) event generator (simulator) is described. The high efficiency of the MC, that is small maximum weight or variance of the MC weight is achieved by means of dividing the integration domain into small cells. The cells can be $n$-dimensional simplices, hyperrectangles or Cartesian product of them. The grid of cells, called ``foam'', is produced in the process of the binary split of the cells. The choice of the next cell to be divided and the position/direction of the division hyper-plane is driven by the algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution. As any MC generator, it can also be used for the MC integration. With the typical personal computer CPU, the program is able to perform adaptive integration/simulation at relatively small number of dimensions ($\\leq 16$). With the continu...

  10. Use of general purpose graphics processing units with MODFLOW.

    Science.gov (United States)

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.

  11. SNAP: A General Purpose Network Analysis and Graph Mining Library.

    Science.gov (United States)

    Leskovec, Jure; Sosič, Rok

    2016-10-01

    Large networks are becoming a widely used abstraction for studying complex systems in a broad set of disciplines, ranging from social network analysis to molecular biology and neuroscience. Despite an increasing need to analyze and manipulate large networks, only a limited number of tools are available for this task. Here, we describe Stanford Network Analysis Platform (SNAP), a general-purpose, high-performance system that provides easy to use, high-level operations for analysis and manipulation of large networks. We present SNAP functionality, describe its implementational details, and give performance benchmarks. SNAP has been developed for single big-memory machines and it balances the trade-off between maximum performance, compact in-memory graph representation, and the ability to handle dynamic graphs where nodes and edges are being added or removed over time. SNAP can process massive networks with hundreds of millions of nodes and billions of edges. SNAP offers over 140 different graph algorithms that can efficiently manipulate large graphs, calculate structural properties, generate regular and random graphs, and handle attributes and meta-data on nodes and edges. Besides being able to handle large graphs, an additional strength of SNAP is that networks and their attributes are fully dynamic, they can be modified during the computation at low cost. SNAP is provided as an open source library in C++ as well as a module in Python. We also describe the Stanford Large Network Dataset, a set of social and information real-world networks and datasets, which we make publicly available. The collection is a complementary resource to our SNAP software and is widely used for development and benchmarking of graph analytics algorithms.

  12. CLOUDCLOUD : general-purpose instrument monitoring and data managing software

    Science.gov (United States)

    Dias, António; Amorim, António; Tomé, António

    2016-04-01

    An effective experiment is dependent on the ability to store and deliver data and information to all participant parties regardless of their degree of involvement in the specific parts that make the experiment a whole. Having fast, efficient and ubiquitous access to data will increase visibility and discussion, such that the outcome will have already been reviewed several times, strengthening the conclusions. The CLOUD project aims at providing users with a general purpose data acquisition, management and instrument monitoring platform that is fast, easy to use, lightweight and accessible to all participants of an experiment. This work is now implemented in the CLOUD experiment at CERN and will be fully integrated with the experiment as of 2016. Despite being used in an experiment of the scale of CLOUD, this software can also be used in any size of experiment or monitoring station, from single computers to large networks of computers to monitor any sort of instrument output without influencing the individual instrument's DAQ. Instrument data and meta data is stored and accessed via a specially designed database architecture and any type of instrument output is accepted using our continuously growing parsing application. Multiple databases can be used to separate different data taking periods or a single database can be used if for instance an experiment is continuous. A simple web-based application gives the user total control over the monitored instruments and their data, allowing data visualization and download, upload of processed data and the ability to edit existing instruments or add new instruments to the experiment. When in a network, new computers are immediately recognized and added to the system and are able to monitor instruments connected to them. Automatic computer integration is achieved by a locally running python-based parsing agent that communicates with a main server application guaranteeing that all instruments assigned to that computer are

  13. SPIDR, a general-purpose readout system for pixel ASICs

    Science.gov (United States)

    van der Heijden, B.; Visser, J.; van Beuzekom, M.; Boterenbrood, H.; Kulis, S.; Munneke, B.; Schreuder, F.

    2017-02-01

    The SPIDR (Speedy PIxel Detector Readout) system is a flexible general-purpose readout platform that can be easily adapted to test and characterize new and existing detector readout ASICs. It is originally designed for the readout of pixel ASICs from the Medipix/Timepix family, but other types of ASICs or front-end circuits can be read out as well. The SPIDR system consists of an FPGA board with memory and various communication interfaces, FPGA firmware, CPU subsystem and an API library on the PC . The FPGA firmware can be adapted to read out other ASICs by re-using IP blocks. The available IP blocks include a UDP packet builder, 1 and 10 Gigabit Ethernet MAC's and a "soft core" CPU . Currently the firmware is targeted at the Xilinx VC707 development board and at a custom board called Compact-SPIDR . The firmware can easily be ported to other Xilinx 7 series and ultra scale FPGAs. The gap between an ASIC and the data acquisition back-end is bridged by the SPIDR system. Using the high pin count VITA 57 FPGA Mezzanine Card (FMC) connector only a simple chip carrier PCB is required. A 1 and a 10 Gigabit Ethernet interface handle the connection to the back-end. These can be used simultaneously for high-speed data and configuration over separate channels. In addition to the FMC connector, configurable inputs and outputs are available for synchronization with other detectors. A high resolution (≈ 27 ps bin size) Time to Digital converter is provided for time stamping events in the detector. The SPIDR system is frequently used as readout for the Medipix3 and Timepix3 ASICs. Using the 10 Gigabit Ethernet interface it is possible to read out a single chip at full bandwidth or up to 12 chips at a reduced rate. Another recent application is the test-bed for the VeloPix ASIC, which is developed for the Vertex Detector of the LHCb experiment. In this case the SPIDR system processes the 20 Gbps scrambled data stream from the VeloPix and distributes it over four 10 Gigabit

  14. A Utility Based Cache Optimization Mechanism for Multi-Thread Workloads

    Institute of Scientific and Technical Information of China (English)

    唐轶轩; 吴俊敏; 陈国良; 隋秀峰; 黄景

    2013-01-01

    Modern multi-core processors usually employ a shared level-2 cache to support fast data access among concurrent threads. However, under the pressure of high resource demand, the commonly used LRU policy may cause interference among threads and degrade overall performance. Partitioning the shared cache is a relatively flexible resource-allocation method, but most previous partitioning approaches were aimed at multi-programmed workloads and ignored the difference between the shared and private data access patterns of multi-threaded workloads, reducing the utility of shared data. Most traditional cache-partitioning methods also target a single memory access pattern and neglect the frequency and recency information of cache lines. In this paper, we study the access characteristics of private and shared data in multi-threaded workloads and propose a utility-based pseudo-partitioning cache management mechanism (UPP). UPP dynamically collects utility information for each thread and for the shared data, and takes the overall marginal utility as the metric for cache partitioning. In addition, UPP exploits both frequency and recency information of a workload simultaneously, in order to evict dead cache lines early and filter rarely reused blocks through dynamic insertion and promotion mechanisms.

  15. Efficient methods for implementation of multi-level nonrigid mass-preserving image registration on GPUs and multi-threaded CPUs.

    Science.gov (United States)

    Ellingwood, Nathan D; Yin, Youbing; Smith, Matthew; Lin, Ching-Long

    2016-04-01

    Faster and more accurate methods for registration of images are important for research involved in conducting population-based studies that utilize medical imaging, as well as improvements for use in clinical applications. We present a novel computation- and memory-efficient multi-level method on graphics processing units (GPU) for performing registration of two computed tomography (CT) volumetric lung images. We developed a computation- and memory-efficient Diffeomorphic Multi-level B-Spline Transform Composite (DMTC) method to implement nonrigid mass-preserving registration of two CT lung images on GPU. The framework consists of a hierarchy of B-Spline control grids of increasing resolution. A similarity criterion known as the sum of squared tissue volume difference (SSTVD) was adopted to preserve lung tissue mass. The use of SSTVD consists of the calculation of the tissue volume, the Jacobian, and their derivatives, which makes its implementation on GPU challenging due to memory constraints. The use of the DMTC method enabled reduced computation and memory storage of variables with minimal communication between GPU and Central Processing Unit (CPU) due to ability to pre-compute values. The method was assessed on six healthy human subjects. Resultant GPU-generated displacement fields were compared against the previously validated CPU counterpart fields, showing good agreement with an average normalized root mean square error (nRMS) of 0.044±0.015. Runtime and performance speedup are compared between single-threaded CPU, multi-threaded CPU, and GPU algorithms. Best performance speedup occurs at the highest resolution in the GPU implementation for the SSTVD cost and cost gradient computations, with a speedup of 112 times that of the single-threaded CPU version and 11 times over the twelve-threaded version when considering average time per iteration using a Nvidia Tesla K20X GPU. The proposed GPU-based DMTC method outperforms its multi-threaded CPU version in terms

  16. InfVis--platform-independent visual data mining of multidimensional chemical data sets.

    Science.gov (United States)

    Oellien, Frank; Ihlenfeldt, Wolf-Dietrich; Gasteiger, Johann

    2005-01-01

    The tremendous increase of chemical data sets, both in size and number, and the simultaneous desire to speed up the drug discovery process has resulted in an increasing need for a new generation of computational tools that assist in the extraction of information from data and allow for rapid and in-depth data mining. During recent years, visual data mining has become an important tool within the life sciences and drug discovery area with the potential to help avoiding data analysis from turning into a bottleneck. In this paper, we present InfVis, a platform-independent visual data mining tool for chemists, who usually only have little experience with classical data mining tools, for the visualization, exploration, and analysis of multivariate data sets. InfVis represents multidimensional data sets by using intuitive 3D glyph information visualization techniques. Interactive and dynamic tools such as dynamic query devices allow real-time, interactive data set manipulations and support the user in the identification of relationships and patterns. InfVis has been implemented in Java and Java3D and can be run on a broad range of platforms and operating systems. It can also be embedded as an applet in Web-based interfaces. We will present in this paper examples detailing the analysis of a reaction database that demonstrate how InfVis assists chemists in identifying and extracting hidden information.

  17. HDBStat!: A platform-independent software suite for statistical analysis of high dimensional biology data

    Directory of Open Access Journals (Sweden)

    Brand Jacob PL

    2005-04-01

    Full Text Available Abstract Background Many efforts in microarray data analysis are focused on providing tools and methods for the qualitative analysis of microarray data. HDBStat! (High-Dimensional Biology-Statistics) is a software package designed for analysis of high dimensional biology data such as microarray data. It was initially developed for the analysis of microarray gene expression data, but it can also be used for some applications in proteomics and other aspects of genomics. HDBStat! provides statisticians and biologists a flexible and easy-to-use interface to analyze complex microarray data using a variety of methods for data preprocessing, quality control analysis and hypothesis testing. Results Results generated from data preprocessing methods, quality control analysis and hypothesis testing methods are output in the form of Excel CSV tables, graphs and an HTML report summarizing data analysis. Conclusion HDBStat! is a platform-independent software that is freely available to academic institutions and non-profit organizations. It can be downloaded from our website http://www.soph.uab.edu/ssg_content.asp?id=1164.

  18. Development of a platform-independent receiver control system for SISIFOS

    Science.gov (United States)

    Lemke, Roland; Olberg, Michael

    1998-05-01

    Up to now receiver control software was a time consuming development usually written by receiver engineers who had mainly the hardware in mind. We are presenting a low-cost and very flexible system which uses a minimal interface to the real hardware, and which makes it easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a simple query language (SQL) database for information storage and retrieval, Ethernet socket to socket communication and SCPI as a command control language. The complete system is in principal platform independent but for cost saving reasons we are using it actually on a PC486 running Linux 2.0.30, which is a copylefted Unix. The only hardware dependent part are the digital input/output boards, analog to digital and digital to analog convertors. In the case of the Linux PC we are using a device driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like an ordinary device. The advantage of this system is firstly the low price and secondly the clear separation between the different software components which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software in a single machine,the SQL-database or the graphical user interface could be installed on separate computers.

  19. Multi-thread Programming for Image Display of Navigational Radar

    Institute of Scientific and Technical Information of China (English)

    祝宏涛; 姚萌; 蔡天艳; 项凌云

    2011-01-01

    Software for displaying navigational radar images on an SoPC platform was designed. The code was divided into tasks according to the way the system runs and the radar image display task, which covers both the radar sweep and character data; the resulting tasks are hardware initialization, starting data reception and processing, range setting, window display, and shutdown. Corresponding to this task division, the radar image display software was written using multi-threaded programming under Windows. Debugging and running the software on the SoPC hardware platform show that it meets the requirements of navigational radar image display.

  20. A Comprehensive Toolset for General-Purpose Private Computing and Outsourcing

    Science.gov (United States)

    2016-12-08

    AFRL-AFOSR-VA-TR-2016-0368, A Comprehensive Toolset for General-Purpose Private Computing and Outsourcing, Marina Blanton, University of Notre Dame, reporting period 2013 to 31 Aug 2016. ... necessary tools and techniques for supporting general-purpose secure computation and outsourcing. The three main thrusts of the project are: (i

  1. Power controllable WLAN MAC protocol implementation based on hardware multi-threaded network processor

    Institute of Scientific and Technical Information of China (English)

    王磊; 张晓彤; 张艳丽; 王沁

    2012-01-01

    针对在如何在提高网络吞吐率并满足实时性需求的同时消耗更少的功耗的问题,以硬件多线程网络处理为平台,以IEEE 802.11 MAC层协议为例,通过对MAC层数据流的模式、数据流上的操作行为以及时间约束进行建模并测试分析,提出一种多线程化网络协议的软件实现方法;配合动态功耗可控的多线程网络处理器能够根据流量和实时性自适应地调整系统的性能.实验结果证明,异构多线程结构程序在实时性任务时五个软件线程需四个硬件线程支持,而无实时性任务只需两个硬件线程支持.提出的多线程MAC层协议编程模型能够达到根据网络负载特征动态控制处理器性能的目的.%How to improve network throughput and meet the real-time while consuming less power process are key concerns during network processor designing. Hardware multi-threaded network processor as a platform, IEEE 802. 11MAC layer proto-col as an example, modeling based on MAC layer data stream model, data stream operation behavior and time constraints and test the model. This paper presented a multi-threaded network protocol software implementation method. This method could adjust system performance based on traffic and real-time with dynamic power controlled multi-threaded network processor, thereby reducing power consumption when the processor was running. The result shows that real-time tasks require 4 hard threads on multi-threaded processor while only 2 are required for non-realtime tasks. This programming model provided proces-sors the ability to dynamically adapt network workload characteristics.

  2. Rotary Dump Online Monitoring System Design Based on Multi-thread Technology

    Institute of Scientific and Technical Information of China (English)

    李景松; 刘泉

    2009-01-01

    The rotary dumper is one of the most complex and critical pieces of equipment in an unloading system, and its health directly affects the efficiency of unloading operations. This paper studies multi-threading under the Windows operating system and, by exploiting multitasking and parallel code execution, implements real-time acquisition and processing of the rotary dumper's monitoring-point data, designs a multi-thread-based data acquisition and analysis workflow, and develops an online health monitoring system for the rotary dumper.
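    A minimal Java sketch of the kind of threaded acquisition-and-processing split described above (the paper's system is Windows-based and instrument-specific; names and data here are invented): a sampling thread pushes readings into a bounded queue while an analysis thread consumes them.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AcquisitionPipeline {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue decouples the sampling thread from the analysis thread.
        BlockingQueue<Double> samples = new ArrayBlockingQueue<>(1024);

        Thread producer = new Thread(() -> {          // simulated sensor sampling
            try {
                for (int i = 0; i < 10_000; i++) {
                    samples.put(Math.sin(i / 100.0)); // hypothetical vibration sample
                }
                samples.put(Double.NaN);              // sentinel: end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {          // analysis / display thread
            try {
                double peak = 0;
                for (double s = samples.take(); !Double.isNaN(s); s = samples.take()) {
                    peak = Math.max(peak, Math.abs(s));
                }
                System.out.println("peak amplitude = " + peak);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```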

  3. Technique of Multi-thread Download Based on P2P Network

    Institute of Scientific and Technical Information of China (English)

    姬涛

    2014-01-01

    The paper introduces P2P and multi-threading and briefly describes the relevant theory, technical features and application scope of P2P. Several typical P2P file-sharing system models, such as Napster, Gnutella and Kazaa, are analysed. On this basis, an improved file-sharing and multi-threaded transfer mechanism based on the Gnutella network model is proposed.
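
    As a rough illustration of the multi-threaded segmented transfer idea mentioned above (not the paper's Gnutella-based mechanism), the sketch below splits a file into byte ranges and fetches each range on its own thread; the "peer" is simulated in memory so the example is self-contained.

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

# Sketch of multi-threaded segmented download: the file is split into byte
# ranges, each range is fetched by a worker thread, and the chunks are
# reassembled in offset order. A real client would issue range requests to
# several peers instead of calling this in-memory stand-in.
FILE = bytes(range(256)) * 4096          # stand-in for the remote file (1 MiB)
CHUNK = 256 * 1024

def fetch_range(start, end):
    """Simulated peer transfer of bytes [start, end)."""
    return start, FILE[start:end]

ranges = [(o, min(o + CHUNK, len(FILE))) for o in range(0, len(FILE), CHUNK)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = dict(pool.map(lambda r: fetch_range(*r), ranges))

data = b"".join(parts[o] for o, _ in ranges)
assert hashlib.sha256(data).digest() == hashlib.sha256(FILE).digest()
```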

  4. The Graphical Implementation of Sorting Algorithms Based on Multi-threading in .NET

    Institute of Scientific and Technical Information of China (English)

    郭忠南

    2012-01-01

    Multi-threading and GDI+ are among the important and difficult topics in .NET. Through a worked example, the paper illustrates the main steps and techniques for visualising sorting within the .NET multi-threading mechanism, and introduces multi-threading and GDI+ technology. Drawing the lines that animate the sort adds load to the computer, which makes the measured sorting efficiency deviate from the theoretical one. The example can be improved from different angles according to actual teaching needs.

  5. A general purpose exact Rayleigh scattering look-up table for ocean color remote sensing

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The current exact Rayleigh scattering calculation in ocean color remote sensing uses a look-up table (LUT) that is usually created for a specific remote sensor and cannot be applied to other sensors. For practical application, a general purpose Rayleigh scattering LUT that can be applied to all ocean color remote sensors is generated. An adding-doubling method to solve the vector radiative transfer equation in the plane-parallel atmosphere is derived in detail. Compared with the exact Rayleigh scattering radiance derived from the MODIS exact Rayleigh scattering LUT, the relative error of the Rayleigh scattering calculation with the adding-doubling method is less than 0.25%, which meets the required accuracy of the atmospheric correction of ocean color remote sensing. Therefore, the adding-doubling method can be used to generate the exact Rayleigh scattering LUT for ocean color remote sensors. Finally, the general purpose exact Rayleigh scattering LUT is generated using the adding-doubling method. On the basis of this general purpose LUT, the calculated Rayleigh scattering radiance is tested by comparison with the LUTs of MODIS, SeaWiFS and other ocean color sensors, showing that the relative errors are all less than 0.5% and that the general purpose LUT can be applied to all ocean color remote sensors.

  6. Apple-CORE: harnessing general-purpose many-cores with hardware concurrency management

    NARCIS (Netherlands)

    Poss, R.; Lankamp, M.; Yang, Q.; Fu, J.; van Tol, M.W.; Uddin, I.; Jesshope, C.

    2013-01-01

    To harness the potential of CMPs for scalable, energy-efficient performance in general-purpose computers, the Apple-CORE project has co-designed a general machine model and concurrency control interface with dedicated hardware support for concurrency management across multiple cores. Its SVP interfa

  7. 78 FR 65300 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-10-31

    ...: Federal Docket Management System Office, 4800 Mark Center Drive, East Tower, 2nd floor, Suite 02G09... of the Secretary Notice of Availability (NOA) for General Purpose Warehouse and Information... Purpose Warehouse and Information Technology Center at Defense Distribution Depot San Joaquin,...

  8. Implementing the 2-D Wavelet Transform on SIMD-Enhanced General-Purpose Processors

    NARCIS (Netherlands)

    Shahbahrami, A.; Juurlink, B.; Vassiliadis, S.

    2008-01-01

    The 2-D Discrete Wavelet Transform (DWT) consumes up to 68% of the JPEG2000 encoding time. In this paper, we develop efficient implementations of this important kernel on general-purpose processors (GPPs), in particular the Pentium 4 (P4). Efficient implementations of the 2-D DWT on the P4 must addr

  9. JETSET: Physics at LEAR with an Internal Gas Jet Target and an Advanced General Purpose Detector

    CERN Multimedia

    2002-01-01

    This experiment involves an internal gas cluster jet target surrounded by a compact general-purpose detector. The LEAR beam and internal jet target provide several important experimental features: high luminosity ($10^{30}$ cm$^{-2}$ sec$^{-1}$), excellent mass resolution ($\Delta

  10. Design and Implementation of Software for Microcomputer Controlled Batching System Based on Multi-Threading

    Institute of Scientific and Technical Information of China (English)

    郑家辉; 万东梅

    2011-01-01

    In this microcomputer-controlled batching system, the working principle and the computer control scheme of the dual-scale batching system are discussed. A software design flow chart and the program code implementing multi-threaded control are given. The method has been applied to feed production and has played an important role in improving production speed and productivity.

  11. 21 CFR 862.2050 - General purpose laboratory equipment labeled or promoted for a specific medical use.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false General purpose laboratory equipment labeled or... TOXICOLOGY DEVICES Clinical Laboratory Instruments § 862.2050 General purpose laboratory equipment labeled or promoted for a specific medical use. (a) Identification. General purpose laboratory equipment labeled...

  12. General Purpose Satellites: a concept for affordable low earth orbit vehicles

    OpenAIRE

    Boyd, Austin W.; Fuhs, Allen E.

    1997-01-01

    A general purpose satellite has been designed which will be launched from the Space Shuttle using a NASA Get-Away-Special (GAS) canister. The design is based upon the use of a new extended GAS canister and a low profile launch mechanism. The satellite is cylindrical, measuring 19 inches in diameter and 35 inches long. The maximum vehicle weight is 250 pounds, of which 50 pounds is dedicated to user payloads. The remaining 200 pounds encompasses the satellite structure and support ...

  13. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  14. Design of low-cost general purpose microcontroller based neuromuscular stimulator.

    Science.gov (United States)

    Koçer, S; Rahmi Canal, M; Güler, I

    2000-04-01

    In this study, a general purpose, low-cost, programmable, portable and high performance stimulator is designed and implemented. For this purpose, a microcontroller is used in the design of the stimulator. The duty cycle and amplitude of the designed system can be controlled using a keyboard. The performance test of the system has shown that the results are reliable. The overall system can be used as the neuromuscular stimulator under safe conditions.

  15. Biomarker discovery by CE-MS enables sequence analysis via MS/MS with platform-independent separation.

    Science.gov (United States)

    Zürbig, Petra; Renfrow, Matthew B; Schiffer, Eric; Novak, Jan; Walden, Michael; Wittke, Stefan; Just, Ingo; Pelzing, Matthias; Neusüss, Christian; Theodorescu, Dan; Root, Karen E; Ross, Mark M; Mischak, Harald

    2006-06-01

    CE-MS is a successful proteomic platform for the definition of biomarkers in different body fluids. Besides the biomarker defining experimental parameters, CE migration time and molecular weight, especially biomarker's sequence identity is an indispensable cornerstone for deeper insights into the pathophysiological pathways of diseases or for made-to-measure therapeutic drug design. Therefore, this report presents a detailed discussion of different peptide sequencing platforms consisting of high performance separation method either coupled on-line or off-line to different MS/MS devices, such as MALDI-TOF-TOF, ESI-IT, ESI-QTOF and Fourier transform ion cyclotron resonance, for sequencing indicative peptides. This comparison demonstrates the unique feature of CE-MS technology to serve as a reliable basis for the assignment of peptide sequence data obtained using different separation MS/MS methods to the biomarker defining parameters, CE migration time and molecular weight. Discovery of potential biomarkers by CE-MS enables sequence analysis via MS/MS with platform-independent sample separation. This is due to the fact that the number of basic and neutral polar amino acids of biomarkers sequences distinctly correlates with their CE-MS migration time/molecular weight coordinates. This uniqueness facilitates the independent entry of different sequencing platforms for peptide sequencing of CE-MS-defined biomarkers from highly complex mixtures.

  16. Generalized Fluid System Simulation Program (GFSSP) Version 6 - General Purpose Thermo-Fluid Network Analysis Software

    Science.gov (United States)

    Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul

    2011-01-01

    GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program to compute pressure, temperature and flow distribution in a flow network. GFSSP calculates pressure, temperature, and concentrations at nodes and calculates flow rates through branches. It was primarily developed to analyze the internal flow of a turbopump and the transient flow of a propulsion system. GFSSP development started in 1994 with the objective of providing a generalized and easy-to-use flow analysis tool for thermo-fluid systems.

  17. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for the NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked by computation time for various input data cube sizes, is compared across GPUs and CPUs. The analysis demonstrates that GPGPU (General Purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.

  18. General purpose pulse shape analysis for fast scintillators implemented in digital readout electronics

    Science.gov (United States)

    Asztalos, Stephen J.; Hennig, Wolfgang; Warburton, William K.

    2016-01-01

    Pulse shape discrimination applied to certain fast scintillators is usually performed offline. In sufficiently high-event rate environments data transfer and storage become problematic, which suggests a different analysis approach. In response, we have implemented a general purpose pulse shape analysis algorithm in the XIA Pixie-500 and Pixie-500 Express digital spectrometers. In this implementation waveforms are processed in real time, reducing the pulse characteristics to a few pulse shape analysis parameters and eliminating time-consuming waveform transfer and storage. We discuss implementation of these features, their advantages, necessary trade-offs and performance. Measurements from bench top and experimental setups using fast scintillators and XIA processors are presented.

  19. Development of a Real-Time General-Purpose Digital Signal Processing Laboratory System.

    Science.gov (United States)

    1983-12-01

    This investigation resulted in the design and implementation of software to support a real-time, general-purpose digital signal processing (DSP) system. The major design aims for the system were that it: be easy to use, support a wide variety of DSP

  20. Platform-Independent Cirrus and Spectralis Thickness Measurements in Eyes with Diabetic Macular Edema Using Fully Automated Software.

    Science.gov (United States)

    Willoughby, Alex S; Chiu, Stephanie J; Silverman, Rachel K; Farsiu, Sina; Bailey, Clare; Wiley, Henry E; Ferris, Frederick L; Jaffe, Glenn J

    2017-02-01

    We determine whether the automated segmentation software, Duke Optical Coherence Tomography Retinal Analysis Program (DOCTRAP), can measure, in a platform-independent manner, retinal thickness on Cirrus and Spectralis spectral domain optical coherence tomography (SD-OCT) images in eyes with diabetic macular edema (DME) under treatment in a clinical trial. Automatic segmentation software was used to segment the internal limiting membrane (ILM), inner retinal pigment epithelium (RPE), and Bruch's membrane (BM) in SD-OCT images acquired by Cirrus and Spectralis commercial systems, from the same eye, on the same day during a clinical interventional DME trial. Mean retinal thickness differences were compared across commercial and DOCTRAP platforms using intraclass correlation (ICC) and Bland-Altman plots. The mean 1 mm central subfield thickness difference (standard error [SE]) comparing segmentation of Spectralis images with DOCTRAP versus HEYEX was 0.7 (0.3) μm (0.2 pixels). The corresponding values comparing segmentation of Cirrus images with DOCTRAP versus Cirrus software was 2.2 (0.7) μm. The mean 1 mm central subfield thickness difference (SE) comparing segmentation of Cirrus and Spectralis scan pairs with DOCTRAP using BM as the outer retinal boundary was -2.3 (0.9) μm compared to 2.8 (0.9) μm with inner RPE as the outer boundary. DOCTRAP segmentation of Cirrus and Spectralis images produces validated thickness measurements that are very similar to each other, and very similar to the values generated by the corresponding commercial software in eyes with treated DME. This software enables automatic total retinal thickness measurements across two OCT platforms, a process that is impractical to perform manually.


  2. A Parallel Algorithm for HEVC Entropy Coding Based on Syntax-level Grouping and Multi-thread Processing

    Institute of Scientific and Technical Information of China (English)

    邸金红; 张克新; 祁跻; 张鑫明

    2014-01-01

    As a new generation video coding standard, High Efficiency Video Coding (HEVC) achieves higher compression efficiency, but it also requires a large amount of computation. Parallel HEVC encoding algorithms are important for improving encoding speed, so designing parallel coding algorithms suitable for multi-core processors is of great significance for meeting the requirements of real-time transmission and large-scale sharing of high-definition (HD) video. In this paper, a parallel entropy-coding framework based on syntax-level grouping and multi-thread processing is presented. First, the coding information of a Coding Tree Unit (CTU) in HEVC is grouped according to the syntax elements; second, a syntax-level parallel encoder is built according to the correlation between the data blocks; finally, frame-level parallel coding in HEVC is achieved in combination with multi-threaded computation. Experimental results show that the encoding speed of the proposed solution is 65%~70% faster than that of the traditional serial architecture, while the subjective and objective quality of the coded video is not significantly degraded.
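
    The dispatch pattern behind syntax-level grouping can be sketched as below. This is only an illustration of grouping plus a thread pool, with an invented encode_group stand-in; it is not real CABAC entropy coding and not the paper's encoder.

```python
from concurrent.futures import ThreadPoolExecutor

# Dispatch-pattern sketch only (not real CABAC): syntax elements of a coding
# tree unit are grouped by type and each group is handled by a worker thread,
# echoing the syntax-level grouping plus multi-threading described above.
def encode_group(group_name, elements):
    # Fake "bit count" standing in for the actual entropy-coding routine.
    return group_name, sum(len(bin(e)) - 2 for e in elements)

ctu_syntax = {
    "split_flags":  [1, 0, 0, 1],
    "pred_modes":   [0, 1, 1, 0],
    "coeff_levels": [3, 0, 7, 2, 1],
}

with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(lambda kv: encode_group(*kv), ctu_syntax.items()))
print(results)
```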

  3. Knowledge Management Systems as an Interdisciplinary Communication and Personalized General-Purpose Technology

    Directory of Open Access Journals (Sweden)

    Ulrich Schmitt

    2015-10-01

    As drivers of human civilization, Knowledge Management (KM) processes have co-evolved in line with General-Purpose-Technologies (GPT), such as writing, printing, and information and communication systems. As evidenced by the recent shift from information scarcity to abundance, GPTs are capable of drastically altering societies due to their game-changing impact on our spheres of work and personal development. This paper looks at the prospect of whether a novel Personal Knowledge Management (PKM) concept supported by a prototype system has got what it takes to grow into a transformative General-Purpose-Technology. Following up on a series of papers, the KM scenario of a decentralizing revolution where individuals and self-organized groups yield more power and autonomy is examined according to a GPT's essential characteristics, including a wide scope for improvement and elaboration (in people's private, professional and societal life), applicability across a broad range of uses in a wide variety of products and processes (in multi-disciplinary educational and work contexts), and strong complementarities with existing or potential new technologies (like organizational KM Systems and a proposed World Heritage of Memes Repository). The result portrays the PKM concept as a strong candidate due to its personal, autonomous, bottom-up, collaborative, interdisciplinary, and creativity-supporting approach destined to advance the availability, quantity, and quality of the world extelligence and to allow for a wider sharing and faster diffusion of ideas across current disciplinary and opportunity divides.

  4. General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations

    Science.gov (United States)

    Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark

    2010-01-01

    Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.

  5. General-purpose and dedicated regimes in the use of telescopes

    CERN Document Server

    Lamy, Jerome

    2009-01-01

    We propose a sociohistorical framework for better understanding the evolution in the use of telescopes. We define two regimes of use: a general-purpose (or survey) one, where the telescope governs research, and a dedicated one, in which the telescope is tailored to a specific project which includes a network of other tools. This conceptual framework is first applied to the history of the 80-cm telescope of Toulouse Observatory, which is initially anchored in a general-purpose regime linked to astrometry. After a transition in the 1930s, it is integrated in a dedicated regime centered on astrophysics. This evolution is compared to that of a very similar instrument, the 80-cm telescope of Marseille Observatory, which converts early on to the dedicated regime with the Fabry-Perot interferometer around 1910, and, after a period of idleness, is again used in the survey mode after WWII. To further validate our new concept, we apply it to the telescopes of Washburn Observatory, of Dominion Astrophysical Observatory...

  6. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    Science.gov (United States)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  7. The Random Sampling Algorithm of Unequal Probability Based on Multi-threading

    Institute of Scientific and Technical Information of China (English)

    容飞龙

    2015-01-01

    Random sampling with unequal probabilities based on variable parameters is widely used, for example in lottery draws weighted by the amount of consumption and in the shuffle play of music players. This paper presents an unequal-probability sampling algorithm based on multi-threading. The core of the algorithm is that each thread's sleep() time is tied in proportion to its variable parameter. The algorithm can meet a variety of complex and changing needs.
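
    A minimal sketch of the sleep-in-proportion-to-the-parameter idea is given below, under the assumption that candidates with larger parameters should be selected more often: if each candidate's thread sleeps for an exponential random time whose rate equals its weight, the first thread to wake is selected with probability proportional to that weight. The candidate names and weights are illustrative, not from the paper.

```python
import threading
import random
import time

# Weighted random selection via a "sleep race": each candidate thread sleeps
# for an Exponential(rate=weight) time (scaled down), and the first to wake
# wins; the minimum of independent exponentials picks candidate i with
# probability weight_i / sum(weights).
winner = None
lock = threading.Lock()

def race(name, weight, scale=0.01):
    global winner
    time.sleep(random.expovariate(weight) * scale)
    with lock:
        if winner is None:
            winner = name

candidates = {"song_a": 1.0, "song_b": 3.0, "song_c": 6.0}
threads = [threading.Thread(target=race, args=(n, w)) for n, w in candidates.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("selected:", winner)
```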

  8. Performance Comparison of Multi-thread Parallel Floyd Algorithm with Macro Parameter and Global Variable

    Institute of Scientific and Technical Information of China (English)

    李超燕; 裴林滔

    2014-01-01

    A multi-threaded parallel Floyd algorithm is implemented on the Ubuntu operating system. Analysis of the experimental data shows that the parallel program whose cost matrix A has its size defined by a global variable achieves better parallel performance than the parallel program whose matrix size is defined by a macro parameter. This is the opposite of the serial case, where the program with the matrix size defined by a macro parameter performs better.
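
    For reference, a row-partitioned multi-threaded Floyd iteration looks like the sketch below. It is written in Python purely to illustrate the partitioning and the per-k synchronization; the macro-versus-global-variable effect discussed in the record is specific to the original C implementation and is not reproduced here.

```python
from concurrent.futures import ThreadPoolExecutor
import math

# Row-partitioned Floyd-Warshall sketch: for each pivot k, the rows are split
# between worker threads; row k and column k do not change during iteration k,
# so the partitions can be relaxed concurrently. (CPython threads will not
# actually speed up this CPU-bound loop; this is illustrative only.)
INF = math.inf
A = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
n = len(A)

def relax_rows(rows, k):
    for i in rows:
        aik = A[i][k]
        for j in range(n):
            if aik + A[k][j] < A[i][j]:
                A[i][j] = aik + A[k][j]

with ThreadPoolExecutor(max_workers=2) as pool:
    halves = [range(0, n // 2), range(n // 2, n)]
    for k in range(n):                       # the k-loop stays sequential
        list(pool.map(lambda r: relax_rows(r, k), halves))  # barrier per k

print(A)
```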

  9. An evaluation of alternate production methods for Pu-238 general purpose heat source pellets

    Energy Technology Data Exchange (ETDEWEB)

    Mark Borland; Steve Frank

    2009-06-01

    For the past half century, the National Aeronautics and Space Administration (NASA) has used Radioisotope Thermoelectric Generators (RTG) to power deep space satellites. Fabricating heat sources for RTGs, specifically General Purpose Heat Sources (GPHSs), has remained essentially unchanged since their development in the 1970s. Meanwhile, 30 years of technological advancements have been made in the applicable fields of chemistry, manufacturing and control systems. This paper evaluates alternative processes that could be used to produce Pu 238 fueled heat sources. Specifically, this paper discusses the production of the plutonium-oxide granules, which are the input stream to the ceramic pressing and sintering processes. Alternate chemical processes are compared to current methods to determine if alternative fabrication processes could reduce the hazards, especially the production of respirable fines, while producing an equivalent GPHS product.

  10. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding.

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.

  11. Real-time traffic sign recognition based on a general purpose GPU and deep-learning

    Science.gov (United States)

    Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations when under low illumination or wide variance of light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detecting and recognition using a hierarchical model. This method produces stable results in low illumination environments. Both detection and hierarchical recognition are performed in real-time, and the proposed method achieves 0.97 F1-score on our collective dataset, which uses the Vienna convention traffic rules (Germany and South Korea). PMID:28264011

  12. Simrank: Rapid and sensitive general-purpose k-mer search tool

    Energy Technology Data Exchange (ETDEWEB)

    DeSantis, T.Z.; Keller, K.; Karaoz, U.; Alekseyenko, A.V; Singh, N.N.S.; Brodie, E.L; Pei, Z.; Andersen, G.L; Larsen, N.

    2011-04-01

    Terabyte-scale collections of string-encoded data are expected from consortia efforts such as the Human Microbiome Project (http://nihroadmap.nih.gov/hmp). Intra- and inter-project data similarity searches are enabled by rapid k-mer matching strategies. Software applications for sequence database partitioning, guide tree estimation, molecular classification and alignment acceleration have benefited from embedded k-mer searches as sub-routines. However, a rapid, general-purpose, open-source, flexible, stand-alone k-mer tool has not been available. Here we present a stand-alone utility, Simrank, which allows users to rapidly identify database strings the most similar to query strings. Performance testing of Simrank and related tools against DNA, RNA, protein and human-languages found Simrank 10X to 928X faster depending on the dataset. Simrank provides molecular ecologists with a high-throughput, open source choice for comparing large sequence sets to find similarity.
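
    The core k-mer matching idea can be illustrated with the toy sketch below; it is not Simrank's actual data structure or scoring, just a simple set-overlap ranking over overlapping k-mers, with made-up sequences.

```python
# Toy illustration of k-mer based similarity search: sequences are reduced to
# sets of overlapping k-mers and database entries are ranked by the fraction
# of query k-mers they share.
def kmers(seq, k=7):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def rank(query, database, k=7):
    q = kmers(query, k)
    scores = {name: len(q & kmers(s, k)) / len(q) for name, s in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

db = {
    "seq1": "ACGTACGTAGCTAGCTAGGATCCGAT",
    "seq2": "TTTTGGGGCCCCAAAATTTTGGGGCC",
}
print(rank("ACGTACGTAGCTAGGATCC", db))
```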

  13. Developing wearable bio-feedback systems: a general-purpose platform.

    Science.gov (United States)

    Bianchi, Luigi; Babiloni, Fabio; Cincotti, Febo; Arrivas, Marco; Bollero, Patrizio; Marciani, Maria Grazia

    2003-06-01

    Microprocessors, even those in PocketPCs, have adequate power for many real-time biofeedback applications for disabled people. This power allows design of portable or wearable devices that are smaller and lighter, and that have longer battery life compared to notebook-based systems. In this paper, we discuss a general-purpose hardware/software solution based on industrial or consumer devices and a C++ framework. Its flexibility and modularity make it adaptable to a wide range of situations. Moreover, its design minimizes system requirements and programming effort, thus allowing efficient systems to be built quickly and easily. Our design has been used to build two brain computer interface systems that were easily ported from the Win32 platform.

  14. Upgrade of the Cellular General Purpose Monte Carlo Tool FOAM to version 2.06

    CERN Document Server

    Jadach, Stanislaw

    2006-01-01

    FOAM-2.06 is an upgraded version of FOAM, a general purpose, self-adapting Monte Carlo event generator. In comparison with FOAM-2.05, it has two important improvements. A new interface to random numbers lets the user choose from three "state of the art" random number generators. Improved algorithms for the simplicial grid need less computer memory; the problem of the prohibitively large memory allocation required for a large number ($>10^6$) of simplicial cells is now eliminated -- the new version can handle such cases even on an average desktop computer. In addition, generation of the Monte Carlo events, in the case of a large number of cells, may even be significantly faster.

  15. Design and Implementation of 3D Model Database for General-Purpose 3D GIS

    Institute of Scientific and Technical Information of China (English)

    XU Weiping; ZHU Qing; DU Zhiqiang; ZHANG Yeting

    2010-01-01

    To improve the reusability of three-dimensional (3D) models and simplify the complexity of natural scene reconstruction, this paper presents a 3D model database for general-purpose 3D GIS. After introducing its extensible function architecture and summarising the implicit spatial-temporal hierarchy of models in any reconstructed scene of a general-purpose 3D GIS, several key issues are discussed in detail, such as the storage and management of 3D models, the related retrieval and loading methods, and the interfaces for further on-demand development. Finally, the validity and feasibility of this model database are demonstrated through its application in the development of a 3D visualization system for railway operation.

  16. GPACC program cost work breakdown structure-dictionary. General purpose aft cargo carrier study, volume 3

    Science.gov (United States)

    1985-01-01

    The results of detailed cost estimates and economic analysis performed on the updated Model 101 configuration of the general purpose Aft Cargo Carrier (ACC) are given. The objective of this economic analysis is to provide the National Aeronautics and Space Administration (NASA) with information on the economics of using the ACC on the Space Transportation System (STS). The detailed cost estimates for the ACC are presented by a work breakdown structure (WBS) to ensure that all elements of cost are considered in the economic analysis and related subsystem trades. Costs reported by WBS provide NASA with a basis for comparing competing designs and provide detailed cost information that can be used to forecast phase C/D planning for new projects or programs derived from preliminary conceptual design studies. The scope covers all STS and STS/ACC launch vehicle cost impacts for delivering payloads to a 160 NM low Earth orbit (LEO).

  17. Literature Review: Weldability of Iridium DOP-26 Alloy for General Purpose Heat Source

    Energy Technology Data Exchange (ETDEWEB)

    Burgardt, Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pierce, Stanley W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-19

    The basic purpose of this paper is to provide a literature review relative to fabrication of the General Purpose Heat Source (GPHS) that is used to provide electrical power for deep space missions of NASA. The particular fabrication operation to be addressed here is arc welding of the GPHS encapsulation. A considerable effort was made to optimize the fabrication of the fuel pellets and of other elements of the encapsulation; that work will not be directly addressed in this paper. This report consists of three basic sections: 1) a brief description of the GPHS will be provided as background information for the reader; 2) mechanical properties and the optimization thereof as relevant to welding will be discussed; 3) a review of the arc welding process development and optimization will be presented. Since the welding equipment must be upgraded for future production, some discussion of the historical establishment of relevant welding variables and possible changes thereto will also be discussed.

  18. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    Science.gov (United States)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general purpose connections-type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging. The CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between extension IP telephones and a VoIP gateway connected to the outside-line network. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function that can provide various CTI services, such as a web telephone directory, via a web browser to PCs, cellular telephones or smart-phones in mobile environments.

  19. Fully implicit mixed-hybrid finite-element discretization for general purpose subsurface reservoir simulation

    Science.gov (United States)

    Abushaikha, Ahmad S.; Voskov, Denis V.; Tchelepi, Hamdi A.

    2017-10-01

    We present a new fully-implicit, mixed-hybrid, finite-element (MHFE) discretization scheme for general-purpose compositional reservoir simulation. The locally conservative scheme solves the coupled momentum and mass balance equations simultaneously, and the fluid system is modeled using a cubic equation-of-state. We introduce a new conservative flux approach for the mass balance equations for this fully-implicit approach. We discuss the nonlinear solution procedure for the proposed approach, and we present extensive numerical tests to demonstrate the convergence and accuracy of the MHFE method using tetrahedral elements. We also compare the method to other advanced discretization schemes for unstructured meshes and tensor permeability. Finally, we illustrate the applicability and robustness of the method for highly heterogeneous reservoirs with unstructured grids.

  20. Litrani a General Purpose Monte-Carlo Program Simulating Light Propagation In Isotropic or Anisotropic Media

    CERN Document Server

    Gentit, François-Xavier

    2001-01-01

    Litrani is a general purpose Monte-Carlo program simulating light propagation in any type of setup describable by the shapes provided by ROOT. Each shape may be made of a different material. Dielectric constant, absorption length and diffusion length of materials may depend upon wavelength. Dielectric constant and absorption length may be anisotropic. Each face of a volume is either partially or totally in contact with a face of another volume, or covered with some wrapping having defined characteristics of absorption, reflection and diffusion. When in contact with another face of another volume, the possibility exists to have a thin slice of width d and index n between the 2 faces. The program has various sources of light: spontaneous photons, photons coming from an optical fibre, photons generated by the crossing of particles or photons generated by an electromagnetic shower. The time and wavelength spectra of emitted photons may reproduce any scintillation spectrum. As detectors, phototubes, APD, or any ge...

  1. A general purpose subroutine for fast fourier transform on a distributed memory parallel machine

    Science.gov (United States)

    Dubey, A.; Zubair, M.; Grosch, C. E.

    1992-01-01

    One issue which is central in developing a general purpose Fast Fourier Transform (FFT) subroutine on a distributed memory parallel machine is the data distribution. It is possible that different users would like to use the FFT routine with different data distributions. Thus, there is a need to design FFT schemes on distributed memory parallel machines which can support a variety of data distributions. An FFT implementation on a distributed memory parallel machine which works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation on a distributed memory parallel machine Intel iPSC/860 is evaluated.
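
    The reason data distribution matters can be seen from the standard four-step decomposition of a one-dimensional FFT, sketched below with NumPy on a single machine: the length-N transform splits into independent row and column FFTs plus a twiddle multiplication and a transpose, so on a distributed-memory machine each node can hold a block of rows and only the transpose step requires communication. This illustrates the underlying identity only; it is not the iPSC/860 implementation from the record.

```python
import numpy as np

# Four-step FFT: reshape x (length N = n1*n2) into an n1 x n2 matrix, FFT the
# columns, apply twiddle factors, FFT the rows, then transpose-and-flatten.
n1, n2 = 8, 16
N = n1 * n2
x = np.random.rand(N) + 1j * np.random.rand(N)

A = x.reshape(n1, n2)                     # A[j1, j2] = x[j1*n2 + j2]
B = np.fft.fft(A, axis=0)                 # n2 independent FFTs of length n1
k1 = np.arange(n1)[:, None]
j2 = np.arange(n2)[None, :]
B *= np.exp(-2j * np.pi * k1 * j2 / N)    # twiddle factors
C = np.fft.fft(B, axis=1)                 # n1 independent FFTs of length n2
X = C.T.reshape(-1)                       # X[k2*n1 + k1] = C[k1, k2]

assert np.allclose(X, np.fft.fft(x))      # matches the direct length-N FFT
```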

  2. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Directory of Open Access Journals (Sweden)

    Edwin B. Olson

    2010-11-01

    Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.
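
    For reference, the Kanade-Tomasi (minimum-eigenvalue) corner response that the method adapts can be sketched as below on a toy 2-D grid; the paper's multi-scale machinery, LIDAR-specific handling and uncertainty estimates are omitted.

```python
import numpy as np

# Shi-Tomasi / Kanade-Tomasi corner response: build the local structure tensor
# from image gradients and score each cell by the smaller eigenvalue.
def shi_tomasi(img, win=1):
    gy, gx = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy
    resp = np.zeros_like(img, dtype=float)
    for y in range(win, img.shape[0] - win):
        for x in range(win, img.shape[1] - win):
            s = np.s_[y - win:y + win + 1, x - win:x + win + 1]
            a, b, c = Ixx[s].sum(), Iyy[s].sum(), Ixy[s].sum()
            # smaller eigenvalue of the 2x2 structure tensor [[a, c], [c, b]]
            resp[y, x] = 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4 * c ** 2))
    return resp

grid = np.zeros((32, 32))
grid[8:24, 8:24] = 1.0                     # toy occupancy grid with a square
print(np.unravel_index(np.argmax(shi_tomasi(grid)), grid.shape))
```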

  3. A Real-Time Programmer's Tour of General-Purpose L4 Microkernels

    Directory of Open Access Journals (Sweden)

    Ruocco Sergio

    2008-01-01

    L4-embedded is a microkernel successfully deployed in mobile devices with soft real-time requirements. It now faces the challenges of tightly integrated systems, in which user interface, multimedia, OS, wireless protocols, and even software-defined radios must run on a single CPU. In this paper we discuss the pros and cons of L4-embedded for real-time systems design, focusing on the issues caused by the extreme speed optimisations it inherited from its general-purpose ancestors. Since these issues can be addressed with a minimal performance loss, we conclude that, overall, the design of real-time systems based on L4-embedded is possible, and facilitated by a number of design features unique to microkernels and the L4 family.

  4. Generic functional requirements for a NASA general-purpose data base management system

    Science.gov (United States)

    Lohman, G. M.

    1981-01-01

    Generic functional requirements for a general-purpose, multi-mission data base management system (DBMS) for application to remotely sensed scientific data bases are detailed. The motivation for utilizing DBMS technology in this environment is explained. The major requirements include: (1) a DBMS for scientific observational data; (2) a multi-mission capability; (3) user-friendliness; (4) extensive and integrated information about data; (5) robust languages for defining data structures and formats; (6) scientific data types and structures; (7) flexible physical access mechanisms; (8) ways of representing spatial relationships; (9) a high level nonprocedural interactive query and data manipulation language; (10) data base maintenance utilities; (11) high rate input/output and large data volume storage; and (12) adaptability to a distributed data base and/or data base machine configuration. Detailed functions are specified in a top-down hierarchic fashion. Implementation, performance, and support requirements are also given.

  5. General-purpose heat source safety verification test series: SVT-7 through SVT-10

    Science.gov (United States)

    George, T. G.; Pavone, D.

    1985-09-01

    The General-Purpose Heat Source (GPHS) is a modular component of the radioisotope thermoelectric generator that will supply power for the Galileo and Ulysses (formerly ISPM) space missions. The GPHS provides power by transmitting the heat of 238PuO2 alpha-decay to an array of thermoelectric elements. Because the possibility of an orbital abort always exists, the heat source was designed and constructed to minimize plutonia release in any accident environment. The Safety Verification Test (SVT) series was formulated to evaluate the effectiveness of GPHS plutonia containment after atmospheric reentry and Earth impact. The first report (covering SVT-1 through SVT-6) described the results of flat and side-on module impacts. This report describes module impacts at angles of 15° and 30°.

  6. General-purpose heat source safety verification test series: SVT-11 through SVT-13

    Science.gov (United States)

    George, T. G.; Pavone, D.

    1986-05-01

    The General-Purpose Heat Source (GPHS) is a modular component of the radioisotope thermoelectric generator that will provide power for the Galileo and Ulysses (formerly ISPM) space missions. The GPHS provides power by transmitting the heat of Pu alpha-decay to an array of thermoelectric elements. Because the possibility of an orbital abort always exists, the heat source was designed and constructed to minimize plutonia release in any accident environment. The Safety Verification Test (SVT) series was formulated to evaluate the effectiveness of GPHS plutonia containment after atmospheric reentry and Earth impact. The first two reports (covering SVT-1 through SVT-10) described the results of flat, side-on, and angular module impacts against steel targets at 54 m/s. This report describes flat-on module impacts against concrete and granite targets, at velocities equivalent to or higher than previous SVTs.


  8. A Parallel General Purpose Multi-Objective Optimization Framework, with Application to Beam Dynamics

    CERN Document Server

    Ineichen, Y; Kolano, A; Bekas, C; Curioni, A; Arbenz, P

    2013-01-01

    Particle accelerators are invaluable tools for research in the basic and applied sciences, in fields such as materials science, chemistry, the biosciences, particle physics, nuclear physics and medicine. The design, commissioning, and operation of accelerator facilities is a non-trivial task, due to the large number of control parameters and the complex interplay of several conflicting design goals. We propose to tackle this problem by means of multi-objective optimization algorithms which also facilitate a parallel deployment. In order to compute solutions in a meaningful time frame we require a fast and scalable software framework. In this paper, we present the implementation of such a general-purpose framework for simulation based multi-objective optimization methods that allows the automatic investigation of optimal sets of machine parameters. The implementation is based on a master/slave paradigm, employing several masters that govern a set of slaves executing simulations and performing optimization task...

  9. Design of the SLAC RCE Platform: A General Purpose ATCA Based Data Acquisition System

    Energy Technology Data Exchange (ETDEWEB)

    Herbst, R. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Claus, R. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Freytag, M. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Haller, G. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Huffer, M. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Maldonado, S. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Nishimura, K. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; O' Grady, C. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Panetta, J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Perazzo, A. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Reese, B. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Ruckman, L. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Thayer, J. G. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.; Weaver, M. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Research Engineering Div.

    2015-01-23

    The SLAC RCE platform is a general purpose clustered data acquisition system implemented on a custom ATCA compliant blade, called the Cluster On Board (COB). The core of the system is the Reconfigurable Cluster Element (RCE), which is a system-on-chip design based upon the Xilinx Zynq family of FPGAs, mounted on custom COB daughter-boards. The Zynq architecture couples a dual core ARM Cortex A9 based processor with a high performance 28nm FPGA. The RCE has 12 external general purpose bi-directional high speed links, each supporting serial rates of up to 12Gbps. 8 RCE nodes are included on a COB, each with a 10Gbps connection to an on-board 24-port Ethernet switch integrated circuit. The COB is designed to be used with a standard full-mesh ATCA backplane allowing multiple RCE nodes to be tightly interconnected with minimal interconnect latency. Multiple shelves can be clustered using the front panel 10-gbps connections. The COB also supports local and inter-blade timing and trigger distribution. An experiment specific Rear Transition Module adapts the 96 high speed serial links to specific experiments and allows an experiment-specific timing and busy feedback connection. This coupling of processors with a high performance FPGA fabric in a low latency, multiple node cluster allows high speed data processing that can be easily adapted to any physics experiment. RTEMS and Linux are both ported to the module. The RCE has been used or is the baseline for several current and proposed experiments (LCLS, HPS, LSST, ATLAS-CSC, LBNE, DarkSide, ILC-SiD, etc).

  10. Color and motion-based particle filter target tracking in a network of overlapping cameras with multi-threading and GPGPU

    Directory of Open Access Journals (Sweden)

    Jorge Francisco Madrigal Díaz

    2013-03-01

    This paper describes an efficient implementation of multiple-target multiple-view tracking in video-surveillance sequences. It takes advantage of the capabilities of multi-core Central Processing Units (CPUs) and of graphics processing units under the Compute Unified Device Architecture (CUDA) framework. The principle of our algorithm is 1) in each video sequence, to perform tracking of all persons of interest with independent particle filters, and 2) to fuse the tracking results of all sequences. Particle filters belong to the category of recursive Bayesian filters. They update a Monte-Carlo representation of the posterior distribution over the target position and velocity. For this purpose, they combine a probabilistic motion model, i.e. prior knowledge about how targets move (e.g. constant velocity), and a likelihood model associated with the observations on targets. At this first level of single video sequences, the multi-threading library Threading Building Blocks (TBB) has been used to parallelize the processing of the per-target independent particle filters. Afterwards, at the higher level, we rely on General Purpose Programming on Graphical Processing Units (generally termed GPGPU) through CUDA in order to fuse target-tracking data collected on multiple video sequences, by solving the data association problem. Tracking results are presented on various challenging tracking datasets.
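
    The per-target predict/weight/resample loop that the record says is parallelized can be sketched in one dimension as below; the real system uses 2-D colour- and motion-based likelihoods, TBB threads per target and CUDA-based fusion, none of which are reproduced here.

```python
import numpy as np

# Minimal 1-D particle filter: propagate particles with a noisy motion model,
# weight them by the measurement likelihood, resample, and report the mean.
rng = np.random.default_rng(0)
N = 500
particles = rng.normal(0.0, 1.0, size=N)     # initial position hypotheses
weights = np.full(N, 1.0 / N)

def step(measurement, motion_std=0.5, meas_std=1.0):
    global particles, weights
    particles += rng.normal(0.0, motion_std, size=N)                       # motion model
    weights *= np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)  # likelihood
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)                                 # resample
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles.mean()                                                # state estimate

for z in [0.2, 0.5, 1.1, 1.8]:
    print(step(z))
```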

  11. Auxiliary subsystems of a General-Purpose IGBT Stack for high-performance laboratory power converters

    Indian Academy of Sciences (India)

    ANIL KUMAR ADAPA; D VENKATRAMANAN; VINOD JOHN

    2017-08-01

    A PWM converter is the prime component in many power electronic applications such as static UPS, electric motor drives, power quality conditioners and renewable-energy-based power generation systems. While there are a number of computer simulation tools available today for studying power electronic systems, the value added by the experience of building a power converter and controlling it to function as desired is unparalleled. A student, in the process, not only understands power electronic concepts better, but also gains insights into other essential engineering aspects of auxiliary subsystems such as start-up, sensing, protection, circuit layout design, mechanical arrangement and system integration. Higher levels of protection features are critical for converters used in a laboratory environment, as advanced protection schemes can prevent unanticipated failures during the course of research. This paper presents a laboratory-built General-Purpose IGBT Stack (GPIS), which enables students to practically realize different power converter topologies. Essential subsystems for a complete power converter system are presented, covering details of semiconductor device driving, sensing circuits, protection mechanisms, system start-up, relaying and critical PCB layout design, followed by a brief comparison to commercially available IGBT stacks. The results show the high performance that can be obtained with the GPIS converter.

  12. Transforming the ASDEX Upgrade discharge control system to a general-purpose plasma control platform

    Energy Technology Data Exchange (ETDEWEB)

    Treutterer, Wolfgang, E-mail: Wolfgang.Treutterer@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Gräter, Alexander [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Lüddecke, Klaus [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Neu, Gregor; Rapson, Christopher; Raupp, Gerhard; Zasche, Dieter; Zehetbauer, Thomas [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany)

    2015-10-15

    Highlights: • Control framework split into core and custom parts. • Core framework deployable in other fusion device environments. • Adaptable through customizable modules, plug-in support and generic interfaces. - Abstract: The ASDEX Upgrade Discharge Control System DCS is a modern and mature product, originally designed to regulate and supervise ASDEX Upgrade Tokamak plasma operation. At its core, DCS is based on a generic, versatile real-time software framework with a plug-in architecture that allows control function modules to be easily combined, modified and extended in order to tailor the system to required features and let it continuously evolve with the progress of an experimental fusion device. Due to these properties, other fusion experiments such as the WEST project have expressed interest in adopting DCS. For this purpose, essential parts of DCS must be unpinned from the ASDEX Upgrade environment by exposure or introduction of generalised interfaces. Re-organisation of DCS modules allows distinguishing between intrinsic framework core functions and device-specific applications. In particular, DCS must be prepared for deployment in different system environments with their own realisations of user interface, pulse schedule preparation, parameter server, time and event distribution, diagnostic and actuator systems, network communication and data archiving. The article explains the principles of the revised DCS structure, derives the necessary interface definitions and describes the major steps to achieve the separation between the general-purpose framework and fusion-device-specific components.

  13. deconSTRUCT: general purpose protein database search on the substructure level.

    Science.gov (United States)

    Zhang, Zong Hong; Bharatham, Kavitha; Sherman, Westley A; Mihalek, Ivana

    2010-07-01

    The deconSTRUCT webserver offers an interface to a protein database search engine, usable for general-purpose detection of similar protein (sub)structures. Initially, it deconstructs the query structure into its secondary structure elements (SSEs) and reassembles the match to the target by requiring a (tunable) degree of similarity in the direction and sequential order of SSEs. Hierarchical organization and judicious use of the information about protein structure enable deconSTRUCT to achieve the sensitivity and specificity of the established search engines at orders-of-magnitude increased speed, without irretrievably tying up the substructure information in the form of a hash. In a post-processing step, a match on the level of the backbone atoms is constructed. The results presented to the user consist of the list of the matched SSEs, the transformation matrix for rigid superposition of the structures and several ways of visualization, both downloadable and implemented as a web-browser plug-in. The server is available at http://epsf.bmad.bii.a-star.edu.sg/struct_server.html.

  14. Design-for-Testability Features and Test Implementation of a Giga Hertz General Purpose Microprocessor

    Institute of Scientific and Technical Information of China (English)

    Da Wang; Yu Hu; Hua-Wei Li; Xiao-Wei Li

    2008-01-01

    This paper describes the design-for-testability (DFT) features and low-cost testing solutions of a general purpose microprocessor. The optimized DFT features are presented in detail. A hybrid scan compression structure was implemented and achieved a compression ratio of more than ten times. Memory built-in self-test (BIST) circuitries were designed with scan collars instead of bitmaps to reduce area overheads and to improve test and debug efficiency. The implemented DFT framework also utilized internal phase-locked loops (PLL) to provide complex at-speed test clock sequences. Since there are still limitations in this DFT design, the test strategies for this case are quite complex, with a complicated automatic test pattern generation (ATPG) and debugging flow. Sample testing results are given in the paper. All the DFT methods discussed in the paper are prototypes for a high-volume manufacturing (HVM) DFT plan that aims to meet high-quality test goals as well as low test power consumption and cost.

  15. Foam Multi-Dimensional General Purpose Monte Carlo Generator With Self-Adapting Symplectic Grid

    CERN Document Server

    Jadach, Stanislaw

    2000-01-01

    A new general purpose Monte Carlo event generator with a self-adapting grid consisting of simplices is described. In the process of initialization, the simplex-shaped cells divide into daughter subcells in such a way that: (a) cell density is biggest in areas where the integrand is peaked, (b) cells elongate themselves along hyperspaces where the integrand is enhanced/singular. The grid is anisotropic, i.e. memory of the axis directions of the primary reference frame is lost. In particular, the algorithm is capable of dealing with distributions featuring strong correlation among variables (like a ridge along the diagonal). The presented algorithm is complementary to others known and commonly used in Monte Carlo event generators. It is, in principle, more effective than any other one for distributions with very complicated patterns of singularities - the price to pay is that it is memory-hungry. It is therefore aimed at a small number of integration dimensions (<10). It should be combined with other methods for higher ...
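
    The cell-splitting idea described above can be caricatured in one dimension: repeatedly split the cell that carries the largest estimated contribution to the integral, so that cells accumulate where the integrand is peaked. The C++ sketch below is only a hedged illustration of that principle; the real algorithm works with simplices in many dimensions and also controls the cells' shapes.

    #include <algorithm>
    #include <vector>

    struct Cell { double a, b, estimate; };

    // Crude mid-point estimate of the integral of f over [a, b].
    double cellEstimate(double (*f)(double), double a, double b, int n = 64) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i) sum += f(a + (b - a) * (i + 0.5) / n);
        return sum * (b - a) / n;
    }

    // Grow the grid by always splitting the cell with the largest estimated
    // contribution, so cells concentrate where the integrand is peaked.
    std::vector<Cell> buildGrid(double (*f)(double), double a, double b,
                                int nCells) {
        std::vector<Cell> cells{{a, b, cellEstimate(f, a, b)}};
        while (static_cast<int>(cells.size()) < nCells) {
            auto it = std::max_element(cells.begin(), cells.end(),
                [](const Cell& x, const Cell& y) { return x.estimate < y.estimate; });
            double mid = 0.5 * (it->a + it->b);
            Cell right{mid, it->b, cellEstimate(f, mid, it->b)};
            *it = Cell{it->a, mid, cellEstimate(f, it->a, mid)};
            cells.push_back(right);
        }
        return cells;   // sample cells in proportion to their estimates
    }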

  16. General-Purpose Heat Source development: Safety Verification Test Program. Bullet/fragment test series

    Energy Technology Data Exchange (ETDEWEB)

    George, T.G.; Tate, R.E.; Axler, K.M.

    1985-05-01

    The radioisotope thermoelectric generator (RTG) that will provide power for space missions contains 18 General-Purpose Heat Source (GPHS) modules. Each module contains four ²³⁸PuO₂-fueled clads and generates 250 W(t). Because a launch-pad or post-launch explosion is always possible, we need to determine the ability of GPHS fueled clads within a module to survive fragment impact. The bullet/fragment test series, part of the Safety Verification Test Plan, was designed to provide information on clad response to impact by a compact, high-energy, aluminum-alloy fragment and to establish a threshold value of fragment energy required to breach the iridium cladding. Test results show that a velocity of 555 m/s (1820 ft/s) with an 18-g bullet is at or near the threshold value of fragment velocity that will cause a clad breach. Results also show that an exothermic Ir/Al reaction occurs if aluminum and hot iridium are in contact, a contact that is possible and most damaging to the clad within a narrow velocity range. The observed reactions between the iridium and the aluminum were studied in the laboratory and are reported in the Appendix.

  17. An FFT Performance Model for Optimizing General-Purpose Processor Architecture

    Institute of Scientific and Technical Information of China (English)

    Ling Li; Yun-Ji Chen; Dao-Fu Liu; Cheng Qian; Wei-Wu Hu

    2011-01-01

    General-purpose processor (GPP) is an important platform for fast Fourier transform (FFT), due to its flexibility, reliability and practicality. FFT is a representative application intensive in both computation and memory access; optimizing the FFT performance of a GPP therefore also benefits the performance of many other applications. To facilitate the analysis of FFT, this paper proposes a theoretical model of FFT processing. The model gives a tight lower bound on the runtime of FFT on a GPP, and guides the architecture optimization for GPP as well. Based on the model, two theorems on optimization of architecture parameters are deduced, which refer to the lower bounds of register number and memory bandwidth. Experimental results on different processor architectures (including Intel Core i7 and Godson-3B) validate the performance model. The above investigations were adopted in the development of Godson-3B, which is an industrial GPP. The optimization techniques deduced from our performance model improve the FFT performance by about 40%, while incurring only 0.8% additional area cost. Consequently, Godson-3B solves the 1024-point single-precision complex FFT in 0.368 μs with about 40 W power consumption, and has the highest performance-per-watt in complex FFT among processors as far as we know. This work could benefit the optimization of other GPPs as well.

  18. Practical Implementation of Prestack Kirchhoff Time Migration on a General Purpose Graphics Processing Unit

    Directory of Open Access Journals (Sweden)

    Liu Guofeng

    2016-08-01

    Full Text Available In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general-purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code, i.e., designing a configuration based on a reasonable execution, using the texture memory for velocity interpolation, and applying an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is a common reflection point gather stored as I[nx][ny][nh][nt] in matrix format. However, this method requires more memory space, so the limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This process can achieve the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations, without changing the imaging result.

  19. Practical Implementation of Prestack Kirchhoff Time Migration on a General Purpose Graphics Processing Unit

    Science.gov (United States)

    Liu, Guofeng; Li, Chun

    2016-08-01

    In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general-purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code, i.e., designing a configuration based on a reasonable execution, using the texture memory for velocity interpolation, and applying an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is a common reflection point gather stored as I[nx][ny][nh][nt] in matrix format. However, this method requires more memory space, so the limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This process can achieve the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations, without changing the imaging result.

  20. Computing OpenSURF on OpenCL and General Purpose GPU

    Directory of Open Access Journals (Sweden)

    Wanglong Yan

    2013-10-01

    Full Text Available The Speeded-Up Robust Feature (SURF) algorithm is widely used for image feature detection and matching in the computer vision area. Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors. This paper introduces how to implement an open-source SURF program, namely OpenSURF, on a general-purpose GPU with OpenCL, and discusses the optimizations in terms of the thread architectures and memory models in detail. Our final OpenCL implementation of OpenSURF is on average 37% and 64% faster than the OpenCV SURF v2.4.5 CUDA implementation on NVidia's GTX660 and GTX460SE GPUs, respectively. Our OpenCL program achieved real-time performance (>25 frames per second) for almost all the input images with different sizes from 320*240 to 1024*768 on NVidia's GTX660 GPU, NVidia's GTX460SE GPU and AMD's Radeon HD 6850 GPU. Our OpenCL approach on NVidia's GTX660 GPU is more than 22.8 times faster than its original CPU version on Intel's Dual-Core E5400 2.7G on average.

  1. Corrosion science general-purpose data model and interface (Ⅲ):Data integration and management environment

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A brand new Corrosion Data Integration and Management Environment (CDIME) is developed in the Java programming language based on the general-purpose corrosion data model (GPCDM) and corrosion data markup language (CDML) proposed in the previous works. In general, the functionalities and features of CDIME meet most of the design requirements, including composition, inheritance, self-containment and relative independence. An in-depth tutorial is given on the life cycle of corrosion data islands, from their creation and maintenance to applications such as publishing. The template feature makes the building of a comprehensive data island as simple as a few mouse clicks. Read-only publishing of data as e-Books and PDFs has its own place. Archived documents can be imported and exported freely on any running CDIME. The archiving feature is addressed in detail because it is critical to data sharing and integration in GPCDM. At the end, a real example is presented to help the understanding of data island assembly and the advanced features offered by CDIME.

  2. Corrosion science general-purpose data model and interface (Ⅲ): Data integration and management environment

    Institute of Scientific and Technical Information of China (English)

    TANG ZiLong

    2008-01-01

    A brand new Corrosion Data Integration and Management Environment (CDIME) is developed in the Java programming language based on the general-purpose corrosion data model (GPCDM) and corrosion data markup language (CDML) proposed in the previous works. In general, the functionalities and features of CDIME meet most of the design requirements, including composition, inheritance, self-containment and relative independence. An in-depth tutorial is given on the life cycle of corrosion data islands, from their creation and maintenance to applications such as publishing. The template feature makes the building of a comprehensive data island as simple as a few mouse clicks. Read-only publishing of data as e-Books and PDFs has its own place. Archived documents can be imported and exported freely on any running CDIME. The archiving feature is addressed in detail because it is critical to data sharing and integration in GPCDM. At the end, a real example is presented to help the understanding of data island assembly and the advanced features offered by CDIME.

  3. Optimization of a general-purpose, actively scanned proton beamline for ocular treatments: Geant4 simulations.

    Science.gov (United States)

    Piersimoni, Pierluigi; Rimoldi, Adele; Riccardi, Cristina; Pirola, Michele; Molinelli, Silvia; Ciocca, Mario

    2015-03-08

    The Italian National Center for Hadrontherapy (CNAO, Centro Nazionale di Adroterapia Oncologica), a synchrotron-based hospital facility, started the treatment of patients within selected clinical trials in late 2011 and 2012 with actively scanned proton and carbon ion beams, respectively. The activation of a new clinical protocol for the irradiation of uveal melanoma using the existing general-purpose proton beamline is foreseen for late 2014. Beam characteristics and patient treatment setup need to be tuned to meet the specific requirements of such a treatment technique. The aim of this study is to optimize the CNAO transport beamline by adding passive components and minimizing the air gap to achieve the optimal conditions for ocular tumor irradiation. The CNAO setup with the active and passive components along the transport beamline, as well as a human eye-modeled detector also including a realistic target volume, was simulated using the Monte Carlo Geant4 toolkit. The strong reduction of the air gap between the nozzle and patient skin, as well as the insertion of a range shifter plus a patient-specific brass collimator at a short distance from the eye, were found to be effective tools to be implemented. In perspective, this simulation toolkit could also be used as a benchmark for future developments and testing purposes on commercial treatment planning systems.

  4. Computing OpenSURF on OpenCL and General Purpose GPU

    Directory of Open Access Journals (Sweden)

    Wanglong Yan

    2013-10-01

    Full Text Available The Speeded-Up Robust Feature (SURF) algorithm is widely used for image feature detection and matching in the computer vision area. Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors. This paper introduces how to implement an open-source SURF program, namely OpenSURF, on a general-purpose GPU with OpenCL, and discusses the optimizations in terms of the thread architectures and memory models in detail. Our final OpenCL implementation of OpenSURF is on average 37% and 64% faster than the OpenCV SURF v2.4.5 CUDA implementation on NVidia's GTX660 and GTX460SE GPUs, respectively. Our OpenCL program achieved real-time performance (>25 frames per second) for almost all the input images with different sizes from 320*240 to 1024*768 on NVidia's GTX660 GPU, NVidia's GTX460SE GPU and AMD's Radeon HD 6850 GPU. Our OpenCL approach on NVidia's GTX660 GPU is more than 22.8 times faster than its original CPU version on Intel's Dual-Core E5400 2.7G on average.

  5. Strong scaling of general-purpose molecular dynamics simulations on GPUs

    CERN Document Server

    Glaser, Jens; Anderson, Joshua A; Lui, Pak; Spiga, Filippo; Millan, Jaime A; Morse, David C; Glotzer, Sharon C

    2014-01-01

    We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, arXiv:1308.5587). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, J. Comp. Phys. 117, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., J. Comp. Phys. 227, 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3,375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5x.

  6. CASPER: Embedding Power Estimation and Hardware-Controlled Power Management in a Cycle-Accurate Micro-Architecture Simulation Platform for Many-Core Multi-Threading Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Arun Ravindran

    2012-02-01

    Full Text Available Despite the promising performance improvements observed in emerging many-core architectures for high-performance processors, high power consumption prohibitively affects their use and marketability in low-energy sectors, such as embedded processors, network processors and application-specific instruction processors (ASIPs). While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large-scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust, instruction-trace-driven, cycle-accurate many-core multi-threading micro-architecture simulation platform in which we have incorporated power estimation models for a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally, CASPER is designed to accommodate cycle-accurate models of hardware-controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER: Chipwide Dynamic Voltage and Frequency Scaling, and Performance-Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2%, respectively, on a baseline 4-core SPARC-based architecture. This power saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9

  7. A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications

    Science.gov (United States)

    Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.

  8. The PennBMBI: Design of a General Purpose Wireless Brain-Machine-Brain Interface System.

    Science.gov (United States)

    Liu, Xilin; Zhang, Milin; Subei, Basheer; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan

    2015-04-01

    In this paper, a general purpose wireless Brain-Machine-Brain Interface (BMBI) system is presented. The system integrates four battery-powered wireless devices for the implementation of a closed-loop sensorimotor neural interface, including a neural signal analyzer, a neural stimulator, a body-area sensor node and a graphic user interface implemented on the PC end. The neural signal analyzer features a four channel analog front-end with configurable bandpass filter, gain stage, digitization resolution, and sampling rate. The target frequency band is configurable from EEG to single unit activity. A noise floor of 4.69 μVrms is achieved over a bandwidth from 0.05 Hz to 6 kHz. Digital filtering, neural feature extraction, spike detection, sensing-stimulating modulation, and compressed sensing measurement are realized in a central processing unit integrated in the analyzer. A flash memory card is also integrated in the analyzer. A 2-channel neural stimulator with a compliance voltage up to ± 12 V is included. The stimulator is capable of delivering unipolar or bipolar, charge-balanced current pulses with programmable pulse shape, amplitude, width, pulse train frequency and latency. A multi-functional sensor node, including an accelerometer, a temperature sensor, a flexiforce sensor and a general sensor extension port has been designed. A computer interface is designed to monitor, control and configure all aforementioned devices via a wireless link, according to a custom designed communication protocol. Wireless closed-loop operation between the sensory devices, neural stimulator, and neural signal analyzer can be configured. The proposed system was designed to link two sites in the brain, bridging the brain and external hardware, as well as creating new sensory and motor pathways for clinical practice. Bench test and in vivo experiments are performed to verify the functions and performances of the system.

  9. A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data

    Science.gov (United States)

    Li, Z.; Hodgson, M.; Li, W.

    2016-12-01

    Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high-performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references on developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
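
    A minimal C++ sketch of the tile-based spatial indexing idea mentioned above is given below. It assumes a fixed tile edge length and points lying at or beyond a chosen origin; the key layout and type names are illustrative, not taken from the paper.

    #include <cstdint>
    #include <cmath>
    #include <map>
    #include <vector>

    struct LidarPoint { double x, y, z; };

    // Combine the integer tile row/column of a point into a single 64-bit key.
    // Assumes points lie at or beyond (originX, originY).
    std::uint64_t tileKey(double x, double y, double originX, double originY,
                          double tileSize) {
        auto col = static_cast<std::uint32_t>(std::floor((x - originX) / tileSize));
        auto row = static_cast<std::uint32_t>(std::floor((y - originY) / tileSize));
        return (static_cast<std::uint64_t>(row) << 32) | col;
    }

    // Group points by tile; each bucket could then become one record (or file)
    // in a distributed file system and be processed independently in parallel.
    std::map<std::uint64_t, std::vector<LidarPoint>>
    buildTileIndex(const std::vector<LidarPoint>& points,
                   double originX, double originY, double tileSize) {
        std::map<std::uint64_t, std::vector<LidarPoint>> tiles;
        for (const auto& p : points)
            tiles[tileKey(p.x, p.y, originX, originY, tileSize)].push_back(p);
        return tiles;
    }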

  10. General-Purpose Heat Source Development: Safety Test Program. Postimpact evaluation, Design Iteration Test 3

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, F.W.; George, T.G.

    1984-07-01

    The General-Purpose Heat Source (GPHS) provides power for space missions by transmitting the heat of ²³⁸PuO₂ decay to thermoelectric elements. Because of the inevitable return of certain aborted missions, the heat source must be designed and constructed to survive both re-entry and Earth impact. The Design Iteration Test (DIT) series is part of an ongoing test program. In the third test (DIT-3), a full GPHS module was impacted at 58 m/s and 930 °C. The module impacted the target at an angle of 30° to the pole of the large faces. The four capsules used in DIT-3 survived impact with minimal deformation; no internal cracks other than in the regions indicated by Savannah River Plant (SRP) preimpact nondestructive testing were observed in any of the capsules. The 30° impact orientation used in DIT-3 was considerably less severe than the flat-on impact utilized in DIT-1 and DIT-2. The four capsules used in DIT-1 survived, while two of the capsules used in DIT-2 breached; a small quantity (≈50 μg) of ²³⁸PuO₂ was released from the capsules breached in the DIT-2 impact. All of the capsules used in DIT-1 and DIT-2 were severely deformed and contained large internal cracks. Postimpact analyses of the DIT-3 test components are described, with emphasis on weld structure and the behavior of defects identified by SRP nondestructive testing.

  11. Coated Particles Fuel Compact-General Purpose Heat Source for Advanced Radioisotope Power Systems

    Science.gov (United States)

    El-Genk, Mohamed S.; Tournier, Jean-Michel

    2003-01-01

    Coated Particles Fuel Compacts (CPFC) have recently been shown to offer performance advantages for use in Radioisotope Heater Units (RHUs) and design flexibility for integrating at high thermal efficiency with Stirling Engine converters, currently being considered for 100-We Advanced Radioisotope Power Systems (ARPS). The particles in the compact consist of ²³⁸PuO₂ fuel kernels with a 5-μm thick PyC inner coating and a strong ZrC outer coating, whose thickness depends on the maximum fuel temperature during reentry, the fuel kernel diameter, and the fraction of helium gas released from the kernels and fully contained by the ZrC coating. In addition to containing the helium generated by radioactive decay of ²³⁸Pu for up to 10 years before launch and 10-15 years of mission lifetime, the kernels are intentionally sized (≥300 μm in diameter) to prevent any adverse radiological effects on reentry. This paper investigates the advantage of replacing the four iridium-clad ²³⁸PuO₂ fuel pellets, the two floating graphite membranes, and the two graphite impact shells in the current State-Of-The-Art (SOA) General Purpose Heat Source (GPHS) with CPFC. The total mass, thermal power, and specific power of the CPFC-GPHS are calculated as functions of the helium release fraction from the fuel kernels and the maximum fuel temperature during reentry from 1500 K to 2400 K. For the same total mass and volume as the SOA GPHS, the thermal power generated by single-size particle CPFC-GPHS is 260 W at Beginning-Of-Mission (BOM), versus 231 W for the GPHS. For an additional 10% increase in total mass, the CPFC-GPHS could generate 340 W at BOM; 48% higher than the SOA GPHS. The corresponding specific thermal power is 214 W/kg, versus 160 W/kg for the SOA GPHS; a 34% increase. Therefore, for the same thermal power, the CPFC-GPHS is lighter than the SOA GPHS, while it uses the same amount of ²³⁸PuO₂ fuel and the same aeroshell. For the same helium release fraction and fuel temperature, binary-size particle CPFC-GPHS could

  12. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
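
    A hedged C++ sketch of the load-prediction dynamic scheduling idea is shown below: each batch of work is split between a CPU worker and a GPU worker in proportion to the throughput each achieved on the previous batch. The worker functions are placeholders standing in for the real simulation kernels; the proportional-split rule is an assumption made for illustration, not necessarily the paper's exact algorithm.

    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <thread>

    // Placeholder work functions standing in for the real CPU and GPU kernels.
    void run_on_cpu(std::size_t steps) { /* ... multi-core CPU part ... */ (void)steps; }
    void run_on_gpu(std::size_t steps) { /* ... GPU part ... */ (void)steps; }

    // Time a worker and return its throughput in steps per second.
    template <typename F>
    double throughput(F&& work, std::size_t steps) {
        auto t0 = std::chrono::steady_clock::now();
        work(steps);
        std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
        return steps / (dt.count() + 1e-9);
    }

    // Load-prediction scheduling (illustrative): split each batch between the
    // CPU and the GPU in proportion to the throughput measured on the last batch.
    void simulate(std::size_t totalSteps, std::size_t batch) {
        double cpuRate = 1.0, gpuRate = 1.0;           // initial guesses
        for (std::size_t done = 0; done < totalSteps; done += batch) {
            std::size_t n = std::min(batch, totalSteps - done);
            std::size_t cpuShare =
                static_cast<std::size_t>(n * cpuRate / (cpuRate + gpuRate));
            std::size_t gpuShare = n - cpuShare;
            double newCpu = 0.0, newGpu = 0.0;
            std::thread cpuWorker([&] { newCpu = throughput(run_on_cpu, cpuShare); });
            newGpu = throughput(run_on_gpu, gpuShare); // GPU driven from this thread
            cpuWorker.join();
            if (newCpu > 0.0) cpuRate = newCpu;        // refine the prediction
            if (newGpu > 0.0) gpuRate = newGpu;
        }
    }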

  13. Software mechanics for Java multi-threading

    NARCIS (Netherlands)

    Bergstra, J.A.; Loots, M.E.

    For a subset JavaTck (Java Thread Composition Kernel) of Java, an empirical semantics has been developed. Special emphasis is put on the role of synchronization features. The validity of the empirical semantics is discussed in the light of a number of compiler postulates. A translation of process algebra with conditions and free merge to Java is used as an example.

  14. Toward a Multi-threaded Glish

    Science.gov (United States)

    Schiebel, D. R.

    Glish is the scripting language at the core of AIPS++. Glish's asynchronous event handlers are, by nature, independent threads of execution, and AIPS++ developers have used these handlers to create many such threads without the concern typically required for thread creation and deletion. Starting AIPS++ creates over 3000 such threads. Having so many threads created in response to user scripts poses many challenges for the underlying scripting language implementation. The complexity involved warrants synchronization and mutual-exclusion constructs which are at a higher level than those provided by pthreads (Kleiman et al. 1996) and similar libraries. Since Glish is implemented in C++, these constructs should be centered around objects rather than functions and critical regions. This paper discusses these issues, the initial work toward introducing multiple threads of execution into Glish, and possible tools for achieving object-oriented concurrency. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
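
    As an illustration of synchronization constructs centered around objects rather than explicit critical regions, the following small C++ sketch wraps a value in a monitor-style object so that callers never touch a mutex directly. It is a generic illustration, not code from Glish or AIPS++.

    #include <mutex>
    #include <utility>

    // A monitor-style wrapper: every access to the guarded value goes through
    // with(), so callers never handle a mutex or critical region themselves.
    template <typename T>
    class Monitor {
    public:
        explicit Monitor(T value = T{}) : value_(std::move(value)) {}

        template <typename F>
        auto with(F&& f) {
            std::lock_guard<std::mutex> lock(mutex_);
            return std::forward<F>(f)(value_);
        }

    private:
        T value_;
        std::mutex mutex_;
    };

    // Usage: a shared counter touched by many event-handler threads.
    //   Monitor<long> events;
    //   events.with([](long& n) { ++n; });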

  15. Software mechanics for Java multi-threading

    OpenAIRE

    Bergstra, J. A.; Loots, M.E.

    1999-01-01

    For a subset JavaTck (Java Thread Composition Kernel) of Java an empirical semantics has been developed. Special emphasis is put on the role of synchronization features. The validity of empirical semantics is discussed in the light of a number of compiler postulates. A translation of process algebra with conditions and free merge to Java is used as an example.

  16. Apple-CORE: Microgrids of SVP cores: flexible, general-purpose, fine-grained hardware concurrency management

    NARCIS (Netherlands)

    Poss, R.; Lankamp, M.; Yang, Q.; Fu, J.; van Tol, M.W.; Jesshope, C.; Nair, S.

    2012-01-01

    To harness the potential of CMPs for scalable, energy-efficient performance in general-purpose computers, the Apple-CORE project has co-designed a general machine model and concurrency control interface with dedicated hardware support for concurrency control across multiple cores. Its SVP interface

  17. A Multi-Thread Data Acquisition Module for Vibration Signal of Aero-Engine

    Institute of Scientific and Technical Information of China (English)

    金路; 廖明夫; 黄巍

    2013-01-01

    A multi-thread data acquisition module based on an NI (National Instruments) DAQ card is designed for the vibration signal measurement of aero-engines. Data collection and extraction are placed in different threads so that they remain synchronized. The module is built as a dynamic link library (DLL), so test systems programmed in different computer languages can use it. An aero-engine has two kinds of working states, steady state and transient state, and the vibration signal acquisition in these two states has different characteristics, so two sub-modules are designed for the two states. Because aero-engines widely use dual rotors, a dual-speed acquisition module and the corresponding data processing method are also designed. The module was examined by acquiring vibration signals on site. The results show that the data loss problem in transient processes is solved and that the rotor speed and vibration signals in both steady and transient processes are collected accurately and in real time. The module has been successfully applied in field measurements.
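
    A minimal C++ sketch of the acquisition/extraction thread split described above is given below: a producer thread stands in for reading sample blocks from the DAQ card and a consumer thread extracts them through a mutex-protected queue. All names and the block format are assumptions made for illustration; this is not the original module.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    using Block = std::vector<double>;

    std::queue<Block> blocks;        // hand-off queue between the two threads
    std::mutex m;
    std::condition_variable cv;
    bool acquisitionDone = false;

    // Producer: stands in for reading sample blocks from the DAQ card.
    void acquire(int nBlocks) {
        for (int i = 0; i < nBlocks; ++i) {
            Block b(1024, static_cast<double>(i));   // placeholder samples
            { std::lock_guard<std::mutex> lk(m); blocks.push(std::move(b)); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); acquisitionDone = true; }
        cv.notify_one();
    }

    // Consumer: extracts blocks for filtering, spectra, storage, etc.
    void extract() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !blocks.empty() || acquisitionDone; });
            if (blocks.empty() && acquisitionDone) break;
            Block b = std::move(blocks.front());
            blocks.pop();
            lk.unlock();
            // ... process block b (filtering, order tracking, writing) ...
        }
    }

    int main() {
        std::thread producer(acquire, 100);
        std::thread consumer(extract);
        producer.join();
        consumer.join();
    }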

  18. Analog models of computations \\& Effective Church Turing Thesis: Efficient simulation of Turing machines by the General Purpose Analog Computer

    CERN Document Server

    Pouly, Amaury; Graça, Daniel S

    2012-01-01

    Are analog models of computation more powerful than classical models of computation? From a series of recent papers, it is now clear that many realistic analog models of computation are provably equivalent to classical digital models of computation from a computability point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model from Claude Shannon, a model of Differential Analyzers, which are analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that the functions computable by Turing machines are provably exactly those that are computable by the GPAC. This paper is about the next step: understanding whether this equivalence also holds at the complexity level. In this paper we show that the realistic models of analog computation -- namely the General Purpose Analog Computer (GPAC) -- can simulate Turing machines in a computationally efficient manner. More concretely we show that, modulo...

  19. Quality of healthcare websites: A comparison of a general-purpose vs. domain-specific search engine.

    Science.gov (United States)

    Abraham, Joanna; Reddy, Madhu

    2007-10-11

    In a pilot study, we had five typical Internet users evaluate the quality of health websites returned by a general-purpose search engine (Google) and a healthcare-specific search engine (Healthfinder). The evaluators used quality criteria developed by the Mitretek/Health Information Technology Institute. Although both search engines provided high-quality health websites, we found some important differences between the two types of search engines.

  20. Platform-Independent Courseware Sharing

    Directory of Open Access Journals (Sweden)

    Takao Shimomura

    2013-04-01

    Full Text Available Courseware distribution between different platforms is the major issue of current e-Learning. SCORM (Sharable Content Object Reference Model is one of the solutions for courseware sharing. However, to make SCORM-conformable courseware, some knowledge about HTML and JavaScript is required. This paper presents a SWF (Sharable Web Fragment-based e-Learning system, where courseware is created with sharable Web fragments such as Web pages, images and other resources, and the courseware can be distributed to another platform by export and import facilities. It also demonstrates how to export a subject that contains assignments and problems and how to import the whole subject, only the assignments, or only the problems. The exported meta-information is architecture-independent and provides a model of courseware distribution.

  1. General purpose parallel programing using new generation graphic processors: CPU vs GPU comparative analysis and opportunities research

    Directory of Open Access Journals (Sweden)

    Donatas Krušna

    2013-03-01

    Full Text Available OpenCL, a modern parallel heterogeneous-system programming language, enables problems to be partitioned and executed on modern CPU and GPU hardware, which increases the performance of such applications considerably. Since GPUs are optimized for, and specialize in, floating-point and vector operations, they greatly outperform general-purpose CPUs in this field. The language greatly simplifies the creation of applications for such heterogeneous systems, since it is cross-platform, vendor-independent and embeddable, and hence can be used from any other general-purpose programming language via libraries. More and more tools are being developed that are aimed at low-level programmers as well as scientists and engineers who develop applications or libraries for today's CPUs, GPUs and other heterogeneous platforms. The tendency today is to increase the number of cores or CPUs in the hope of increasing performance; however, the increasing difficulty of parallelizing applications for such systems and the ever-increasing overhead of communication and synchronization limit the potential performance. This means that there is a point at which adding cores or CPUs no longer increases application performance and can even diminish it. Even though parallel programming and GPUs with stream-computing capabilities have decreased the need for communication and synchronization (since only the final result needs to be committed to memory), this is still a weak link in developing such applications.
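
    As a small, hedged illustration of targeting such heterogeneous systems, the host-side C++ program below enumerates the available OpenCL platforms and devices with the standard OpenCL C API; this is typically the first step before partitioning work across CPUs and GPUs. It must be linked against an OpenCL runtime.

    #include <CL/cl.h>
    #include <iostream>
    #include <vector>

    int main() {
        // Discover all OpenCL platforms installed on this machine.
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, nullptr, &numPlatforms);
        std::vector<cl_platform_id> platforms(numPlatforms);
        clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

        for (cl_platform_id p : platforms) {
            char pname[256] = {0};
            clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);
            std::cout << "Platform: " << pname << '\n';

            // List every device (CPU, GPU, accelerator) the platform exposes.
            cl_uint numDevices = 0;
            clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices);
            std::vector<cl_device_id> devices(numDevices);
            clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, numDevices, devices.data(), nullptr);

            for (cl_device_id d : devices) {
                char dname[256] = {0};
                cl_uint units = 0;
                clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
                clGetDeviceInfo(d, CL_DEVICE_MAX_COMPUTE_UNITS,
                                sizeof(units), &units, nullptr);
                std::cout << "  Device: " << dname
                          << " (" << units << " compute units)\n";
            }
        }
    }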

  2. Multi-Threaded Algorithms for General purpose Graphics Processor Units in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz level 1 acceptance rate to 1 kHz for recording, requiring an average per-event processing time of ~250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant ...

  3. Duplication of complete dentures using general-purpose handheld optical scanner and 3-dimensional printer: Introduction and clinical considerations.

    Science.gov (United States)

    Kurahashi, Kosuke; Matsuda, Takashi; Goto, Takaharu; Ishida, Yuichi; Ito, Teruaki; Ichikawa, Tetsuo

    2017-01-01

    To introduce a new clinical procedure for fabricating duplicates of complete dentures by bite pressure impression using digital technology, and to discuss its clinical significance. The denture is placed on a rotary table and the 3-dimensional form of the denture is digitized using a general-purpose handheld optical scanner. The duplicate denture is made of polylactic acid by a 3-dimensional printer using the 3-dimensional data. This procedure has the advantages of wasting less material, employing less human power, decreasing treatment time at the chair side, lowering the rates of contamination, and being readily fabricated at the time of the treatment visit. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  4. Corrosion science general-purpose data model and interface (I): Meanings and issues of design and implementation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A brand new design of integrated corrosion information system is introduced to meet the constantly increasing demands of material corrosion information. Two concepts, "general-purpose corrosion data model" and "public corrosion data exchanging interface", are suggested to integrate a wide variety of corrosion data sources based on detailed analysis on characteristics of each source in order to promote the information sharing and data mining. The architecture of integrated corrosion information environment is blueprinted. The insight analysis is focused on 1) architecture of the system; 2) data flow and information sharing; 3) roles of system players and their interactions; 4) approaches to data integration. Several key issues are addressed in detail including coverage of data model, data source integration and mitigation, and data granularity from system performance and model acceptance points of view. At the end, the design and implementation approach of general corrosion data model is presented based on cutting edge IT techniques.

  5. Feasibility study for the measurement of Bc meson mass and lifetime with the general purpose detector at the LHC

    Institute of Scientific and Technical Information of China (English)

    MENG Xiang-Wei

    2008-01-01

    In this paper, a feasibility study of measuring the Bc meson mass and lifetime with the general-purpose detector at the LHC is described. The study concentrated solely on the J/Ψπ+ (J/Ψ→μ+μ−) decay channel of the Bc, and it was concluded that about 120 events can be selected in the first fb⁻¹ of data. With this data sample, the mass resolution was estimated to be 2.0 (stat.) MeV/c², while the cτ resolution was found to be 13.1 (stat.) μm, i.e. a lifetime resolution of 0.044 (stat.) ps.

  6. Evaluation of Aqueous and Powder Processing Techniques for Production of Pu-238-Fueled General Purpose Heat Sources

    Energy Technology Data Exchange (ETDEWEB)

    2008-06-01

    This report evaluates alternative processes that could be used to produce Pu-238-fueled General Purpose Heat Sources (GPHS) for radioisotope thermoelectric generators (RTG). The current GPHS fabrication process has remained essentially unchanged since its development in the 1970s. Meanwhile, 30 years of technological advancements have been made in the fields of chemistry, manufacturing, ceramics, and control systems. At the Department of Energy's request, alternate manufacturing methods were compared to current methods to determine whether alternative fabrication processes could reduce the hazards, especially the production of respirable fines, while producing an equivalent GPHS product. An expert committee performed the evaluation with input from four national laboratories experienced in Pu-238 handling.

  7. EARTHQUAKE RESPONSE ANALYSIS OF STEEL PORTAL FRAMES BY PSEUDODYNAMIC SIMULATION TECHNIQUE USING A GENERAL-PURPOSE FINITE ELEMENT ANALYSIS PROGRAM

    Science.gov (United States)

    Miki, Toshihiro; Mizusawa, Tomisaku; Yamada, Osamu; Toda, Tomoki

    This paper studies the earthquake response of steel portal frames when shear collapse occurs at the centre of the beam. The pseudodynamic simulation technique for the earthquake response analysis of the frames is developed in correspondence with the pseudodynamic substructure testing method. For the thin-walled box element under shear force in the middle of the beam, the numerical analysis is carried out with a general-purpose finite element analysis program. The numerical results show the shear collapse behaviour of stiffened box beams and the corresponding restoring force - displacement relationship of the frames. The advantages of shear collapse of beams for use in frames during earthquakes are discussed from the point of view of the hysteretic energy dissipated by the column base.

  8. Description of a 4-channel FPGA-controlled ADC-based DAQ system for general purpose PMT signals

    Energy Technology Data Exchange (ETDEWEB)

    Conde, Ruben; Salazar, Humberto; Martinez, Oscar [Facultad de Ciencias Fisico Matematicas, BUAP, Puebla (Mexico); Villasenor, L, E-mail: rbn_cnd@hotmail.com [Instituto de Fisica y Matematicas, Universidad Michoacana San Nicolas de Hidalgo, Morelia (Mexico)

    2011-04-01

    We describe a general purpose data acquisition system for PMT signals. Hardware-wise it consists of a 4-channel ADC daughter board, an FPGA mother board, a GPS receiver, an atmospheric pressure sensor and a temperature sensor. The four ADC channels simultaneously sample PMT input signals at a sampling rate of 100 MS/s. We have evaluated the noise of our system, obtaining less than -48.6 dB. This DAQ system includes firmware suitable for pulse processing in cosmic-ray applications. In particular, we describe in detail the way in which this system can be used during the commissioning and early operation phases of the High Altitude Water Cherenkov Observatory (HAWC), currently under construction at Sierra Negra in Mexico.

  9. RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles

    CERN Document Server

    Bailey, Nicholas P; Hansen, Jesper Schmidt; Veldhorst, Arno A; Bøhling, Lasse; Lemarchand, Claire A; Olsen, Andreas E; Bacher, Andreas K; Larsen, Heine; Dyre, Jeppe C; Schrøder, Thomas B

    2015-01-01

    RUMD is a general purpose, high-performance molecular dynamics (MD) simulation package running on graphics processing units (GPUs). RUMD addresses the challenge of utilizing the many-core nature of modern GPU hardware when simulating small to medium system sizes (roughly from a few thousand up to a hundred thousand particles). It has performance that is comparable to other GPU-MD codes at large system sizes and substantially better at smaller sizes. RUMD is open-source and consists of a library written in C++ and the CUDA extension to C, an easy-to-use Python interface, and a set of tools for set-up and post-simulation data analysis. The paper describes RUMD's main features, optimizations and performance benchmarks.

  10. General-purpose heat source: Research and development program, radioisotope thermoelectric generator/thin fragment impact test

    Energy Technology Data Exchange (ETDEWEB)

    Reimus, M.A.H.; Hinckley, J.E.

    1996-11-01

    The general-purpose heat source provides power for space missions by transmitting the heat of ²³⁸Pu decay to an array of thermoelectric elements in a radioisotope thermoelectric generator (RTG). Because the potential for a launch abort or return from orbit exists for any space mission, the heat source response to credible accident scenarios is being evaluated. This test was designed to provide information on the response of a loaded RTG to impact by a fragment similar to the type of fragment produced by breakup of the spacecraft propulsion module system. The results of this test indicated that impact by a thin aluminum fragment traveling at 306 m/s may result in significant damage to the converter housing, failure of one fueled clad, and release of a small quantity of fuel.

  11. Corrosion science general-purpose data model and interface (I): Meanings and issues of design and implementation

    Institute of Scientific and Technical Information of China (English)

    TANG ZiLong

    2008-01-01

    A brand new design of integrated corrosion information system is introduced to meet the constantly increasing demands of material corrosion information. Two concepts, "general-purpose corrosion data model" and "public corrosion data exchanging interface", are suggested to integrate a wide variety of corrosion data sources based on detailed analysis on characteristics of each source in order to promote the information sharing and data mining. The architecture of integrated corrosion information environment is blueprinted. The insight analysis is focused on 1) architecture of the system; 2) data flow and information sharing; 3) roles of system players and their interactions; 4) approaches to data integration. Several key issues are addressed in detail including coverage of data model, data source integration and mitigation, and data granularity from system performance and model acceptance points of view. At the end, the design and implementation approach of general corrosion data model is presented based on cutting edge IT techniques.

  12. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arises to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  13. MetaboLights--an open-access general-purpose repository for metabolomics studies and associated meta-data.

    Science.gov (United States)

    Haug, Kenneth; Salek, Reza M; Conesa, Pablo; Hastings, Janna; de Matos, Paula; Rijnbeek, Mark; Mahendraker, Tejasvi; Williams, Mark; Neumann, Steffen; Rocca-Serra, Philippe; Maguire, Eamonn; González-Beltrán, Alejandra; Sansone, Susanna-Assunta; Griffin, Julian L; Steinbeck, Christoph

    2013-01-01

    MetaboLights (http://www.ebi.ac.uk/metabolights) is the first general-purpose, open-access repository for metabolomics studies, their raw experimental data and associated metadata, maintained by one of the major open-access data providers in molecular biology. Metabolomic profiling is an important tool for research into biological functioning and into the systemic perturbations caused by diseases, diet and the environment. The effectiveness of such methods depends on the availability of public open data across a broad range of experimental methods and conditions. The MetaboLights repository, powered by the open source ISA framework, is cross-species and cross-technique. It will cover metabolite structures and their reference spectra as well as their biological roles, locations, concentrations and raw data from metabolic experiments. Studies automatically receive a stable unique accession number that can be used as a publication reference (e.g. MTBLS1). At present, the repository includes 15 submitted studies, encompassing 93 protocols for 714 assays, and spanning over 8 different species including human, Caenorhabditis elegans, Mus musculus and Arabidopsis thaliana. Eight hundred twenty-seven of the metabolites identified in these studies have been mapped to ChEBI. These studies cover a variety of techniques, including NMR spectroscopy and mass spectrometry.

  14. MetaboLights—an open-access general-purpose repository for metabolomics studies and associated meta-data

    Science.gov (United States)

    Haug, Kenneth; Salek, Reza M.; Conesa, Pablo; Hastings, Janna; de Matos, Paula; Rijnbeek, Mark; Mahendraker, Tejasvi; Williams, Mark; Neumann, Steffen; Rocca-Serra, Philippe; Maguire, Eamonn; González-Beltrán, Alejandra; Sansone, Susanna-Assunta; Griffin, Julian L.; Steinbeck, Christoph

    2013-01-01

    MetaboLights (http://www.ebi.ac.uk/metabolights) is the first general-purpose, open-access repository for metabolomics studies, their raw experimental data and associated metadata, maintained by one of the major open-access data providers in molecular biology. Metabolomic profiling is an important tool for research into biological functioning and into the systemic perturbations caused by diseases, diet and the environment. The effectiveness of such methods depends on the availability of public open data across a broad range of experimental methods and conditions. The MetaboLights repository, powered by the open source ISA framework, is cross-species and cross-technique. It will cover metabolite structures and their reference spectra as well as their biological roles, locations, concentrations and raw data from metabolic experiments. Studies automatically receive a stable unique accession number that can be used as a publication reference (e.g. MTBLS1). At present, the repository includes 15 submitted studies, encompassing 93 protocols for 714 assays, and spanning 8 different species including human, Caenorhabditis elegans, Mus musculus and Arabidopsis thaliana. Eight hundred twenty-seven of the metabolites identified in these studies have been mapped to ChEBI. These studies cover a variety of techniques, including NMR spectroscopy and mass spectrometry. PMID:23109552

  15. The EB Factory Project I. A Fast, Neural Net Based, General Purpose Light Curve Classifier Optimized for Eclipsing Binaries

    CERN Document Server

    Paegert, M; Burger, D M

    2014-01-01

    We describe a new neural-net based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as LSST. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes,...
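
    The classifier described above feeds a neural network with a compact geometric encoding of the folded light curve shape plus a chi-square-like morphology parameter. A minimal sketch of that feature construction and classification, assuming NumPy and scikit-learn are available (the feature details here are illustrative, not the paper's exact parameterization):

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def shape_features(time, flux, period, n_bins=16):
          # Fold on the period and bin to a fixed-length representation of the
          # overall light curve shape.
          phase = (time % period) / period
          idx = np.clip((phase * n_bins).astype(int), 0, n_bins - 1)
          flux_norm = (flux - flux.mean()) / (flux.std() + 1e-12)
          binned = np.array([flux_norm[idx == b].mean() if np.any(idx == b) else 0.0
                             for b in range(n_bins)])
          # Chi-square-like scatter about the binned template captures
          # higher-order morphology beyond the coarse shape.
          chi2 = float(np.mean((flux_norm - binned[idx]) ** 2))
          return np.append(binned, chi2)

      # X: one feature vector per star, y: class labels such as "EB" or "RRab".
      # X, y = build_feature_matrix(catalog)   # hypothetical loader
      # clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
      # print(clf.predict([shape_features(t_new, f_new, p_new)]))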

  16. Deposition, characterization, and in vivo performance of parylene coating on general-purpose silicone for examining potential biocompatible surface modifications

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Chia-Man [Division of Pediatric Surgery, Department of Surgery, Taichung Veterans General Hospital, 160, Sec. 3, Taichung Port Rd., Taichung 40705, Taiwan, ROC (China); Department of Medicine, National Yang-Ming University, 155, Sec. 2, Linong Street, Taipei 11221, Taiwan, ROC (China); Shiao, Chiao-Ju [Department of Materials Science and Engineering, Feng Chia University, 100, Wen-Hwa Rd., Taichung 40724, Taiwan, ROC (China); Chung, Chi-Jen, E-mail: cjchung@seed.net.tw [Department of Dental Technology and Materials Science, Central Taiwan University of Science and Technology, 666 Buzih Rd., Beitun District, Taichung 40601, Taiwan, ROC (China); He, Ju-Liang [Department of Materials Science and Engineering, Feng Chia University, 100, Wen-Hwa Rd., Taichung 40724, Taiwan, ROC (China)

    2013-12-31

    In this study, a thorough investigation of parylene coatings was conducted, covering microstructure (X-ray diffraction (XRD) and cold field emission scanning electron microscopy (FESEM)), mechanical properties (pencil hardness and cross-cut adhesion tests), surface properties (water contact angle measurement, IR, and X-ray photoelectron spectroscopy (XPS)), and biocompatibility tests (fibroblast cell culture, platelet adhesion, and animal studies). The results revealed that parylene, a crystalline and brittle coating, exhibited satisfactory film adhesion and relative hydrophobicity, thereby contributing to its effective barrier properties. Fibroblast cell culturing on the parylene-deposited specimens demonstrated improved cell proliferation and blood compatibility equivalent or superior to that of the medical-grade silicone currently used clinically. In the animal study, parylene coatings exhibited subcutaneous inflammatory reactions similar to those of the medical-grade silicone. Both the in vitro and in vivo tests demonstrated the satisfactory biocompatibility of parylene coatings. - Highlights: • A complete investigation of the characteristics of parylene coatings on general-purpose silicones. • Microstructures, surface properties and mechanical properties of the parylene coatings were examined. • In vitro tests (cell culture, platelet adhesion) and animal studies revealed satisfactory biocompatibility. • An alternative to medical-grade silicones is expected to be obtained.

  17. GPHS-RTG system explosion test direct course experiment 5000. [General Purpose Heat Source-Radioisotope Thermoelectric Generator

    Energy Technology Data Exchange (ETDEWEB)

    1984-03-01

    The General Purpose Heat Source-Radioisotope Thermoelectric Generator (GPHS-RTG) has been designed and is being built to provide electrical power for spacecraft to be launched on the Space Shuttle. The objective of the RTG System Explosion Test was to expose a mock-up of the GPHS-RTG with a simulated heat source to the overpressure and impulse representative of a potential upper-magnitude explosion of the Space Shuttle. The test was designed so that the heat source module would experience an overpressure at which the survival of the fuel element cladding would be expected to be marginal. Thus, the mock-up was placed where the predicted incident overpressure would be 1300 psi. The mock-up was mounted in an orientation representative of the launch configuration on the spacecraft to be used on the NASA Galileo Mission. The incident overpressure measured was in the range of 1400 to 2100 psi. The mock-up and simulated heat source were destroyed and only very small fragments were recovered. This damage is believed to have resulted from a combination of the overpressure and impact by very high velocity fragments from the ANFO sphere. Post-test analysis indicated that extreme working of the iridium clad material occurred, indicative of intensive impulsive loading on the metal.

  18. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  19. Corrosion science general-purpose data model and interface (Ⅱ): OOD design and corrosion data markup language (CDML)

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    With object-oriented design/analysis, a general-purpose corrosion data model (GPCDM) and a corrosion data markup language (CDML) are created to meet the increasing demand for multi-source corrosion data integration and sharing. A "corrosion data island" is proposed to model corrosion data in a comprehensive and self-contained way. The island has a tree-like structure with six first-level child nodes that characterize every important aspect of the corrosion data, and each first-level node recursively holds more child nodes as data containers. The data structure inside the island is designed to flatten the learning curve and lower the acceptance barrier of GPCDM and CDML. A detailed explanation of the role and meaning of the first-level nodes is presented, with carefully chosen examples, in order to review the design goals and requirements proposed in the previous paper. Then, the CDML tag structure and the CDML application programming interface (API) are introduced in logical order. Finally, the roles of GPCDM, CDML and its API in multi-source corrosion data integration and information sharing are highlighted and projected.
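
    The abstract describes a tree-structured "corrosion data island" serialized in an XML markup language. A minimal sketch of what such an island might look like, built with Python's standard library; the six first-level node names below are hypothetical placeholders, since the record does not list the published CDML schema:

      import xml.etree.ElementTree as ET

      def build_corrosion_island(record):
          # One self-contained "corrosion data island"; the first-level node
          # names are illustrative only, not the actual CDML vocabulary.
          island = ET.Element("CorrosionDataIsland", id=record["id"])
          for section in ("Material", "Environment", "ExposureConditions",
                          "TestMethod", "Results", "Provenance"):
              node = ET.SubElement(island, section)
              for key, value in record.get(section.lower(), {}).items():
                  ET.SubElement(node, key).text = str(value)
          return island

      record = {"id": "CD-0001",
                "material": {"Alloy": "Q235", "Form": "plate"},
                "results": {"CorrosionRate": "0.12 mm/a"}}
      print(ET.tostring(build_corrosion_island(record), encoding="unicode"))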

  20. Corrosion science general-purpose data model and interface (Ⅱ): OOD design and corrosion data markup language (CDML)

    Institute of Scientific and Technical Information of China (English)

    TANG ZiLong

    2008-01-01

    With object-oriented design/analysis, a general-purpose corrosion data model (GPCDM) and a corrosion data markup language (CDML) are created to meet the increasing demand for multi-source corrosion data integration and sharing. A "corrosion data island" is proposed to model corrosion data in a comprehensive and self-contained way. The island has a tree-like structure with six first-level child nodes that characterize every important aspect of the corrosion data, and each first-level node recursively holds more child nodes as data containers. The data structure inside the island is designed to flatten the learning curve and lower the acceptance barrier of GPCDM and CDML. A detailed explanation of the role and meaning of the first-level nodes is presented, with carefully chosen examples, in order to review the design goals and requirements proposed in the previous paper. Then, the CDML tag structure and the CDML application programming interface (API) are introduced in logical order. Finally, the roles of GPCDM, CDML and its API in multi-source corrosion data integration and information sharing are highlighted and projected.

  1. Research and Development of General-Purpose Computation on GPUs (GPGPU)

    Institute of Scientific and Technical Information of China (English)

    林一松; 唐玉华; 唐滔

    2011-01-01

    With the development of semiconductor technology, the number of transistors integrated on a chip keeps increasing, and the computation and memory capacity of graphics processing units improve rapidly. So far, the floating-point computing capacity of GPUs has greatly exceeded that of mainstream CPUs, and the potential of GPUs in non-graphics computing, especially in high-performance computing, has attracted more and more researchers' attention. This paper introduces the principles of general-purpose computation on GPUs and the latest research results on GPGPU architecture and programming models from both the research community and industry.

  2. Cafe Variome: general-purpose software for making genotype-phenotype data discoverable in restricted or open access contexts.

    Science.gov (United States)

    Lancaster, Owen; Beck, Tim; Atlan, David; Swertz, Morris; Thangavelu, Dhiwagaran; Veal, Colin; Dalgleish, Raymond; Brookes, Anthony J

    2015-10-01

    Biomedical data sharing is desirable, but problematic. Data "discovery" approaches, which establish the existence rather than the substance of data, precisely connect data owners with data seekers, and thereby promote data sharing. Cafe Variome (http://www.cafevariome.org) was therefore designed to provide a general-purpose, Web-based, data discovery tool that can be quickly installed by any genotype-phenotype data owner, or network of data owners, to make safe or sensitive content appropriately discoverable. Data fields or content of any type can be accommodated, from simple ID and label fields through to extensive genotype and phenotype details based on ontologies. The system provides a "shop window" in front of data, with the main interfaces being a simple search box and a powerful "query-builder" that enable very elaborate queries to be formulated. After a successful search, counts of records are reported grouped by "openAccess" (data may be directly accessed), "linkedAccess" (a source link is provided), and "restrictedAccess" (facilitated data requests and subsequent provision of approved records). An administrator interface provides a wide range of options for system configuration, enabling highly customized single-site or federated networks to be established. Current uses include rare disease data discovery, patient matchmaking, and a Beacon Web service.

  3. In vivo dosimetry in intraoperative electron radiotherapy. microMOSFETs, radiochromic films and a general-purpose linac

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Tarjuelo, Juan; Marco-Blancas, Noelia de; Santos-Serra, Agustin; Quiros-Higueras, Juan David [Consorcio Hospitalario Provincial de Castellon, Servicio de Radiofisica y Proteccion Radiologica, Castellon de la Plana (Spain); Bouche-Babiloni, Ana; Morillo-Macias, Virginia; Ferrer-Albiach, Carlos [Consorcio Hospitalario Provincial de Castellon, Servicio de Oncologia Radioterapica, Castellon de la Plana (Spain)

    2014-11-15

    In vivo dosimetry is desirable for the verification, recording, and eventual correction of treatment in intraoperative electron radiotherapy (IOERT). Our aim is to share our experience of metal oxide semiconductor field-effect transistors (MOSFETs) and radiochromic films with patients undergoing IOERT using a general-purpose linac. We used MOSFETs inserted into sterile bronchus catheters and radiochromic films that were cut, digitized, and sterilized by means of gas plasma. In all, 59 measurements were taken from 27 patients involving 15 primary tumors (seven breast and eight non-breast tumors) and 12 relapses. Data were subjected to an outlier analysis and classified according to their compatibility with the relevant doses. Associations were sought regarding the type of detector, breast and non-breast irradiation, and the radiation oncologist's assessment of the difficulty of detector placement. At the same time, 19 measurements were carried out at the tumor bed with both detectors. MOSFET measurements (D = 93.5 %, s_D = 6.5 %) were not significantly shifted from film measurements (D = 96.0 %, s_D = 5.5 %; p = 0.109), and no associations were found (p = 0.526, p = 0.295, and p = 0.501, respectively). As regards measurements performed at the tumor bed with both detectors, MOSFET measurements (D = 95.0 %, s_D = 5.4 %) were not significantly shifted from film measurements (D = 96.4 %, s_D = 5.0 %; p = 0.363). In vivo dosimetry can produce satisfactory results at every studied location with a general-purpose linac. Detector choice should depend on user factors, not on the detector performance itself. Surgical team collaboration is crucial to success. (orig.)

  4. Development and validation of a general-purpose ASIC chip for the control of switched reluctance machines

    Energy Technology Data Exchange (ETDEWEB)

    Chen Haijin [National ASIC System Engineering Research Center, Southeast University, Nanjing 210096 (China); Jiang-Su Provincial Key Lab of ASIC Design, Nantong University, Nantong 226019 (China)], E-mail: chen.hj@ntu.edu.cn; Lu Shengli; Shi Longxing [National ASIC System Engineering Research Center, Southeast University, Nanjing 210096 (China)

    2009-03-15

    A general-purpose application specific integrated circuit (ASIC) chip for the control of switched reluctance machines (SRMs) was designed and validated to fill the gap between microcontroller capability and the controller requirements of high-performance switched reluctance drive (SRD) systems. It can be used to control an SRM running either at low speed or at high speed, i.e., either in chopped current control (CCC) mode or in angular position control (APC) mode. The main functions of the chip include filtering and cycle calculation of the rotor angular position signals, commutation logic according to the rotor cycle and the turn-on/turn-off angles (θon/θoff), generation of controllable pulse width modulation (PWM) waveforms, chopping control with adjustable delay time, and commutation control with adjustable delay time. All control parameters of the chip are set online by the microcontroller through a serial peripheral interface (SPI). The chip has been designed with a standard-cell-based design methodology and implemented in the Central Semiconductor Manufacturing Corporation (CSMC) 0.5 µm complementary metal-oxide-semiconductor (CMOS) process technology. After a successful automatic test equipment (ATE) test using Nextest's Maverick test system, the chip was further validated in an experimental three-phase 6/2-pole SRD system. Both the ATE test and the experimental validation show that the chip meets the control requirements of high-performance SRD systems and simplifies the controller construction. For a resolution of 0.36° (electrical), the chip's maximum processable frequency of the rotor angular position signals is 10 kHz, corresponding to 300,000 rev/min for a three-phase 6/2-pole SRM.
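
    The commutation function described above, which energizes each phase while the rotor's electrical angle lies inside a turn-on/turn-off window, can be sketched in software. A minimal illustration of the idea in Python (not the ASIC's actual logic), assuming a three-phase machine with phase windows offset by equal electrical angles:

      def commutation(theta_e, theta_on, theta_off, n_phases=3):
          """Return a list of booleans: True if the phase should be energized.

          theta_e   -- rotor electrical angle in degrees
          theta_on  -- turn-on angle of the conduction window
          theta_off -- turn-off angle of the conduction window
          """
          states = []
          for k in range(n_phases):
              # Shift each phase's window by its electrical offset.
              angle = (theta_e - k * 360.0 / n_phases) % 360.0
              if theta_on <= theta_off:
                  on = theta_on <= angle < theta_off
              else:  # window wraps around 360 degrees
                  on = angle >= theta_on or angle < theta_off
              states.append(on)
          return states

      # Example: turn on at 30 deg and off at 150 deg (electrical).
      print(commutation(theta_e=45.0, theta_on=30.0, theta_off=150.0))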

  5. Development and validation of a general-purpose ASIC chip for the control of switched reluctance machines

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hai-Jin [National ASIC System Engineering Research Center, Southeast University, Nanjing 210096 (China)]|[Jiang-Su Provincial Key Lab of ASIC Design, Nantong University, Nantong 226019 (China); Lu, Sheng-Li; Shi, Long-Xing [National ASIC System Engineering Research Center, Southeast University, Nanjing 210096 (China)

    2009-03-15

    A general-purpose application specific integrated circuit (ASIC) chip for the control of switched reluctance machines (SRMs) was designed and validated to fill the gap between microcontroller capability and the controller requirements of high-performance switched reluctance drive (SRD) systems. It can be used to control an SRM running either at low speed or at high speed, i.e., either in chopped current control (CCC) mode or in angular position control (APC) mode. The main functions of the chip include filtering and cycle calculation of the rotor angular position signals, commutation logic according to the rotor cycle and the turn-on/turn-off angles (θon/θoff), generation of controllable pulse width modulation (PWM) waveforms, chopping control with adjustable delay time, and commutation control with adjustable delay time. All control parameters of the chip are set online by the microcontroller through a serial peripheral interface (SPI). The chip has been designed with a standard-cell-based design methodology and implemented in the Central Semiconductor Manufacturing Corporation (CSMC) 0.5 µm complementary metal-oxide-semiconductor (CMOS) process technology. After a successful automatic test equipment (ATE) test using Nextest's Maverick test system, the chip was further validated in an experimental three-phase 6/2-pole SRD system. Both the ATE test and the experimental validation show that the chip meets the control requirements of high-performance SRD systems and simplifies the controller construction. For a resolution of 0.36° (electrical), the chip's maximum processable frequency of the rotor angular position signals is 10 kHz, corresponding to 300,000 rev/min for a three-phase 6/2-pole SRM. (author)

  6. Time-Cost Scheduler for Technological and Economic Challenges Related to Customized Cores and General Purpose Processors

    Directory of Open Access Journals (Sweden)

    Munesh Singh Chauhan

    2014-01-01

    With the renewed interest in customizing embedded processors for application-specific needs, it becomes imperative to understand the viability of customization, both economically and technologically, so that pitfalls can be avoided. Customization and scalability are two terms that are often used synonymously to denote adding or removing functional units, or increasing or decreasing the number of ports in memory register banks, in processors. The advantages of customization are improved performance, reduced silicon area and power efficiency. With the option of parameterizing the inclusion or exclusion of functional units, the hardware can be made leaner and thus more energy efficient, and removal of redundant units shortens critical paths in circuits. Although these advantages look significant, customization carries its own pitfalls, which are often intractable. First, it carries an immense overhead if performed on general-purpose processors: changes in the hardware architecture result in code mismatch and thus necessitate instruction set architecture (ISA) extensions or, at times, a complete overhaul. Besides, users are often reluctant to adapt to changes in the ISA, as this involves additional training. The final death knell may come from the limited commercial use of a customized processor, resulting in economic losses due to under-utilization of production units. Hence a new insight is needed that exploits present technological advances in processor customization while avoiding the adverse economic fallout that comes from blindly forcing customization everywhere. A graded and selective use of customization, in consonance with market and user needs, is suggested. Predicting the development course of microprocessors in general, and embedded processors in particular, will therefore help businesses focus correctly on the performance and efficiency of systems that use these processors.

  7. The ESPAT tool: a general-purpose DSS shell for solving stochastic optimization problems in complex river-aquifer systems

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2015-04-01

    Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their use in practice is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named the Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results; therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed using historical (14th-century) rights, which give priority to the most traditional irrigation district over the 20th-century agricultural developments. Its size makes it possible to use either the SDP or
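
    The standard SDP approach mentioned above optimizes reservoir releases by backward recursion over discretized storage states and inflow scenarios. A toy sketch of that recursion for a single reservoir follows; it is an illustration under invented benefit and discretization assumptions, not the ESPAT tool itself:

      import numpy as np

      storages = np.arange(0, 101, 10)        # discretized storage states (hm3)
      releases = np.arange(0, 51, 10)         # candidate releases (hm3 per stage)
      inflows  = np.array([10, 30, 50])       # inflow scenarios (hm3 per stage)
      p_inflow = np.array([1/3, 1/3, 1/3])    # scenario probabilities
      capacity, n_stages = 100, 12

      def benefit(release):
          # Hypothetical concave benefit of delivering water to the demand sites.
          return 10.0 * np.sqrt(release)

      # value[i] = expected benefit-to-go when a stage starts with storage storages[i].
      value = np.zeros(len(storages))
      for stage in reversed(range(n_stages)):
          new_value = np.empty_like(value)
          for i, s in enumerate(storages):
              best = -np.inf
              for r in releases:
                  if r > s + inflows.min():
                      continue                # keep the release feasible in every scenario
                  expected = 0.0
                  for q, p in zip(inflows, p_inflow):
                      s_next = min(max(s + q - r, 0), capacity)
                      j = int(np.abs(storages - s_next).argmin())
                      expected += p * (benefit(r) + value[j])
                  best = max(best, expected)
              new_value[i] = best
          value = new_value

      print("expected benefit starting from a full reservoir:", value[-1])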

  8. Ultra-fast digital tomosynthesis reconstruction using general-purpose GPU programming for image-guided radiation therapy.

    Science.gov (United States)

    Park, Justin C; Park, Sung Ho; Kim, Jin Sung; Han, Youngyih; Cho, Min Kook; Kim, Ho Kyung; Liu, Zhaowei; Jiang, Steve B; Song, Bongyong; Song, William Y

    2011-08-01

    The purpose of this work is to demonstrate an ultra-fast reconstruction technique for digital tomosynthesis (DTS) imaging based on the algorithm proposed by Feldkamp, Davis, and Kress (FDK), using a standard general-purpose graphics processing unit (GPGPU) programming interface. To this end, the FDK-based DTS algorithm was programmed "in-house" in C, with 1) a GPU and 2) a central processing unit (CPU) implementation. The GPU card consisted of 480 processing cores (2 x 240 dual chip) with 1,242 MHz processing clock speed and 1,792 MB memory space. In terms of CPU hardware, we used a 2.68 GHz clock speed and 12.0 GB DDR3 RAM on a 64-bit OS. The performance of the proposed algorithm was tested on twenty-five patient cases (5 lung, 5 liver, 10 prostate, and 5 head-and-neck) scanned in either full-fan or half-fan mode on our cone-beam computed tomography (CBCT) system. For the full-fan scans, the projections from 157.5°-202.5° (45°-scan) were used to reconstruct coronal DTS slices, whereas for the half-fan scans, the projections from both 157.5°-202.5° and 337.5°-22.5° (2 x 45°-scan) were used to reconstruct larger-FOV coronal DTS slices. For this study, we chose a 45°-scan angle that contained ~80 projections for the full-fan mode and ~160 projections with the 2 x 45°-scan angle for the half-fan mode, each with 1024 x 768 pixels at 32-bit precision. Absolute pixel value differences, profiles, and contrast-to-noise ratio (CNR) calculations were performed to compare and evaluate the images reconstructed using the GPU- and CPU-based implementations. The dependence of reconstruction time on the reconstruction volume was also tested with (512 x 512) x 16, 32, 64, 128, and 256 slices. In the end, the GPU-based implementation took, at most, 1.3 and 2.5 seconds to complete a full reconstruction of a 512 x 512 x 256 volume for the full-fan and half-fan modes, respectively. In turn, this meant that our implementation can process >13 projections per second (pps) and >18 pps for the full
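
    The FDK workflow summarized above amounts to filtering each projection and backprojecting it over the limited DTS arc. A heavily simplified 2-D parallel-beam analogue of that workflow, written in Python/NumPy purely for illustration (the clinical implementation is a cone-beam GPU code):

      import numpy as np

      def ramp_filter(projections):
          # Apply a ramp filter to each projection row in the Fourier domain.
          n = projections.shape[1]
          freqs = np.abs(np.fft.fftfreq(n))
          return np.real(np.fft.ifft(np.fft.fft(projections, axis=1) * freqs, axis=1))

      def backproject(projections, angles_deg, size):
          # Accumulate the filtered projections over the limited DTS arc.
          recon = np.zeros((size, size))
          xs = np.arange(size) - size / 2
          X, Y = np.meshgrid(xs, xs)
          for proj, ang in zip(projections, np.deg2rad(angles_deg)):
              t = X * np.cos(ang) + Y * np.sin(ang) + size / 2
              idx = np.clip(t.astype(int), 0, size - 1)
              recon += proj[idx]
          return recon * np.pi / len(angles_deg)

      # ~80 projections over a 45-degree arc, as in the full-fan DTS scans above.
      size = 128
      angles = np.linspace(157.5, 202.5, 80)
      projections = np.random.rand(len(angles), size)   # stand-in for measured data
      slice_img = backproject(ramp_filter(projections), angles, size)
      print(slice_img.shape)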

  9. EPDM blend modification of general-purpose resins

    Institute of Scientific and Technical Information of China (English)

    马国维; 方征平; 许承威

    2001-01-01

    The methods and mechanisms by which ethylene-propylene-diene monomer (EPDM) is used as an impact modifier for general-purpose resins such as polyvinyl chloride, polyethylene and polystyrene are reviewed. The effects and prospects of EPDM in multicomponent blends of general-purpose resins are discussed, providing a theoretical foundation for the blend modification of large-volume general-purpose resins.

  10. Bilingual Language Control and General Purpose Cognitive Control among Individuals with Bilingual Aphasia: Evidence Based on Negative Priming and Flanker Tasks

    OpenAIRE

    Tanya Dash; Kar, Bhoomika R.

    2014-01-01

    Background. Bilingualism results in an added advantage with respect to cognitive control. The interaction between bilingual language control and general purpose cognitive control systems can also be understood by studying executive control among individuals with bilingual aphasia. Objectives. The current study examined the subcomponents of cognitive control in bilingual aphasia. A case study approach was used to investigate whether cognitive control and language control are two separate syste...

  11. Bilingual language control and general purpose cognitive control among individuals with bilingual aphasia: evidence based on negative priming and flanker tasks.

    Science.gov (United States)

    Dash, Tanya; Kar, Bhoomika R

    2014-01-01

    Bilingualism results in an added advantage with respect to cognitive control. The interaction between bilingual language control and general purpose cognitive control systems can also be understood by studying executive control among individuals with bilingual aphasia. Objectives: The current study examined the subcomponents of cognitive control in bilingual aphasia. A case study approach was used to investigate whether cognitive control and language control are two separate systems and how factors related to bilingualism interact with control processes. Four individuals with bilingual aphasia completed a language background questionnaire, a picture description task, and two experimental tasks (a nonlinguistic negative priming task and linguistic and nonlinguistic versions of the flanker task). A descriptive approach was used to analyse the data using reaction time and accuracy measures. Cumulative distribution function plots were used to visualize the variations in performance across conditions. The results highlight the distinction between general purpose cognitive control and bilingual language control mechanisms. All participants showed predominant use of the reactive control mechanism to compensate for the limited resources system. Independent yet interactive systems for bilingual language control and general purpose cognitive control were postulated based on the experimental data derived from individuals with bilingual aphasia.

  12. Research on General-Purpose Numerical Computation Using GPUs

    Institute of Scientific and Technical Information of China (English)

    徐品; 蓝善祯; 刘兰兰

    2009-01-01

    In recent years, graphics processing units (GPUs) have matured rapidly, and their scope of application has expanded beyond computer graphics itself to general-purpose numerical computation. This paper introduces the principles and methods of using modern GPUs for general-purpose computation, and compares the computation speed of GPU and CPU algorithms in image processing and scientific computing. The experimental results show that the GPU has a clear advantage over the CPU in general-purpose computing.

  13. Modeling the business object platform-independent model and its completeness

    Institute of Scientific and Technical Information of China (English)

    冯锦丹; 战德臣; 聂兰顺; 徐晓飞; 李晋; 韩毅斌

    2011-01-01

    To support the well-formed design of platform-independent models (PIMs) of business objects, and thereby support model-driven development of enterprise software and applications, the semantic domain and granularity of the business object concept were extended. Based on studies of the Interoperable Configurable Enterprise Model Driven Architecture (ICEMDA), a formal definition of the business object and its platform-independent model are presented. From the perspective of the basic constituent elements (data, operations, states and their interrelationships), semantic completeness constraints for the business object model are provided. Application results show that this work provides basic theoretical support for platform-independent modeling based on coarse-grained business objects.

  14. An object-oriented multi-threaded software beamformation toolbox

    DEFF Research Database (Denmark)

    Hansen, Jens Munk; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2011-01-01

    Focusing and apodization are an essential part of signal processing in ultrasound imaging. Although the fundamental principles are simple, the dramatic increase in computational power of CPUs, GPUs, and FPGAs motivates the development of software-based beamformers, which further improve image quality (and the accuracy of velocity estimation). For developing new imaging methods, it is important to establish proof-of-concept before using resources on real-time implementations. With this in mind, an effective and versatile Matlab toolbox written in C++ has been developed to assist in developing...
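
    The focusing step that such a beamformation toolbox implements is, at its core, delay-and-sum: delay each channel by the two-way travel time to an image point, then sum. A toy Python sketch of that operation for a single image point (not the toolbox's C++ implementation; geometry and numbers are illustrative):

      import numpy as np

      def delay_and_sum(rf, fs, c, element_x, focus):
          """Toy delay-and-sum beamformer for one image point.

          rf        -- (n_elements, n_samples) received RF data
          fs        -- sampling frequency [Hz]
          c         -- speed of sound [m/s]
          element_x -- (n_elements,) lateral element positions [m]
          focus     -- (x, z) image point [m]
          """
          x, z = focus
          out = 0.0
          for i, ex in enumerate(element_x):
              # Two-way travel time: transmit from the array centre, receive on element i.
              t = (np.hypot(x, z) + np.hypot(x - ex, z)) / c
              idx = int(round(t * fs))
              if idx < rf.shape[1]:
                  out += rf[i, idx]       # apodization weights could be applied here
          return out

      # Tiny synthetic example
      fs, c = 40e6, 1540.0
      element_x = np.linspace(-5e-3, 5e-3, 32)
      rf = np.random.randn(32, 4000)
      print(delay_and_sum(rf, fs, c, element_x, focus=(0.0, 30e-3)))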

  15. FORMATION AND EVOLUTION OF A MULTI-THREADED SOLAR PROMINENCE

    Energy Technology Data Exchange (ETDEWEB)

    Luna, M. [CRESST and Space Weather Laboratory NASA/GSFC, Greenbelt, MD 20771 (United States); Karpen, J. T. [NASA/GSFC, Greenbelt, MD 20771 (United States); DeVore, C. R. [Naval Research Laboratory, Washington, DC 20375 (United States)

    2012-02-10

    We investigate the process of formation and subsequent evolution of prominence plasma in a filament channel and its overlying arcade. We construct a three-dimensional time-dependent model of an intermediate quiescent prominence suitable to be compared with observations. We combine the magnetic field structure of a three-dimensional sheared double arcade with one-dimensional independent simulations of many selected flux tubes, in which the thermal nonequilibrium process governs the plasma evolution. We have found that the condensations in the corona can be divided into two populations: threads and blobs. Threads are massive condensations that linger in the flux tube dips. Blobs are ubiquitous small condensations that are produced throughout the filament and overlying arcade magnetic structure, and rapidly fall to the chromosphere. The threads are the principal contributors to the total mass, whereas the blob contribution is small. The total prominence mass is in agreement with observations, assuming reasonable filling factors of order 0.001 and a fixed number of threads. The motion of the threads is basically horizontal, while blobs move in all directions along the field. We have generated synthetic images of the whole structure in an Hα proxy and in two EUV channels of the Atmospheric Imaging Assembly instrument on board Solar Dynamics Observatory, thus showing the plasma at cool, warm, and hot temperatures. The predicted differential emission measure of our system agrees very well with observations in the temperature range log T = 4.6-5.7. We conclude that the sheared-arcade magnetic structure and plasma behavior driven by thermal nonequilibrium fit the abundant observational evidence well for typical intermediate prominences.

  16. Formation and Evolution of a Multi-Threaded Prominence

    Science.gov (United States)

    Luna, M.; Karpen, J. T.; DeVore, C. R.

    2012-01-01

    We investigate the process of formation and subsequent evolution of prominence plasma in a filament channel and its overlying arcade. We construct a three-dimensional time-dependent model of a filament-channel prominence suitable to be compared with observations. We combine this magnetic field structure with one-dimensional independent simulations of many flux tubes. The magnetic structure is a three-dimensional sheared double arcade, and the thermal non-equilibrium process governs the plasma evolution. We have found that the condensations in the corona can be divided into two populations: threads and blobs. Threads are massive condensations that linger in the field line dips. Blobs are ubiquitous small condensations that are produced throughout the filament and overlying arcade magnetic structure, and rapidly fall to the chromosphere. The total prominence mass is in agreement with observations. The threads are the principal contributors to the total mass, whereas the blob contribution is small. The motion of the threads is basically horizontal, while blobs move in all directions along the field. The peak velocities for both populations are comparable, but there is a weak tendency for the velocity to increase with the inclination, and the blobs with near-vertical motion have the largest velocities. We have generated synthetic images of the whole structure in an Hα proxy and in two EUV channels of the AIA instrument aboard SDO. These images show the plasma at cool, warm and hot temperatures. The theoretical differential emission measure of our system agrees very well with observations in the temperature range log T = 4.6-5.7. We conclude that the sheared-arcade magnetic structure and plasma dynamics fit the abundant observational evidence well.

  17. Formation and evolution of a multi-threaded prominence

    CERN Document Server

    Luna, M; DeVore, C R

    2012-01-01

    We investigate the process of formation and subsequent evolution of prominence plasma in a filament channel and its overlying arcade. We construct a three-dimensional time-dependent model of an intermediate quiescent prominence. We combine the magnetic field structure of a three-dimensional sheared double arcade with one-dimensional independent simulations of many flux tubes, in which the thermal nonequilibrium process governs the plasma evolution. We have found that the condensations in the corona can be divided into two populations: threads and blobs. Threads are massive condensations that linger in the field line dips. Blobs are ubiquitous small condensations that are produced throughout the filament and overlying arcade magnetic structure, and rapidly fall to the chromosphere. The threads are the principal contributors to the total mass. The total prominence mass is in agreement with observations, assuming a reasonable filling factor. The motion of the threads is basically horizontal, while blobs move in...

  18. Formation and Evolution of a Multi-threaded Solar Prominence

    Science.gov (United States)

    Luna, M.; Karpen, J. T.; DeVore, C. R.

    2012-02-01

    We investigate the process of formation and subsequent evolution of prominence plasma in a filament channel and its overlying arcade. We construct a three-dimensional time-dependent model of an intermediate quiescent prominence suitable to be compared with observations. We combine the magnetic field structure of a three-dimensional sheared double arcade with one-dimensional independent simulations of many selected flux tubes, in which the thermal nonequilibrium process governs the plasma evolution. We have found that the condensations in the corona can be divided into two populations: threads and blobs. Threads are massive condensations that linger in the flux tube dips. Blobs are ubiquitous small condensations that are produced throughout the filament and overlying arcade magnetic structure, and rapidly fall to the chromosphere. The threads are the principal contributors to the total mass, whereas the blob contribution is small. The total prominence mass is in agreement with observations, assuming reasonable filling factors of order 0.001 and a fixed number of threads. The motion of the threads is basically horizontal, while blobs move in all directions along the field. We have generated synthetic images of the whole structure in an Hα proxy and in two EUV channels of the Atmospheric Imaging Assembly instrument on board Solar Dynamics Observatory, thus showing the plasma at cool, warm, and hot temperatures. The predicted differential emission measure of our system agrees very well with observations in the temperature range log T = 4.6-5.7. We conclude that the sheared-arcade magnetic structure and plasma behavior driven by thermal nonequilibrium fit the abundant observational evidence well for typical intermediate prominences.

  19. AN MHD AVALANCHE IN A MULTI-THREADED CORONAL LOOP

    Energy Technology Data Exchange (ETDEWEB)

    Hood, A. W.; Cargill, P. J.; Tam, K. V. [School of Mathematics and Statistics, University of St Andrews, St Andrews, Fife, KY16 9SS (United Kingdom); Browning, P. K., E-mail: awh@st-andrews.ac.uk [School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom)

    2016-01-20

    For the first time, we demonstrate how an MHD avalanche might occur in a multithreaded coronal loop. Considering 23 non-potential magnetic threads within a loop, we use 3D MHD simulations to show that only one thread needs to be unstable in order to start an avalanche even when the others are below marginal stability. This has significant implications for coronal heating in that it provides for energy dissipation with a trigger mechanism. The instability of the unstable thread follows the evolution determined in many earlier investigations. However, once one stable thread is disrupted, it coalesces with a neighboring thread and this process disrupts other nearby threads. Coalescence with these disrupted threads then occurs leading to the disruption of yet more threads as the avalanche develops. Magnetic energy is released in discrete bursts as the surrounding stable threads are disrupted. The volume integrated heating, as a function of time, shows short spikes suggesting that the temporal form of the heating is more like that of nanoflares than of constant heating.

  20. Quantitative security analysis for multi-threaded programs

    NARCIS (Netherlands)

    Ngo, Tri Minh; Huisman, Marieke

    2013-01-01

    Quantitative theories of information flow give us an approach to relax the absolute confidentiality properties that are difficult to satisfy for many practical programs. The classical information-theoretic approaches for sequential programs, where the program is modeled as a communication channel wi

  1. Parallel Patient Karyotype Information System using Multi-threads

    Directory of Open Access Journals (Sweden)

    Chantana CHANTRAPORNCHAI

    2015-09-01

    Human cytogenetic data are typical laboratory results from hospitals. A karyogram is used to show chromosome characteristics, which are written as karyotype strings. For a particular patient, there may be many karyotype records arising from several visits. These data for many patients grow increasingly large and must be stored properly for further investigation and analysis. This research introduces a hospital information system for keeping patients' karyotypes and applies a parallel method for searching required karyotypes and extracting related patient information. In particular, we exploit Node.js with multiple threads, splitting queries so that they are searched in parallel. The search method is integrated into the cytogenetic information system, which is intended for studying the karyotypes of leukemia patients.
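
    The paper splits a query across Node.js worker threads so that chunks of the record set are searched in parallel. The same idea, sketched here in Python with a thread pool purely for illustration (the patient records and pattern are invented):

      import concurrent.futures
      import re

      def search_chunk(records, pattern):
          # Return the patient records whose karyotype string matches the query.
          rx = re.compile(pattern)
          return [r for r in records if rx.search(r["karyotype"])]

      def parallel_search(records, pattern, n_workers=4):
          # Split the record list into chunks and search each chunk in its own thread.
          chunks = [records[i::n_workers] for i in range(n_workers)]
          with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as pool:
              results = pool.map(search_chunk, chunks, [pattern] * n_workers)
          return [hit for part in results for hit in part]

      records = [{"patient": "P001", "karyotype": "46,XX,t(9;22)(q34;q11)"},
                 {"patient": "P002", "karyotype": "46,XY"},
                 {"patient": "P003", "karyotype": "45,X,-Y,t(9;22)(q34;q11)"}]
      print(parallel_search(records, r"t\(9;22\)"))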

  2. A Multi-Threaded Cryptographic Pseudorandom Number Generator Test Suite

    Science.gov (United States)

    2016-09-01

    ...be a practical attack on the key. More recently, improper initialization of a PRNG led to Android digital wallets being hijacked [4]. For military ... Adopting the ... appears to exist differentiating it from random, however, is both intuitive and natural. As a result, statistical test suites have been developed which ...
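
    The record describes a statistical test suite for cryptographic PRNG output executed across multiple threads. A minimal sketch of that idea, using the NIST-style frequency (monobit) test as the example statistic and a thread pool over independent output blocks (illustrative only, not the cited suite):

      import os
      import math
      import concurrent.futures

      def monobit_p_value(bits):
          # NIST SP 800-22 style frequency (monobit) test on a 0/1 bit sequence.
          s = sum(1 if b else -1 for b in bits)
          s_obs = abs(s) / math.sqrt(len(bits))
          return math.erfc(s_obs / math.sqrt(2))

      def bytes_to_bits(data):
          return [(byte >> i) & 1 for byte in data for i in range(8)]

      def test_one_block(block_bytes=125000):
          # Each worker tests an independent block of generator output.
          return monobit_p_value(bytes_to_bits(os.urandom(block_bytes)))

      with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
          p_values = list(pool.map(lambda _: test_one_block(), range(8)))

      print(["pass" if p >= 0.01 else "fail" for p in p_values])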

  3. Solving Dense Generalized Eigenproblems on Multi-threaded Architectures

    CERN Document Server

    Aliaga, José I; Davidović, Davor; Di Napoli, Edoardo; Igual, Francisco D; Quintana-Ortí, Enrique S

    2011-01-01

    We compare two approaches to compute a portion of the spectrum of dense symmetric definite generalized eigenproblems: one is based on the reduction to tridiagonal form, and the other on the Krylov-subspace iteration. Two large-scale applications, arising in molecular dynamics and material science, are employed to investigate the contributions of the application, architecture, and parallelism of the method to the performance of the solvers. The experimental results on a state-of-the-art 8-core platform, equipped with a graphics processing unit (GPU), reveal that in real applications, iterative Krylov-subspace methods can be a competitive approach also for the solution of dense problems.
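
    The two approaches compared above (direct reduction versus Krylov-subspace iteration) for the symmetric-definite generalized problem A x = λ B x can both be exercised from SciPy. A small CPU-only sketch of the comparison (not the authors' GPU code; the matrices are random stand-ins):

      import numpy as np
      from scipy.linalg import eigh
      from scipy.sparse.linalg import eigsh

      n, k = 300, 10
      rng = np.random.default_rng(0)
      M = rng.standard_normal((n, n))
      A = (M + M.T) / 2                    # symmetric
      B = M @ M.T + n * np.eye(n)          # symmetric positive definite

      # Direct approach: reduction to a standard (tridiagonal) problem via LAPACK.
      w_direct = eigh(A, B, eigvals_only=True)[-k:]

      # Iterative approach: Krylov-subspace (Lanczos) iteration for part of the spectrum.
      w_krylov = eigsh(A, k=k, M=B, which='LA', return_eigenvectors=False)

      print(np.sort(w_direct))
      print(np.sort(w_krylov))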

  4. Multi-threaded, discrete event simulation of distributed computing systems

    Science.gov (United States)

    Legrand, Iosif; MONARC Collaboration

    2001-10-01

    The LHC experiments have envisaged computing systems of unprecedented complexity, for which it is necessary to provide a realistic description and modeling of data access patterns and of many jobs running concurrently on large-scale distributed systems while exchanging very large amounts of data. A process-oriented approach to discrete event simulation is well suited to describing various activities running concurrently, as well as the stochastic arrival patterns specific to this type of simulation. Threaded objects, or "Active Objects", provide a natural way to map the specific behaviour of distributed data processing into the simulation program. The simulation tool developed within MONARC is based on Java (TM) technology, which provides adequate tools for developing a flexible and distributed process-oriented simulation. Proper graphics tools, and ways to analyze data interactively, are essential in any simulation project. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modeling of complex data access patterns by multiple concurrent users in large-scale computing systems over a wide range of possible architectures, from centralized to highly distributed. A comparison between queuing theory and realistic client-server measurements is also presented.
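
    In a process-oriented discrete-event simulation, each concurrent activity (a job arriving, queuing for a CPU, processing data) is written as its own lightweight process. A minimal sketch of the idea using the SimPy library, not the Java-based MONARC tool itself; arrival rate, farm size and service times are invented:

      import random
      import simpy

      def job(env, name, cpu, service_time):
          arrive = env.now
          with cpu.request() as req:          # queue for a CPU slot on the farm
              yield req
              yield env.timeout(service_time) # process the data
          wait = env.now - arrive - service_time
          print(f"{name}: waited {wait:.1f}, finished at {env.now:.1f}")

      def job_source(env, cpu, rate=1.0):
          i = 0
          while True:
              yield env.timeout(random.expovariate(rate))   # stochastic arrivals
              i += 1
              env.process(job(env, f"job{i}", cpu, service_time=random.uniform(2, 6)))

      env = simpy.Environment()
      cpu = simpy.Resource(env, capacity=4)   # a small processing farm
      env.process(job_source(env, cpu))
      env.run(until=50)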

  5. A Multi-threaded Version of Field II

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2014-01-01

    in a plane of 20 x 50 mm (width x depth) with random Gaussian amplitudes were simulated using the command calc_scat. Dual Intel Xeon CPU E5-2630 2.60 GHz CPUs were used under Ubuntu Linux 10.02 and Matlab version 2013b. Each CPU holds 6 cores with hyper-threading, corresponding to a total of 24 hyper

  6. The time-resolved and extreme conditions XAS (TEXAS) facility at the European Synchrotron Radiation Facility: the general-purpose EXAFS bending-magnet beamline BM23

    Energy Technology Data Exchange (ETDEWEB)

    Mathon, O., E-mail: mathon@esrf.fr; Beteva, A.; Borrel, J.; Bugnazet, D.; Gatla, S.; Hino, R.; Kantor, I.; Mairs, T. [European Synchrotron Radiation Facility, CS 40220, 38043 Grenoble Cedex 9 (France); Munoz, M. [European Synchrotron Radiation Facility, CS 40220, 38043 Grenoble Cedex 9 (France); Université Joseph Fourier, 1381 rue de la Piscine, BP 53, 38041 Grenoble Cedex 9 (France); Pasternak, S.; Perrin, F.; Pascarelli, S. [European Synchrotron Radiation Facility, CS 40220, 38043 Grenoble Cedex 9 (France)

    2015-10-17

    BM23 is the general-purpose EXAFS bending-magnet beamline at the ESRF, replacing the former BM29 beamline in the framework of the ESRF upgrade. Its mission is to serve the whole XAS user community by providing access to a basic service in addition to the many specialized instruments available at the ESRF. BM23 offers high signal-to-noise ratio EXAFS in a large energy range (5–75 keV), continuous energy scanning for quick-EXAFS on the second timescale and a micro-XAS station delivering a spot size of 4 µm × 4 µm FWHM. It is a user-friendly facility featuring a high degree of automation, online EXAFS data reduction and a flexible sample environment.

  7. Platform independent software framework for smartphones

    OpenAIRE

    Žemaitis, Tomas

    2010-01-01

    Mobile technologies are developing very rapidly these days. The clock frequency of the processors built into smart devices has already reached 1 GHz, and their screens have become very high-resolution, touch-sensitive, and capable of very high-quality colour reproduction. Because of the many additional components built into smartphones, their range of applications keeps expanding and their popularity keeps growing. Along with the hardware, the software written for these devices is also improving. Over the last few years, as many as three ...

  8. The new versatile general purpose surface-muon instrument (GPS) based on silicon photomultipliers for μSR measurements on a continuous-wave beam

    Science.gov (United States)

    Amato, A.; Luetkens, H.; Sedlak, K.; Stoykov, A.; Scheuermann, R.; Elender, M.; Raselli, A.; Graf, D.

    2017-09-01

    We report on the design and commissioning of a new spectrometer for muon-spin relaxation/rotation studies installed at the Swiss Muon Source (SμS) of the Paul Scherrer Institute (PSI, Switzerland). This instrument is essentially a new design and replaces the old general-purpose surface-muon (GPS) instrument that has long been the workhorse of the μSR user facility at PSI. By making use of muon and positron detectors made of plastic scintillators read out by silicon photomultipliers, a time resolution of the complete instrument of about 160 ps (standard deviation) could be achieved. In addition, the absence of light guides, which are needed in traditionally built μSR instruments to deliver the scintillation light to photomultiplier tubes located outside the applied magnetic field, allowed us to design a compact instrument with a detector set covering an increased solid angle compared with the old GPS.

  9. Software and Hardware Architecture of General-Purpose Computation on GPUs

    Institute of Scientific and Technical Information of China (English)

    谢建春

    2013-01-01

    Modern GPUs are not only powerful graphics engines but also highly parallel programmable devices with massive computing performance and memory bandwidth, and they can be combined with CPUs to build complete heterogeneous processing systems. Using GPUs for computation beyond graphics is generally referred to as general-purpose computing on graphics processing units (GPGPU). This paper studies in detail the concept and classification of GPGPU, its hardware architecture and working mechanism, and its software environment and processing model, with the aim of providing a reference for the further application of GPGPU in airborne embedded computing.

  10. High Precision Thermal, Structural and Optical Analysis of an External Occulter Using a Common Model and the General Purpose Multi-Physics Analysis Tool Cielo

    Science.gov (United States)

    Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg

    2011-01-01

    The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.

  11. High Precision Thermal, Structural and Optical Analysis of an External Occulter Using a Common Model and the General Purpose Multi-Physics Analysis Tool Cielo

    Science.gov (United States)

    Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg

    2011-01-01

    The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.

  12. The design of CMOS general-purpose analog front-end circuit with tunable gain and bandwidth for biopotential signal recording systems.

    Science.gov (United States)

    Chen, Wei-Ming; Yang, Wen-Chia; Tsai, Tzung-Yun; Chiueh, Herming; Wu, Chung-Yu

    2011-01-01

    In this paper an 8-channel CMOS general-purpose analog front-end (AFE) circuit with tunable gain and bandwidth for biopotential signal recording systems is presented. The proposed AFE consists of eight chopper-stabilized pre-amplifiers, an 8-to-1 analog multiplexer, and a programmable gain amplifier. It can be used to sense and amplify different kinds of biopotential signals, such as the electrocorticogram (ECoG), electrocardiogram (ECG) and electromyogram (EMG). The AFE chip is designed and fabricated in 0.18-μm CMOS technology. The measured maximum gain of the AFE is 60.8 dB. The low cutoff frequency can be set as low as 0.8 Hz and the high cutoff frequency can be adjusted from 200 Hz to 10 kHz to suit different kinds of biopotential signals. The measured input-referred noise is 0.9 μVrms, with a power consumption of 18 μW per channel at a 1.8-V supply, and the noise efficiency factor (NEF) of the pre-amplifier is only 1.3.

  13. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physics models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with simple systems such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with the broad scanning proton beam. The influence of the customizable parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physics model, particle transport mechanics and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
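
    Comparing PDD profiles and proton ranges between codes, as described above, amounts to aligning two depth-dose curves and reading off range metrics. A small sketch of such a comparison using purely synthetic curves (not the published data; the distal 80% level is one common range definition):

      import numpy as np

      def distal_range(depth, pdd, level=80.0):
          # Depth beyond the Bragg peak at which the dose falls to `level` percent.
          peak = int(pdd.argmax())
          d, p = depth[peak:], pdd[peak:]
          return float(np.interp(level, p[::-1], d[::-1]))   # p decreases distally

      depth = np.linspace(0.0, 20.0, 401)                          # depth in water (cm)
      pdd_ref  = 100.0 * np.exp(-((depth - 15.0) / 1.0) ** 2)      # stand-in reference curve
      pdd_test = 100.0 * np.exp(-((depth - 15.1) / 1.0) ** 2)      # stand-in curve under test

      print("distal 80% range shift (cm):",
            distal_range(depth, pdd_test) - distal_range(depth, pdd_ref))
      print("max dose difference (% of peak):", float(np.max(np.abs(pdd_test - pdd_ref))))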

  14. Design of a general-purpose Chinese dialect speech database

    Institute of Scientific and Technical Information of China (English)

    高原; 顾明亮; 孙平; 王侠; 张长水

    2012-01-01

    This paper describes a general-purpose Chinese dialect speech database that can be applied to research on speaker information processing, dialect feature-word recognition, speech recognition and related fields. The database covers the seven major Chinese dialect regions and contains 106 hours of speech collected in multi-channel recording mode, which has already been preprocessed. On this basis, design criteria and an implementation scheme for Chinese dialect speech databases are proposed, which should help promote the construction of Chinese speech databases, and dialect speech databases in particular.

  15. Application of graphics processing units in general-purpose computation

    Institute of Scientific and Technical Information of China (English)

    张健; 陈瑞

    2009-01-01

    Based on the CUDA (compute unified device architecture) framework of the GPU (graphics processing unit), the principles and methods of general-purpose computation on the GPU are described. A matrix multiplication experiment was carried out on a GeForce 8800 GT. The results show that, as the matrix order increases, processing slows down on both the GPU and the CPU; however, after the data size was increased 100-fold, the computation time grew only 3.95-fold on the GPU, compared with 216.66-fold on the CPU.
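
    A scaling experiment of the kind summarised above can be scripted as follows. This is only an illustration: NumPy stands in for the CPU path and CuPy (used only if a CUDA device and the library are available) stands in for hand-written GPU code; neither is what the authors used.

      # Illustrative timing of square matrix multiplication on the CPU (NumPy) and,
      # when available, the GPU (CuPy as a stand-in for hand-written CUDA kernels).
      import time
      import numpy as np

      try:
          import cupy as cp                  # optional GPU path
      except ImportError:
          cp = None

      def time_matmul(xp, n, sync=lambda: None):
          a = xp.random.rand(n, n).astype(xp.float32)
          b = xp.random.rand(n, n).astype(xp.float32)
          sync()
          t0 = time.perf_counter()
          a @ b
          sync()                             # wait for asynchronous GPU kernels
          return time.perf_counter() - t0

      for n in (256, 512, 1024, 2048):
          line = f"n={n:5d}  CPU {time_matmul(np, n) * 1e3:8.1f} ms"
          if cp is not None:
              gpu = time_matmul(cp, n, sync=cp.cuda.Stream.null.synchronize)
              line += f"  GPU {gpu * 1e3:8.1f} ms"
          print(line)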

  16. Real-Time and Real-Fast Performance of General-Purpose and Real-Time Operating Systems in Multithreaded Physical Simulation of Complex Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Carlos Garre

    2014-01-01

    Full Text Available Physical simulation is a valuable tool in many fields of engineering for the tasks of design, prototyping, and testing. General-purpose operating systems (GPOS) are designed for real-fast tasks, such as offline simulation of complex physical models that should finish as soon as possible. Interfacing hardware at a given rate (as in a hardware-in-the-loop test) requires instead maximizing time determinism, for which real-time operating systems (RTOS) are designed. In this paper, the real-fast and real-time performance of an RTOS and a GPOS are compared when simulating models of high complexity with large time steps. This type of application is common in the automotive industry and requires a good trade-off between real-fast and real-time performance. The performance of an RTOS and a GPOS is compared by running a tire model scalable in the number of degrees of freedom and parallel threads. The benchmark shows that the GPOS presents better performance in real-fast runs but worse real-time performance, due to non-explicit task switches and to the latency associated with interprocess communication (IPC) and task switching.
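
    The real-time side of such a benchmark hinges on how far each task release slips from its ideal schedule. The sketch below measures that lateness for a fixed-rate loop on whatever operating system it runs on; it is our own illustration, not the tire-model benchmark used in the paper, and the 10 ms period is an arbitrary choice.

      # Our illustration (not the paper's benchmark): scheduling lateness of a
      # periodic task, i.e. how far each wake-up drifts from its ideal release time.
      import statistics
      import time

      def measure_lateness(period_s=0.010, cycles=500):
          start = time.perf_counter()
          lateness = []
          for i in range(1, cycles + 1):
              release = start + i * period_s             # ideal release time
              time.sleep(max(0.0, release - time.perf_counter()))
              lateness.append(time.perf_counter() - release)
          return lateness

      lat = measure_lateness()
      print(f"mean {statistics.mean(lat) * 1e6:.1f} us, "
            f"max {max(lat) * 1e6:.1f} us, stdev {statistics.pstdev(lat) * 1e6:.1f} us")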

  17. Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting.

    Science.gov (United States)

    Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J

    2016-06-01

    Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, or (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data, including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system.
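
    The weighting scheme itself is simple to sketch for a fixed prediction horizon: subjects whose status at the horizon is known are up-weighted by the inverse of the estimated probability of remaining uncensored, and the rest receive weight zero. The code below is a generic illustration using a Kaplan-Meier estimate of the censoring distribution; it is not the authors' implementation, and the toy data are invented.

      # Generic IPCW sketch (not the authors' implementation): weights for a fixed
      # prediction horizon tau, using a Kaplan-Meier estimate of the censoring
      # distribution (censoring indicators play the role of "events" here).
      import numpy as np

      def censoring_survival(times, event):
          """Return (grid, S_C) where S_C[i] is the censoring survival just after grid[i]."""
          order = np.argsort(times)
          t = np.asarray(times, float)[order]
          censored = 1 - np.asarray(event, int)[order]
          n = len(t)
          s, surv = 1.0, np.empty(n)
          for i in range(n):
              s *= 1.0 - censored[i] / (n - i)           # at-risk set shrinks by one each step
              surv[i] = s
          return t, surv

      def ipcw_weights(times, event, tau):
          times = np.asarray(times, float)
          event = np.asarray(event, int)
          grid, surv = censoring_survival(times, event)

          def s_c(q):                                    # step-function lookup of S_C(q)
              idx = np.searchsorted(grid, q, side="right") - 1
              return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, len(grid) - 1)])

          known = ((event == 1) & (times <= tau)) | (times >= tau)
          w = np.zeros(len(times))
          w[known] = 1.0 / s_c(np.minimum(times[known], tau))
          return w

      # toy data: follow-up time in years, event indicator (1 = event, 0 = censored)
      t = [1.2, 6.0, 2.5, 4.9, 5.5, 0.8, 7.0, 3.3]
      e = [1,   0,   0,   1,   0,   1,   0,   0]
      print(ipcw_weights(t, e, tau=5.0))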

  18. Investigation of inflammation and tissue patterning in the gut using a Spatially Explicit General-purpose Model of Enteric Tissue (SEGMEnT.

    Directory of Open Access Journals (Sweden)

    Chase Cockrell

    2014-03-01

    Full Text Available The mucosa of the intestinal tract represents a finely tuned system where tissue structure strongly influences, and is in turn influenced by, its function as both an absorptive surface and a defensive barrier. Mucosal architecture and histology play a key role in the diagnosis, characterization and pathophysiology of a host of gastrointestinal diseases. Inflammation is a significant factor in the pathogenesis of many gastrointestinal diseases, and is perhaps the most clinically significant control factor governing the maintenance of the mucosal architecture by morphogenic pathways. We propose that appropriate characterization of the role of inflammation as a controller of enteric mucosal tissue patterning requires understanding the underlying cellular and molecular dynamics that determine the epithelial crypt-villus architecture across a range of conditions from health to disease. Towards this end we have developed the Spatially Explicit General-purpose Model of Enteric Tissue (SEGMEnT) to dynamically represent existing knowledge of the behavior of enteric epithelial tissue as influenced by inflammation, with the ability to generate a variety of pathophysiological processes within a common platform and from a common knowledge base. In addition to reproducing healthy ileal mucosal dynamics as well as a series of morphogen knock-out/inhibition experiments, SEGMEnT provides insight into a range of clinically relevant cellular-molecular mechanisms, such as a putative role for phosphatase and tensin homolog/phosphoinositide 3-kinase (PTEN/PI3K) as a key point of crosstalk between inflammation and morphogenesis, the protective role of enterocyte sloughing in enteric ischemia-reperfusion, and chronic low-level inflammation as a driver of colonic metaplasia. These results suggest that SEGMEnT can serve as an integrating platform for the study of inflammation in gastrointestinal disease.

  19. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    Science.gov (United States)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The

  20. A feasibility study of a prototype PET insert device to convert a general-purpose animal PET scanner to higher resolution.

    Science.gov (United States)

    Wu, Heyu; Pal, Debashish; O'Sullivan, Joseph A; Tai, Yuan-Chuan

    2008-01-01

    We developed a prototype system to evaluate the feasibility of using a PET insert device to achieve higher resolution from a general-purpose animal PET scanner. The system consists of a high-resolution PET detector, a computer-controlled rotation stage, and a custom mounting plate. The detector consists of a cerium-doped lutetium oxyorthosilicate array (12 x 12 crystals, 0.8 x 1.66 x 3.75 mm(3) each) directly coupled to a position-sensitive photomultiplier tube (PS-PMT). The detector signals were fed into the scanner electronics to establish coincidences between the 2 systems. The detector was mounted to a rotation stage that is attached to the scanner via the custom mounting plate after removing the transmission source holder. The rotation stage was concentric with the center of the scanner. The angular offset of the insert detector was calibrated via optimizing point-source images. In all imaging experiments, coincidence data were collected from 9 angles to provide 180 degrees sampling. A (22)Na point source was imaged at different offsets from the center to characterize the in-plane resolution of the insert system. A (68)Ge point source was stepped across the axial field of view to measure the sensitivity of the system. A 23.2-g mouse was injected with 38.5 MBq of (18)F-fluoride and imaged at 3 h after injection for 2 h. The transverse image resolution of the PET insert device ranges from 1.1- to 1.4-mm full width at half maximum (FWHM) without correction for the point-source dimension. This corresponds to approximately 33% improvement over the resolution of the original scanner (1.7- to 1.8-mm FWHM) in 2 of the 3 directions. The sensitivity of the device is 0.064% at the center of the field, 46-fold lower than the sensitivity of an existing animal PET scanner. The mouse bone scan had improved image resolution using the PET insert device over that of the existing animal PET scanner alone. We have demonstrated the feasibility of using a high-resolution insert

  1. Research on general-purpose computing technology based on graphics processing units

    Institute of Scientific and Technical Information of China (English)

    戴长江; 张尤赛

    2013-01-01

    To study general-purpose GPU computing technology on the PC, the classic texture-mapping-based method of GPU general-purpose computation was adopted, and experiments on the discrete convolution of 2D images and on volume rendering based on 3D texture mapping were carried out. The experiments show that, given a suitable algorithm design, classic GPU general-purpose computing can significantly improve program performance. It is concluded that the CPU+GPU heterogeneous computing model can become a choice for high-performance computation, and the further development of GPU-based general-purpose computing technology is discussed.
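
    For reference, the 2-D discrete convolution used in the first experiment is easy to state on the CPU. The snippet below is only that CPU reference, using SciPy on a synthetic image with a 3x3 smoothing kernel; it does not reproduce the texture-mapping GPU implementation studied in the paper.

      # CPU reference sketch of the 2-D discrete convolution experiment; the GPU
      # texture-mapping implementation itself is not reproduced here.
      import numpy as np
      from scipy.signal import convolve2d

      image = np.random.rand(512, 512).astype(np.float32)    # synthetic image
      kernel = np.ones((3, 3), dtype=np.float32) / 9.0        # 3x3 mean filter

      smoothed = convolve2d(image, kernel, mode="same", boundary="symm")
      print(smoothed.shape, float(smoothed.mean()))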

  2. General purpose modeling languages for configuration

    DEFF Research Database (Denmark)

    Queva, Matthieu Stéphane Benoit

    In the later years, there has been an important need for companies to reduce their costs while proposing highly customized products. Indeed, today's customers demand products with lower prices, higher quality and faster delivery, but they also want products customized to match their unique needs....

  3. Can Universities Profit from General Purpose Inventions?

    DEFF Research Database (Denmark)

    Barirani, Ahmad; Beaudry, Catherine; Agard, Bruno

    2017-01-01

    The lack of control over downstream assets can hinder universities’ ability to extract rents from their inventive activities. We explore this possibility by assessing the relationship between invention generality and renewal decisions for a sample of Canadian nanotechnology patents. Our results s...

  4. Photovoltaics module interface: General purpose primers

    Science.gov (United States)

    Boerio, J.

    1985-01-01

    The interfacial chemistry established between ethylene vinyl acetate (EVA) and the aluminized back surface of commercial solar cells was observed experimentally. The technique employed is called Fourier Transform Infrared (FTIR) spectroscopy, with the infrared signal being reflected back from the aluminum surface through the EVA film. Reflection infrared (IR) spectra are given and attention is drawn to the specific IR peak at 1080/cm which forms on hydrolytic aging of the EVA/aluminum system. With this fundamental finding, and the workable experimental techniques, candidate silane coupling agents are employed at the interface, and their effects on eliminating or slowing hydrolytic aging of the EVA/aluminum interface are monitored.

  5. Evaluation of Prototype General Purpose Visor Concepts

    Science.gov (United States)

    2006-03-01

    Protective equipment concept in two parts: ballistic eyewear to protect the eyes against low-energy fragments, particles, laser, solar and UV radiation, and a ballistic visor to protect the eyes and face from high-energy fragments. A Human...

  6. A General Purpose Ionospheric Ray Tracing Procedure

    Science.gov (United States)

    1993-08-01

    [Report cover residue: Surveillance Research Laboratory, High Frequency Radar Division, DSTO, Australia.] Fragment of the ray-tracing input documentation: tol = tolerance (km) at each step of raytracing (a value of 1.d-6 is sufficient in most cases); CHARACTER cha - 'y' if magnetic fields

  7. General Purpose Ground Forces: What Purpose?

    Science.gov (United States)

    1993-04-06

    [Residue of a force-structure chart listing peacekeeping contingency and strategic reserve allocations of active and reserve Army divisions.] ...designed to perform traditional domestic missions and those overseas humanitarian and peacekeeping assignments that carry little risk of combat

  8. State of the art and future research on general-purpose computation with graphics processing units

    Institute of Scientific and Technical Information of China (English)

    陈庆奎; 王海峰; 那丽春; 霍欢; 郝聚涛; 刘伯成

    2012-01-01

    General-purpose computation on graphics processing units (GPGPU) became a new research focus around 2004 and has developed rapidly in recent years. Starting from the changes in GPU hardware architecture and the development of its software technology for general-purpose computation, this survey reviews the research results and latest developments in the main GPGPU application areas. As the volume of data in these applications keeps growing, a single GPU node runs into hardware limits it cannot overcome, which has led to multi-GPU computing and GPU-cluster solutions. The progress and application technologies of general-purpose GPU clusters are discussed in detail, including the problem of hardware heterogeneity in GPU clusters and three research trends in software frameworks; the features and shortcomings of several typical frameworks (Glift, Zippy and CUDASA) are analysed. Finally, the open problems and future challenges in GPU general-purpose computing are summarised.

  9. Synthesis of [1-{sup 11}C]octanoic acid, [{sup 11}C]raclopride and [{sup 11}C]nicergoline with a general-purpose automated synthesis apparatus of {sup 11}C-labeled radiopharmaceuticals

    Energy Technology Data Exchange (ETDEWEB)

    Yajima, Kazuyosi; Kawashima, Hidefumi; Cui, Ying-she; Hashimoto, Naoto; Miyake, Yoshihiro [National Cardiovascular Center, Suita, Osaka (Japan)

    1997-06-01

    We have developed a general-purpose automated synthesis apparatus of {sup 11}C-labeled radiopharmaceuticals for PET, which can be adapted to both one-pot and two-or-more-pot reactions. The features of the apparatus were shown in the successful preparation of [(1-{sup 11})C]octanoic acid in a one-pot reaction and [{sup 11}C]raclopride and [{sup 11}C]nicergoline in two-pot reactions, the latter being a novel radiopharmaceutical. (author).

  10. Study on the Modification of High-Polymer Materials with a General-Purpose Antistatic Agent

    Institute of Scientific and Technical Information of China (English)

    张玉广; 刘生满; 黄犇犇; 张景昌

    2009-01-01

    This paper describes the polymerization process and mechanism of a general-purpose antistatic agent, analysing how the reaction temperature, the degree of vacuum and the reaction time affect polymerization quality, and how the material ratio affects the antistatic performance and wash resistance. The modification processes for high-polymer materials and polyester-cotton fabrics are also discussed. Experiments show that, after treatment with the general-purpose antistatic agent, the surface resistance of polyester-cotton fabric is still 3.1 × 10^8 Ω and that of polyester gabardine 5.5 × 10^8 Ω after 50 standard wash cycles, with a charge half-life of less than 0.5 s in both cases.

  11. Platform Independent Launch Vehicle Avionics with GPS Metric Tracking Project

    Data.gov (United States)

    National Aeronautics and Space Administration — For this award, Tyvak proposes to develop a complete suite of avionics for a Nano-Launch Vehicle (NLV) based on the architecture determinations performed during...

  12. Towards Platform Independent Database Modelling in Enterprise Systems

    OpenAIRE

    Ellison, Martyn Holland; Calinescu, Radu; Paige, Richard F.

    2016-01-01

    Enterprise software systems are prevalent in many organisations, typically they are data-intensive and manage customer, sales, or other important data. When an enterprise system needs to be modernised or migrated (e.g. to the cloud) it is necessary to understand the structure of this data and how it is used. We have developed a tool-supported approach to model database structure, query patterns, and growth patterns. Compared to existing work, our tool offers increased system support and exten...

  13. Research on a software platform for a general-purpose fully digital DC PWM speed regulation system based on a DSP controller

    Institute of Scientific and Technical Information of China (English)

    刘龙江; 边鑫

    2012-01-01

    To address the respective shortcomings of existing voltage-closed-loop and current-closed-loop DC speed regulation systems, their structure is first improved and a voltage/current double-closed-loop DC speed regulation system is analysed. A general-purpose motor speed regulation system based on a DSP controller is then designed and applied to motor control in an electric vehicle drive system. Experimental results show that the system operates reliably and stably.
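
    The double closed-loop structure can be pictured as two cascaded PI controllers: the outer voltage loop produces the current reference tracked by the inner current loop, whose output drives the PWM duty cycle. The first-order plant model and every gain in the sketch below are placeholder values chosen only so that the toy loop converges; they are not the paper's design.

      # Sketch of a voltage/current double closed-loop DC drive. The plant model
      # and all gains are placeholders for illustration, not the paper's design.
      class PI:
          def __init__(self, kp, ki, limit):
              self.kp, self.ki, self.limit = kp, ki, limit
              self.integral = 0.0

          def step(self, error, dt):
              unsat = self.kp * error + self.ki * self.integral
              if -self.limit < unsat < self.limit:       # anti-windup: freeze integral when saturated
                  self.integral += error * dt
              out = self.kp * error + self.ki * self.integral
              return max(-self.limit, min(self.limit, out))

      dt = 1e-4
      outer = PI(kp=0.5, ki=5.0, limit=10.0)     # voltage loop -> current reference (A)
      inner = PI(kp=0.02, ki=2.0, limit=1.0)     # current loop -> PWM duty cycle

      voltage, current, v_ref = 0.0, 0.0, 24.0
      for _ in range(100000):                    # 10 s of simulated time
          i_ref = outer.step(v_ref - voltage, dt)
          duty = inner.step(i_ref - current, dt)
          current += dt * (240.0 * duty - 10.0 * current)   # crude armature model
          voltage += dt * (5.0 * current - voltage)         # crude output filter model

      print(f"output voltage after 10 s: {voltage:.2f} V (setpoint {v_ref:.1f} V)")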

  14. Shadow-Bitcoin: Scalable Simulation via Direct Execution of Multi-Threaded Applications

    Science.gov (United States)

    2015-08-10

    getaddr.bitnodes.io/, which performs daily crawls of the network, recent versions of Satoshi account for 83% of the reachable nodes. BitcoinJ is likely... models do not account for the observed network structure [33]. However, we stress that our primary goal is to demonstrate the flexibility we have in... precisely model the real network. Providing initial blockchain state: each node in the Bitcoin network typically maintains its own copy of the entire

  15. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment is currently observing proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~10^33 cm^-2 s^-1. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of ~200 Hz for an event size of ~1.5 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the results into different raw data files according to the trigger decision. The data files are subsequently moved to the central mass storage facility at CERN. The system currently in production was commissioned in 2007 and has been working smoothly since then. It is however based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size that is foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limi...
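
    As an illustration of the multi-threaded direction described here, the sketch below shows the generic producer/consumer pattern behind such a data logger: worker threads pull events from a shared queue and append them to per-stream files selected by the trigger decision. It is a toy sketch of the pattern only (stream names, file names and event format are invented), not the ATLAS implementation.

      # Generic sketch (not the ATLAS code) of a multi-threaded event logger: worker
      # threads take events from a queue and append them to a file chosen by the
      # trigger decision ("stream").
      import json
      import queue
      import threading

      event_queue = queue.Queue(maxsize=1000)
      file_locks = {s: threading.Lock() for s in ("physics", "calibration", "debug")}

      def writer():
          while True:
              event = event_queue.get()
              if event is None:                      # poison pill -> shut down
                  event_queue.task_done()
                  return
              stream = event["trigger_decision"]
              with file_locks[stream]:               # one writer per file at a time
                  with open(f"{stream}.raw", "a") as f:
                      f.write(json.dumps(event) + "\n")
              event_queue.task_done()

      workers = [threading.Thread(target=writer) for _ in range(4)]
      for w in workers:
          w.start()

      # toy producer standing in for the third-level trigger output
      for i in range(100):
          event_queue.put({"id": i, "trigger_decision": ("physics", "calibration", "debug")[i % 3]})

      for _ in workers:
          event_queue.put(None)
      for w in workers:
          w.join()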

  16. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment observes proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~ 10^33 cm^-2 s^-1 in 2011. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted average rate of ~ 400 Hz for an event size of ~1.2 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the data into different raw files according to the trigger decision. The system currently in production is based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limits the possibility of performing additional CPU-intensive tasks. Therefore, a novel design able to exploit the full power of multi-core architecture is needed. The main challen...

  17. MT-ADRES: multi-threading on coarse-grained reconfigurable architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan

    2008-01-01

    in multiple smaller arrays that can execute threads in parallel. Because the partition can be changed dynamically, this extension provides more flexibility than a multi-core approach. This article presents details of the enhanced architecture and results obtained from an MPEG-2 decoder implementation...

  18. Permission-based separation logic for multi-threaded Java programs

    NARCIS (Netherlands)

    Amighi, A.; Haack, Christian; Huisman, Marieke; Hurlin, C.

    2015-01-01

    This paper presents a program logic for reasoning about multithreaded Java-like programs with concurrency primitives such as dynamic thread creation, thread joining and reentrant object monitors. The logic is based on concurrent separation logic. It is the first detailed adaptation of concurrent sep

  19. FODEM: A Multi-Threaded Research and Development Method for Educational Technology

    Science.gov (United States)

    Suhonen, Jarkko; de Villiers, M. Ruth; Sutinen, Erkki

    2012-01-01

    Formative development method (FODEM) is a multithreaded design approach that was originated to support the design and development of various types of educational technology innovations, such as learning tools, and online study programmes. The threaded and agile structure of the approach provides flexibility to the design process. Intensive…

  20. Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...

  1. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
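
    A software analogue of the idea is easy to state: run the same computation several times on the same input, in parallel, and accept the output only if all instances agree. The sketch below does this with ordinary threads; it is our illustration of the verification step, not the patented hardware mechanism inside a GPU.

      # Software analogue (our sketch, not the hardware mechanism) of redundant
      # multi-threading: run identical instances in parallel and verify agreement.
      import threading

      def computation(load):
          # stand-in workload; each redundant instance receives the same load
          return sum(x * x for x in load)

      def run_redundant(load, copies=3):
          results = [None] * copies

          def worker(i):
              results[i] = computation(load)

          threads = [threading.Thread(target=worker, args=(i,)) for i in range(copies)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          if len(set(results)) != 1:
              raise RuntimeError(f"redundant instances disagree: {results}")
          return results[0]

      print(run_redundant(range(1_000_000)))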

  2. Qualitative and quantitative information flow analysis for multi-thread programs

    NARCIS (Netherlands)

    Ngo, Tri Minh

    2014-01-01

    In today's information-based society, guaranteeing information security plays an important role in all aspects of life: communication between citizens and governments, military, companies, financial information systems, web-based services etc. With the increasing popularity of computer systems with

  3. Permission-based separation logic for multi-threaded Java programs

    NARCIS (Netherlands)

    Amighi, A.; Haack, Christian; Huisman, Marieke; Hurlin, C.

    This paper presents a program logic for reasoning about multithreaded Java-like programs with concurrency primitives such as dynamic thread creation, thread joining and reentrant object monitors. The logic is based on concurrent separation logic. It is the first detailed adaptation of concurrent

  4. Investigating multi-thread utilization as a software defence mechanism against side channel attacks

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-11-01

    Full Text Available as random precharging, masking, hiding and shuffling. Random precharging can be carried out at a software level by flooding the datapath with a random operand instruction before and after an important value is used [21]. A well known approach to defend...

  5. LUNA: Hard Real-Time, Multi-Threaded, CSP-Capable Execution Framework

    NARCIS (Netherlands)

    Bezemer, M.M.; Wilterdink, R.J.W.; Broenink, J.F.; Welch, Peter H.; Sampson, Adam T.; Pedersen, Jan B.; Kerridge, Jon M.; Broenink, Jan F.; Barnes, Frederick R.M.

    2011-01-01

    Modern embedded systems have multiple cores available. The CTC++ library is not able to make use of these cores, so a new framework is required to control the robotic setups in our lab. This paper first looks into the available frameworks and compares them to the requirements for controlling the set

  6. A multi-threaded approach to using asynchronous C libraries with Java

    Science.gov (United States)

    Gates, John; Deich, William

    2014-07-01

    It is very common to write device drivers and code that access low-level operating system functions in C or C++. There are also many powerful C and C++ libraries available for a variety of tasks. Java is a programming language that is meant to be system independent and is arguably much simpler to code than C/C++. However, Java has minimal support for talking to native libraries, which results in interesting challenges when using C/C++ libraries with Java code. Part of the problem is that Java's standard mechanism for communicating with C libraries, the Java Native Interface, requires a significant amount of effort to do fairly simple things, such as copy structure data from C to a class in Java. This is largely solved by using the Java Native Access library, which provides a reasonable way of transferring data between C structures and Java classes and calling C functions from Java. A more serious issue is that there is no mechanism for a C/C++ library loaded by a Java program to call a Java function in the Java program, as this is a major issue with any library that uses callback functions. A solution to this problem was found using a moderate amount of C code and multiple threads in Java. The Keck Task Language API (KTL) is used as a primary means of inter-process communication at Keck and Lick Observatory. KTL is implemented in a series of C libraries and uses callback functions for asynchronous communication. It is a good demonstration of how to use a C library within a Java program.

  7. SISSY: An example of a multi-threaded, networked, object-oriented databased application

    Energy Technology Data Exchange (ETDEWEB)

    Scipioni, B.; Liu, D.; Song, T.

    1993-05-01

    The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.

  8. Parallel Algorithms Based on General-Purpose GPU Computing and the Implementation of a Calculation Framework

    Institute of Scientific and Technical Information of China (English)

    朱宇兰

    2016-01-01

    General-purpose computing on the GPU (GPGPU) is a field that has developed rapidly in recent years; its powerful parallel processing capability offers an excellent solution for data-intensive, single-instruction computation, although its raw performance growth is constrained by chip fabrication limits. Starting from the foundation of GPGPU, the graphics API, this paper analyses the characteristics and the computational process of GPU parallel algorithms and abstracts from them a parallel computing framework. A compute-intensive case study demonstrates how the framework is used, and a comparison with the traditional implementation of GPU general-purpose computation shows that the framework yields more concise code and is independent of graphics-specific concepts.

  9. A Review of Foreign Research on General Purpose Technologies and Economic Growth

    Institute of Scientific and Technical Information of China (English)

    潘维军

    2012-01-01

    Technological progress is the source of economic growth, but previous studies have focused only on small, incremental technological progress, which limits the explanatory power of economic growth theory. General purpose technology (GPT) theory is a growth theory developed in the field of economic growth since the 1990s. Unlike traditional growth theory, it focuses on major technological advances that are widely applicable and can drive technological progress in other sectors. Economists have studied GPTs using game theory, DGE models and other methods, and have applied the theory to other fields, offering new and quite persuasive explanations for economic stagnation and fluctuation, wage inequality and related issues. Reviewing the GPT literature, this paper summarises the achievements of GPT research and its applications in different fields and, on that basis, points out the theoretical problems that remain and directions for further development.

  10. 7 CFR 271.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ..., in part: Congress hereby finds that the limited food purchasing power of low-income households... households to obtain a more nutritious diet through normal channels of trade by increasing food purchasing power for all eligible households who apply for participation. (b) Scope of the regulations. Part 271...

  11. General purpose flow solver applied to flow over hills

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, N.N.

    1995-09-01

    The present report describes the development of a 2D and 3D finite-volume code in general curvilinear coordinates using the Basis 2D/3D platform by Michelsen. The codes are based on the Reynolds-averaged incompressible isothermal Navier-Stokes equations and use primitive variables (U, V, W and P). The turbulence is modelled by the high-Reynolds-number k-ε model. Cartesian velocity components are used in a non-staggered arrangement following the methodology of Rhie. The equation system is solved using the SIMPLE method of Patankar and Spalding. Solution of the transport equations is obtained by successive application of a TDMA solver in alternating directions. The solution of the pressure correction equation is accelerated using the multigrid tools from the Basis 2D/3D platform. Additionally, a three-level grid sequence is implemented in order to minimize the overall solution time. Higher-order schemes (SUDS and QUICK) are implemented as explicit corrections to a first-order upwind difference scheme. In both the 2D and the 3D code it is possible to handle multiblock configurations. This feature is added in order to obtain greater geometric flexibility. To mesh natural terrain in connection with atmospheric flow over complex terrain, a two- and a three-dimensional hyperbolic mesh generator are constructed. Additionally, a two- and a three-dimensional mesh generator based on a simple version of the transfinite interpolation technique are implemented. Several two-dimensional test cases are calculated, e.g. laminar flow over a circular cylinder, turbulent channel flow, and turbulent flow over a backward-facing step, all with satisfactory results. In order to illustrate the application of the codes to atmospheric flow, two cases are calculated: flow over a cube in a thick turbulent boundary layer, and the atmospheric flow over the Askervein hill. (au) 13 tabs., 75 ills., 66 refs.
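
    The line-by-line sweeps mentioned above rest on the classical TDMA (Thomas) algorithm for tridiagonal systems. A reference version is sketched below and checked against a dense solve; it is a generic illustration, not the report's own solver code.

      # Reference sketch of the TDMA (Thomas algorithm) used in the alternating-
      # direction sweeps described above (not the report's own code).
      import numpy as np

      def tdma(lower, diag, upper, rhs):
          """Solve a tridiagonal system; lower[0] and upper[-1] are unused."""
          n = len(diag)
          c, d = np.zeros(n), np.zeros(n)
          c[0] = upper[0] / diag[0]
          d[0] = rhs[0] / diag[0]
          for i in range(1, n):                         # forward elimination
              denom = diag[i] - lower[i] * c[i - 1]
              c[i] = upper[i] / denom if i < n - 1 else 0.0
              d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
          x = np.zeros(n)
          x[-1] = d[-1]
          for i in range(n - 2, -1, -1):                # back substitution
              x[i] = d[i] - c[i] * x[i + 1]
          return x

      # quick check against a dense solve
      n = 6
      lower = np.full(n, -1.0)
      diag = np.full(n, 4.0)
      upper = np.full(n, -1.0)
      rhs = np.arange(1.0, n + 1.0)
      A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
      print(np.allclose(tdma(lower, diag, upper, rhs), np.linalg.solve(A, rhs)))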

  12. [General-purpose microcomputer for medical laboratory instruments].

    Science.gov (United States)

    Vil'ner, G A; Dudareva, I E; Kurochkin, V E; Opalev, A A; Polek, A M

    1984-01-01

    Presented in the paper is the microcomputer based on the KP580 microprocessor set. Debugging of the hardware and the software by using the unique debugging stand developed on the basis of microcomputer "Electronica-60" is discussed.

  13. GASP: A general-purpose program for environmental alpha spectra

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, A.M.; Tome, F.V.; Vargas, M.J. (Dept. de Fisica, Univ. de Extremadura, Badajoz (Spain))

    1992-02-01

    A computer program to study general environmental alpha-particle emission spectra obtained by using semiconductor detectors is described. Each alpha-emitting nuclide is analysed following a method which is suited to its case. Low-energy tail and branching-ratio corrections are included so that the area corresponding to each nuclide in the spectrum is obtained separately. Calculations are not iterative, and so the program is economic in its use of computer time and memory. (orig.).

  14. DYNSYL: a general-purpose dynamic simulator for chemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, G.K.; Rozsa, R.B.

    1978-09-05

    Lawrence Livermore Laboratory is conducting a safeguards program for the Nuclear Regulatory Commission. The goal of the Material Control Project of this program is to evaluate material control and accounting (MCA) methods in plants that handle special nuclear material (SNM). To this end we designed and implemented the dynamic chemical plant simulation program DYNSYL. This program can be used to generate process data or to provide estimates of process performance; it simulates both steady-state and dynamic behavior. The MCA methods that may have to be evaluated range from sophisticated on-line material trackers such as Kalman filter estimators, to relatively simple material balance procedures. This report describes the overall structure of DYNSYL and includes some example problems. The code is still in the experimental stage and revision is continuing.

  15. Building a General Purpose Beowulf Cluster for Astrophysics Research

    Science.gov (United States)

    Phelps, M. W. L.

    2005-12-01

    The challenges of designing and deploying a high performance, Linux based, Beowulf cluster for use by many departments and projects are covered. Considerations include hardware, infrastructure (space, cooling, networking, etc.), and software; particularly scheduling systems.

  16. A general purpose characterization system for rooftop hybrid microconcentrators

    Science.gov (United States)

    Middleton, Robert; Jones, Christopher; Thomsen, Elizabeth; Diez, Vicente Munoz; Harvey, J.; Everett, Vernie; Blakers, Andrew

    2014-09-01

    A versatile characterization system for hybrid thermal and photovoltaic solar receivers is presented and demonstrated. The characterization of the thermal loss and effective area of a novel hybrid receiver is presented.

  17. GPMIMD2: General Purpose Multiple Instruction Multiple Data Machines 2

    CERN Document Server

    TexLab Media production

    1996-01-01

    An initiative within the European ESPRIT III Programme. Demonstration of a European scalable parallel supercomputer in production environments. A scalable parallel computer based on European High Performance Computing (HPC) technology has been installed in the CERN Computer Centre since July 1994. The initiative to support this development came from the European Union's (EU) Esprit Programme (European Strategic Programme for Research and Development in Information Technology). CERN was the lead partner and co-ordinator of this project. Other partners were: CERFACS, Meiko, Parsys, Telmat Multinode, and Alenia Spazio / QSW. The GPMIMD2 project started in March 1993 and terminated at the end of August 1996.

  18. GPUs: An Emerging Platform for General-Purpose Computation

    Science.gov (United States)

    2007-08-01

    generations from NVIDIA and ATI (the major players in this part of the market) are expected to support 64-bit floating-point precision (9, 10). The... [fragment of an example code listing: get the answer as a 1x1 array: float_pi = Pi.read_scalar(); // convert answer to a simple float; printf("Value of Pi = %f\...]

  19. 7 CFR 1485.10 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... the Market Access Program (MAP), and a subcomponent of that program, the Export Incentive Program... develop, maintain or expand commercial export markets for U.S. agricultural commodities and products. MAP... one entity gains an undue advantage. The MAP and EIP/MAP are administered by personnel of the...

  20. 7 CFR 227.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... carry out a nutrition information and education program through a system of grants to State agencies to provide for (a) the nutritional training of educational and foodservice personnel, (b) the...

  1. 7 CFR 249.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS SENIOR FARMERS' MARKET NUTRITION PROGRAM (SFMNP) General § 249.1 General... carry out the Senior Farmers' Market Nutrition Program (SFMNP). The purposes of the SFMNP are to:...

  2. General Purpose Data-Driven System Monitoring for Space Operations

    Data.gov (United States)

    National Aeronautics and Space Administration — Modern space propulsion and exploration system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using...

  3. 7 CFR 248.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... Special Supplemental Nutrition Program for Women, Infants and Children (WIC) or are on the waiting list... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS WIC FARMERS' MARKET NUTRITION PROGRAM (FMNP) General § 248.1...

  4. 7 CFR 250.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... Supplemental Food Program, the Special Supplemental Nutrition Program for Women, Infants, and Children, the... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF... Department by Federal, State and private agencies for use in any State in child nutrition programs,...

  5. 7 CFR 246.1 - General purpose and scope.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS SPECIAL SUPPLEMENTAL NUTRITION PROGRAM FOR WOMEN, INFANTS AND CHILDREN... Agriculture shall carry out the Special Supplemental Nutrition Program for Women, Infants and Children...

  6. Large General Purpose Frame for Studying Force Vectors

    Science.gov (United States)

    Heid, Christy; Rampolla, Donald

    2011-01-01

    Many illustrations and problems on the vector nature of forces have weights and forces in a vertical plane. One of the common devices for studying the vector nature of forces is a horizontal "force table," in which forces are produced by weights hanging vertically and transmitted to cords in a horizontal plane. Because some students have…

  7. ArrayD: A general purpose software for Microarray design

    Directory of Open Access Journals (Sweden)

    Sharma Vineet K

    2004-10-01

    Full Text Available Abstract Background Microarray is a high-throughput technology to study expression of thousands of genes in parallel. A critical aspect of microarray production is the design aimed at space optimization while maximizing the number of gene probes and their replicates to be spotted. Results We have developed a software called 'ArrayD' that offers various alternative design solutions for an array given a set of user requirements. The user feeds the following inputs: type of source plates to be used, number of gene probes to be printed, number of replicates and number of pins to be used for printing. The solutions are stored in a text file. The choice of a design solution to be used will be governed by the spotting chemistry to be used and the accuracy of the robot. Conclusions ArrayD is a software for standard cartesian robots. The software aids users in preparing a judicious and elegant design. ArrayD is universally applicable and is available at http://www.igib.res.in/scientists/arrayd/arrayd.html.

  8. General Purpose Segmentation for Microorganisms in Microscopy Images

    DEFF Research Database (Denmark)

    Jensen, Sebastian H. Nesgaard; Moeslund, Thomas B.; Rankl, Christian

    2014-01-01

    In this paper, we propose an approach for achieving generalized segmentation of microorganisms in microscopy images. It employs a pixel-wise classification strategy based on local features. Multilayer perceptrons are utilized for classification of the local features and are trained for each specific segmentation problem using supervised learning. This approach was tested on five different segmentation problems in bright field, differential interference contrast, fluorescence and laser confocal scanning microscopy. In all instances good results were achieved with the segmentation quality...

  9. RoboCon: A general purpose telerobotic control center

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.; Noakes, M.W. [Oak Ridge National Lab., TN (United States). Robotics and Process Systems Div.; Schempf, H. [Carnegie Mellon Univ., Pittsburgh, PA (United States); Blair, L.M. [Human Machine Interfaces, Inc., Knoxville, TN (United States)

    1997-02-01

    This report describes human factors issues involved in the design of RoboCon, a multi-purpose control center for use in US Department of Energy remote handling applications. RoboCon is intended to be a flexible, modular control center capable of supporting a wide variety of robotic devices.

  10. PROSOFT: a general purpose software in protein chemistry.

    Science.gov (United States)

    Petrilli, P

    1988-04-01

    Applesoft and 6502 Assembler software was designed to quickly perform operations commonly encountered in protein chemistry. It was not designed for a specific application but can be conveniently used to speed up the determination of protein primary structure.

  11. standalone general purpose data logger design and implementation ...

    African Journals Online (AJOL)

    eobe

    The circuit takes an input range of 15–30 V DC; in addition, an in-built 9 V rechargeable battery provides backup power in the absence of an external ..... chip has a size of 256 Kbits, which is equal to 32,768 bytes (32 KB).

  12. Managing RFID Sensors Networks with a General Purpose RFID Middleware

    Science.gov (United States)

    Abad, Ismael; Cerrada, Carlos; Cerrada, Jose A.; Heradio, Rubén; Valero, Enrique

    2012-01-01

    RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from System and Data Acquisition (SCADA) systems into the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc…) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID Reader networks and to describe the relationship between them (concentrator, peer to peer, master/submaster). PMID:22969370

  13. General purpose photoneutron production in MCNP4A

    Energy Technology Data Exchange (ETDEWEB)

    Gallmeier, F.X.

    1995-08-01

    A photoneutron production option was implemented in the MCNP4A code, mainly to supply a tool for reactor shielding calculations in beryllium and heavy water environments of complicated three-dimensional geometries. Photoneutron production cross sections for deuterium and beryllium were created. Subroutines were developed to calculate the probability of photoneutron production at photon collision sites and the energy and flight direction of the created photoneutrons. These subroutines were implemented into MCNP4A. Some small program changes were necessary for processing the input to read the photoneutron production cross sections and to install a photoneutron switch. Some arrays were installed or extended to sample photoneutron creation and loss information, and output routines were changed to give the appropriate summary tables. To verify and validate the photoneutron production data and the MCNP4A implementations, the yields of photoneutron sources were calculated and compared with experiments. In the case of deuterium-based photoneutron sources, the calculations agreed well with the experiments; the beryllium-based photoneutron source calculations were up to 30% higher compared with the measurements. More accurate beryllium photoneutron cross sections would be desirable. To apply the developed method to a real shielding problem, the fast neutron fluxes in the heavy-water-filled reflector vessel of the Advanced Neutron Source reactor were investigated and compared with published DORT calculations. Considering the complete independence between the calculations, the merely 10 to 20% lower fluxes obtained with MCNP4A, compared against the DORT results, were more than satisfactory, as the discrepancy is based primarily on differences in the calculated thermal neutron fluxes.

  15. Recent technical advances in general purpose mobile Satcom aviation terminals

    Science.gov (United States)

    Sydor, John T.

    1990-01-01

    A second general aviation amplitude companded single sideband (ACSSB) aeronautical terminal was developed for use with the Ontario Air Ambulance Service (OAAS). This terminal is designed to have automatic call set up and take down and to interface with the Public Service Telephone Network (PSTN) through a ground earth station hub controller. The terminal has integrated RF and microprocessor hardware which allows such functions as beam steering and automatic frequency control to be software controlled. The terminal uses a conformal patch array system to provide almost full azimuthal coverage. Antenna beam steering is executed without relying on aircraft supplied orientation information.

  16. Evaluation of the NASTRAN General Purpose Computer Program.

    Science.gov (United States)

    1980-08-01

    element, the bending behavior using the quintic transverse displacement TRPLT1 element, and membrane-bending coupling using Novozhilov shallow shell theory. This... of shallow shell theory. 7.3.3 Evaluation Results: Although Narayanaswami presented two numerical examples (spherical cap, Scordelis-Lo cylindrical

  17. Managing RFID Sensors Networks with a General Purpose RFID Middleware

    Directory of Open Access Journals (Sweden)

    Enrique Valero

    2012-06-01

    Full Text Available RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from System and Data Acquisition (SCADA) systems into the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc…) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID Reader networks and to describe the relationship between them (concentrator, peer to peer, master/submaster).

  18. Dynamic Transparent General Purpose Process Migration for Linux

    Directory of Open Access Journals (Sweden)

    Amirreza Zarrabi

    2013-01-01

    Full Text Available Process migration refers to the act of transferring a process in the middle of its execution from one machine to another in a network. In this paper, we propose a process migration framework for Linux OS. It is a multilayer architecture that confines each functionally independent section of the system to a separate layer. This architecture is capable of supporting diverse applications owing to its generic user-space interface and a dynamic structure that can be modified according to demand.

  19. PD5: a general purpose library for primer design software.

    Science.gov (United States)

    Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda

    2013-01-01

    Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third party applications and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantee access to source-code and allow redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.

  20. A General Purpose Digital System for Field Vibration Testing

    DEFF Research Database (Denmark)

    Brincker, Rune; Larsen, Jesper Abildgaard; Ventura, Carlos

    2007-01-01

    This paper describes the development and concept implementation of a highly sensitive digital recording system for seismic applications and vibration measurements on large Civil Engineering structures. The system is based on highly sensitive motion transducers that have been used by seismologists...... and geophysicists for decades. The conventional geophone's ratio of cost to performance, including noise, linearity and dynamic range is unmatched by advanced modern accelerometers. The unit comprises six independent sensor elements that can be used in two different configurations for noise reduction and extended...

  1. JUNO: a General Purpose Experiment for Neutrino Physics

    CERN Document Server

    Grassi, Marco

    2016-01-01

    JUNO is a 20 kt Liquid Scintillator Antineutrino Detector currently under construction in the south of China. This report reviews JUNO's physics programme related to all neutrino sources but reactor antineutrinos, namely neutrinos from supernova burst, solar neutrinos and geoneutrinos.

  2. Determining PACAF Transportation Alternatives to the General Purpose Vehicle

    Science.gov (United States)

    2005-03-01

    equipped with specified headlamps, stop lamps, turn signal lamps, reflex reflectors, parking brakes, rear view mirrors, windshields, seat belts, and...Lighting: Quartz-halogen headlamps, front and rear turn signals , high-mount rear brake and taillamps with a 20 second safety delay after vehicle is

  3. Use of Java Multi-Threaded Programs in Web Projects%Web项目中Java多线程的使用

    Institute of Scientific and Technical Information of China (English)

    刘刚

    2012-01-01

    This paper introduces the use of multi-threaded programs on the Java platform. Taking the development of a Java web application as an example, it discusses the development process for multi-threaded programs and provides key sample code, offering practical value and a useful reference for related project work.

  4. Performance Analysis of MTD64, our Tiny Multi-Threaded DNS64 Server Implementation: Proof of Concept

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-07-01

    In this paper, the performance of MTD64 is measured and compared to that of the industry-standard BIND in order to check the correctness of the design concepts of MTD64, especially the decision to use a new thread for each request. For the performance measurements, our earlier proposed dns64perf program is enhanced as dns64perf2, which is also documented in this paper. We found that MTD64 seriously outperformed BIND, and hence our design principles may be useful for the design of a high-performance, production-class DNS64 server. As an additional test, we have also examined the effect of dynamic CPU frequency scaling on the performance of the implementations.
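
    The thread-per-request principle referred to above can be sketched in a few lines of C++ (a generic illustration of the design decision, not the MTD64 source):

        #include <chrono>
        #include <iostream>
        #include <string>
        #include <thread>
        #include <vector>

        // Each incoming query is handed to a freshly created, detached worker thread.
        void handleRequest(std::string request) {
            // ... parse the DNS query, synthesise the AAAA answer, send the response ...
            std::cout << "handled: " << request << '\n';
        }

        int main() {
            std::vector<std::string> incoming = {"q1", "q2", "q3"};  // stand-in for a receive loop
            for (const auto& req : incoming)
                std::thread(handleRequest, req).detach();            // one new thread per request
            std::this_thread::sleep_for(std::chrono::milliseconds(100));  // crude wait, demo only
        }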

  5. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Direction de l' Energie Nucleaire, Service d' Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)

    2003-07-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, either using Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization; and finally conclude with some future perspectives. Parallel applications are mandatory for fine mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMPs cluster potential with a mixed mode parallelism. Mixed mode parallelism can be achieved by combining message passing interface between clusters with OpenMP implicit parallelism within a cluster.
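
    The directive-based thread parallelism described above can be illustrated with a generic OpenMP loop in C++ (the actual MINOS solver is part of CRONOS2 and is not reproduced here):

        #include <cstdio>
        #include <vector>

        int main() {
            const long n = 1000000;
            std::vector<double> flux(n, 1.0), source(n, 0.5);
            double norm = 0.0;

            // Iterations are distributed over the available threads; the partial
            // sums of 'norm' are combined by the reduction clause.
        #pragma omp parallel for reduction(+ : norm)
            for (long i = 0; i < n; ++i) {
                flux[i] = 0.5 * (flux[i] + source[i]);  // stand-in for one solver sweep
                norm += flux[i] * flux[i];
            }
            std::printf("norm = %f\n", norm);
        }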

  6. Co-Modeling and Co-Synthesis of Safety-Critical Multi-Threaded Embedded Software for Multi-Core Embedded Platforms

    Science.gov (United States)

    2017-03-20


  7. A Platform Independent Game Technology Model for Model Driven Serious Games Development

    Science.gov (United States)

    Tang, Stephen; Hanneghan, Martin; Carter, Christopher

    2013-01-01

    Game-based learning (GBL) combines pedagogy and interactive entertainment to create a virtual learning environment in an effort to motivate and regain the interest of a new generation of "digital native" learners. However, this approach is impeded by the limited availability of suitable "serious" games and high-level design…

  8. Musrfit: A Free Platform-Independent Framework for μSR Data Analysis

    Science.gov (United States)

    Suter, A.; Wojek, B. M.

    A free data-analysis framework for μSR has been developed. musrfit is fully written in C++, runs under GNU/Linux, Mac OS X, as well as Microsoft Windows, and is distributed under the terms of the GNU GPL. It is based on the CERN ROOT framework and utilizes the Minuit2 optimization routines for fitting. It consists of a set of programmes allowing the user to analyze and visualize the data. The fitting process is controlled by an ASCII input file with an extended syntax. A dedicated text editor helps the user to create and handle these files in an efficient way, execute the fitting, show the data, get online help, and so on. A versatile tool for the generation of new input files and the extraction of fit parameters is provided as well. musrfit facilitates a plugin mechanism allowing user-defined functions to be invoked. Hence, the functionality of the framework can be extended with a minimal amount of overhead for the user. Currently, musrfit can read the following facility raw-data files: PSI-BIN, MDU (PSI), ROOT (LEM/PSI), WKM (outdated ASCII format), MUD (TRIUMF), NeXus (ISIS).

  9. Platform Independent Source Code Transformations for Task Concurrency Management (Platformonafhankelijke broncodetransformaties voor het beheer van taakparallellisme)

    OpenAIRE

    Himpe, Stefaan

    2006-01-01

    Applications used on embedded systems today increasingly exhibit dynamic behaviour. Take a mobile phone, for example: its users expect to be able to make a call while looking something up in the built-in calendar. This leads to applications in which several tasks must be able to be active at the same time. Starting and stopping these tasks results in a varying demand for memory and computing power. Most mobile phones nowadays also offer games. Games ...

  10. A platform-independent method to reduce CT truncation artifacts using discriminative dictionary representations.

    Science.gov (United States)

    Chen, Yang; Budde, Adam; Li, Ke; Li, Yinsheng; Hsieh, Jiang; Chen, Guang-Hong

    2017-01-01

    When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of the patient, or the patient needs to be positioned partially outside the SFOV for certain clinical applications, truncation artifacts often appear in the reconstructed CT images. Many truncation artifact correction methods perform extrapolations of the truncated projection data based on certain a priori assumptions. The purpose of this work was to develop a novel CT truncation artifact reduction method that directly operates on DICOM images. The blooming of pixel values associated with truncation was modeled using exponential decay functions, and based on this model, a discriminative dictionary was constructed to represent truncation artifacts and nonartifact image information in a mutually exclusive way. The discriminative dictionary consists of a truncation artifact subdictionary and a nonartifact subdictionary. The truncation artifact subdictionary contains 1000 atoms with different decay parameters, while the nonartifact subdictionary contains 1000 independent realizations of Gaussian white noise that are exclusive with the artifact features. By sparsely representing an artifact-contaminated CT image with this discriminative dictionary, the image was separated into a truncation artifact-dominated image and a complementary image with reduced truncation artifacts. The artifact-dominated image was then subtracted from the original image with an appropriate weighting coefficient to generate the final image with reduced artifacts. This proposed method was validated via physical phantom studies and retrospective human subject studies. Quantitative image evaluation metrics including the relative root-mean-square error (rRMSE) and the universal image quality index (UQI) were used to quantify the performance of the algorithm. For both phantom and human subject studies, truncation artifacts at the peripheral region of the SFOV were effectively reduced, revealing soft tissue and bony structure once buried in the truncation artifacts. For the phantom study, the proposed method reduced the relative RMSE from 15% (original images) to 11%, and improved the UQI from 0.34 to 0.80. A discriminative dictionary representation method was developed to mitigate CT truncation artifacts directly in the DICOM image domain. Both phantom and human subject studies demonstrated that the proposed method can effectively reduce truncation artifacts without access to projection data. © 2016 American Association of Physicists in Medicine.
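
    Written schematically (with generic symbols rather than the authors' exact notation), the decomposition described above amounts to a sparse coding problem over the concatenated sub-dictionaries, followed by a weighted subtraction of the artifact component:

        \min_{\alpha_a,\ \alpha_n} \left\| x - D_a \alpha_a - D_n \alpha_n \right\|_2^2
        \quad \text{subject to a sparsity constraint on } (\alpha_a, \alpha_n),
        \qquad
        x_{\mathrm{corrected}} = x - w\, D_a \alpha_a ,

    where $D_a$ holds the exponential-decay artifact atoms, $D_n$ the noise-like non-artifact atoms, and $w$ is the weighting coefficient mentioned in the text.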

  11. musrfit: A free platform-independent framework for muSR data analysis

    CERN Document Server

    Suter, A

    2011-01-01

    A free data-analysis framework for muSR has been developed. musrfit is fully written in C++, is running under GNU/Linux, Mac OS X, as well as Microsoft Windows, and is distributed under the terms of the GNU GPL. It is based on the CERN ROOT framework and is utilizing the Minuit optimization routines for fitting. It consists of a set of programs allowing the user to analyze and visualize the data. The fitting process is controlled by an ascii-input file with an extended syntax. A dedicated text editor is helping the user to create and handle these files in an efficient way, execute the fitting, show the data, get online help, and so on. A versatile tool for the generation of new input files and the extraction of fit parameters is provided as well. musrfit facilitates a plugin mechanism allowing to invoke user-defined functions. Hence, the functionality of the framework can be extended with a minimal amount of overhead for the user. Currently, musrfit can read the following facility raw-data files: PSI-BIN, MDU...

  12. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    Science.gov (United States)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and our scientific understanding of environmental processes that have occurred over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented HPC language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times while using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications and improvement upon execution times by incorporating an intelligent network change detection tool is currently underway, and preliminary results will be presented.

  13. Operating Methods for a Computer System Providing Platform Independent Universal Client Device

    Science.gov (United States)

    1997-09-30

    errorDisplay = makeErrorTex tArea (rows, cols); if (errorDisplay = null) // it’s already been created return; addComponent(parent, errorDisplay, key...GUISCRIPT (in bytes). // // NOTE: We are planning to expand the size of the header to 32 bytes // and give it a different format. The new header

  14. 多核多线程并行编程模型研究及应用%Research and Application of Multi-Core Multi-Thread Parallel Programming Model

    Institute of Scientific and Technical Information of China (English)

    于方

    2012-01-01

    This paper first introduces current parallel computing techniques and methods for multi-core platforms and then focuses on a multi-core, multi-threaded parallel programming model based on OpenMP and Microsoft Visual Studio 2005. Using the shortest-path problem on a triangular mesh model as an application example, it verifies the correctness and efficiency of multi-core, multi-threaded parallel programming with this model on a multi-core platform, providing an easy-to-implement and widely accessible multi-core parallel programming pattern for complex computations in other application domains.

  15. Multi-Threaded Parallel Programming Methods on Multi-Core PCs%多核平台下的多线程并行编程

    Institute of Scientific and Technical Information of China (English)

    于方

    2010-01-01

    This paper studies multi-threaded parallel programming methods on multi-core processor platforms, focusing on the currently popular OpenMP and Microsoft Visual Studio 2005 multi-core multi-threading techniques as well as the TBB-based multi-core parallel programming model. Experience shows that multi-threaded parallel programming on multi-core platforms can fully exploit the advantages of multi-core architectures, improving computational efficiency and achieving high performance, and has become the trend for low-cost personal parallel computing and the future development of multi-core software technology.
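
    A minimal example of the task-based TBB model mentioned above (a generic tbb::parallel_for over a blocked range, not code from the paper):

        #include <cstddef>
        #include <vector>
        #include <tbb/blocked_range.h>
        #include <tbb/parallel_for.h>

        int main() {
            std::vector<double> a(1 << 20, 1.0), b(1 << 20, 2.0), c(1 << 20, 0.0);
            // TBB splits the range into chunks and schedules them on its worker threads.
            tbb::parallel_for(tbb::blocked_range<std::size_t>(0, c.size()),
                              [&](const tbb::blocked_range<std::size_t>& r) {
                                  for (std::size_t i = r.begin(); i != r.end(); ++i)
                                      c[i] = a[i] + b[i];
                              });
            return c.front() == 3.0 ? 0 : 1;
        }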

  16. 基于多线程的OpenGL渲染方法研究%Research on a Multi-Threaded OpenGL Rendering Method

    Institute of Scientific and Technical Information of China (English)

    李鑫

    2005-01-01

    This paper introduces the basics of OpenGL programming under Windows. For applications with demanding real-time requirements, it proposes an OpenGL rendering method that uses multi-threading, describes the implementation in detail, and concludes with an example.
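
    The core of such an approach is to move rendering into a dedicated worker thread that owns the OpenGL context, keeping the UI thread responsive. The sketch below uses portable C++ threads; makeContextCurrent(), drawScene() and swapBuffers() are placeholders for the platform calls (e.g. wglMakeCurrent and the application's own drawing code):

        #include <atomic>
        #include <thread>

        std::atomic<bool> running{true};

        void renderLoop() {
            // An OpenGL context may be current in only one thread at a time, so the
            // worker thread claims it once and keeps it for the whole loop.
            // makeContextCurrent();
            while (running.load()) {
                // drawScene();
                // swapBuffers();
            }
        }

        int main() {
            std::thread renderer(renderLoop);  // the UI/message loop stays responsive elsewhere
            // ... run the window message loop ...
            running.store(false);
            renderer.join();
        }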

  17. 基于MFC类库的抢先式多线程搜索技术%Preemptive Multi-Threaded Search Technology Based on the MFC Class Library

    Institute of Scientific and Technical Information of China (English)

    金沂

    2009-01-01

    The Win32 API supports preemptive multi-threaded networking. The SPIDER project (program) demonstrates how to use preemptive multi-threading to gather information on the web with a web spider/robot. The project produces a program that behaves like a spider, checking web sites for broken URL links. It can be used as a template for collecting and indexing information, storing that information in a database file that can later be queried.
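
    The essence of such a spider is a pool of worker threads that repeatedly take URLs from a shared queue and check them. The sketch below uses portable C++ threads rather than the Win32/MFC primitives of the original; checkUrl() is a placeholder for the actual HTTP request:

        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>
        #include <vector>

        std::queue<std::string> urls;   // URLs still to visit
        std::mutex urlsMutex;           // guards the queue

        bool checkUrl(const std::string&) { return true; }  // placeholder for the HTTP fetch

        void worker() {
            for (;;) {
                std::string next;
                {
                    std::lock_guard<std::mutex> lock(urlsMutex);
                    if (urls.empty()) return;
                    next = urls.front();
                    urls.pop();
                }
                checkUrl(next);  // report broken links, enqueue newly found URLs, index content, ...
            }
        }

        int main() {
            urls.push("http://example.com/");
            std::vector<std::thread> pool;
            for (int i = 0; i < 4; ++i) pool.emplace_back(worker);
            for (auto& t : pool) t.join();
        }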

  18. 防火墙接收模块多线程技术的实现%Implementation of Multi-Threading in a Firewall's Receiving Module

    Institute of Scientific and Technical Information of China (English)

    黄力; 谢立新

    2008-01-01

    This paper describes in detail the data structures of the receiving module of a hardware firewall based on the IXP2800 network processor, analyzes the packet-processing characteristics of the receive status registers, and presents a parallel implementation scheme using multiple microengines with multiple threads. The design is tested, and the results show that a multi-threaded implementation of the network-processor-based firewall receiving module can improve firewall performance.

  19. Investigation of Turbulence Effect on Dynamic Behaviour of Aircraft Through Use of JDYNASIM: A Platform Independent Simulation Software

    Directory of Open Access Journals (Sweden)

    D. P. Coiro

    2000-01-01

    Full Text Available The need for a fast, interactive tool to simulate aircraft behaviour is a demand of modern technology, and it is even more evident when light aircraft and sailplanes are involved. This paper presents an attempt to make simulation available to almost everyone through the JDynaSim code, written to meet this goal. JDynaSim is an interactive graphic flight-simulation code written in the JAVA and VRML languages which practically allows everyone to fly the aeroplane under investigation. This is possible because JAVA is a language born to work inside a generic Internet browser (such as Microsoft Explorer or Netscape) and is thus independent of the operating system under which it is running. The dynamic motion is described by 12 ordinary non-linear differential equations in which the non-linear forces are input in multidimensional matrix form and are interpolated at each time instant. Advancing in time is performed using a 4th-order Runge-Kutta integration scheme. The translational equations of motion are written in a flight-path axis system while the rotational equations are written in a body axis system. A purpose-written pre-processor has to be used to transform forces to the appropriate reference system. The code can interactively read mouse and keyboard inputs as well as files with command laws assigned as functions of time. There is the possibility to record the interactive session performed and then to repeat the manoeuvre. This paper also presents the code extension for simulating the effect of gusts generated according to classical theories. In particular, an investigation has been performed to compare aircraft responses to gust inputs obtained from classical theories with those obtained from the JDynaSim flight simulator. JDynaSim is available to everybody through the Internet at the following URL: http://www.dpa.unina.it/coiro/
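
    As a reminder of the integrator mentioned above, a classical 4th-order Runge-Kutta step for dx/dt = f(t, x) can be written as follows (a generic C++ sketch, not the simulator's actual code):

        #include <array>
        #include <cstddef>
        #include <functional>

        template <std::size_t N>
        using State = std::array<double, N>;

        // One explicit RK4 step of size h for the system dx/dt = f(t, x).
        template <std::size_t N>
        State<N> rk4Step(const std::function<State<N>(double, const State<N>&)>& f,
                         double t, const State<N>& x, double h) {
            auto shifted = [](const State<N>& base, double a, const State<N>& dir) {
                State<N> r = base;
                for (std::size_t i = 0; i < N; ++i) r[i] += a * dir[i];
                return r;
            };
            State<N> k1 = f(t, x);
            State<N> k2 = f(t + h / 2, shifted(x, h / 2, k1));
            State<N> k3 = f(t + h / 2, shifted(x, h / 2, k2));
            State<N> k4 = f(t + h, shifted(x, h, k3));
            State<N> out = x;
            for (std::size_t i = 0; i < N; ++i)
                out[i] += h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
            return out;
        }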

  20. Performance analysis of general purpose and digital signal processor kernels for heterogeneous systems-on-chip

    Directory of Open Access Journals (Sweden)

    T. von Sydow

    2003-01-01

    Full Text Available Various reasons like technology progress, flexibility demands, shortened product cycle time and shortened time to market have brought up the possibility and necessity to integrate different architecture blocks on one heterogeneous System-on-Chip (SoC). Architecture blocks like programmable processor cores (DSP- and GPP-kernels), embedded FPGAs as well as dedicated macros will be integral parts of such a SoC. Especially programmable architecture blocks and associated optimization techniques are discussed in this contribution. Design space exploration, and thus the choice of which architecture blocks should be integrated in a SoC, is a challenging task. Crucial to this exploration is the evaluation of the application domain characteristics and the costs caused by individual architecture blocks integrated on a SoC. An ATE-cost function has been applied to examine the performance of the aforementioned programmable architecture blocks. For this purpose, representative discrete devices have been analyzed. Furthermore, several architecture dependent optimization steps and their effects on the cost ratios are presented.

  1. TOUGH2: A general-purpose numerical simulator for multiphase nonisothermal flows

    Energy Technology Data Exchange (ETDEWEB)

    Pruess, K. [Lawrence Berkeley Lab., CA (United States)

    1991-06-01

    Numerical simulators for multiphase fluid and heat flows in permeable media have been under development at Lawrence Berkeley Laboratory for more than 10 yr. Real geofluids contain noncondensible gases and dissolved solids in addition to water, and the desire to model such "compositional" systems led to the development of a flexible multicomponent, multiphase simulation architecture known as MULKOM. The design of MULKOM was based on the recognition that the mass- and energy-balance equations for multiphase fluid and heat flows in multicomponent systems have the same mathematical form, regardless of the number and nature of fluid components and phases present. Application of MULKOM to different fluid mixtures, such as water and air, or water, oil, and gas, is possible by means of appropriate "equation-of-state" (EOS) modules, which provide all thermophysical and transport parameters of the fluid mixture and the permeable medium as a function of a suitable set of primary thermodynamic variables. Investigations of thermal and hydrologic effects from emplacement of heat-generating nuclear wastes into partially water-saturated formations prompted the development and release of a specialized version of MULKOM for nonisothermal flow of water and air, named TOUGH. TOUGH is an acronym for "transport of unsaturated groundwater and heat" and is also an allusion to the tuff formations at Yucca Mountain, Nevada. The TOUGH2 code is intended to supersede TOUGH. It offers all the capabilities of TOUGH and includes a considerably more general subset of MULKOM modules with added capabilities. The paper briefly describes the simulation methodology and user features.
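
    The common mathematical form referred to above is, written schematically, an integral balance for each component κ over a grid domain V_n with surface Γ_n:

        \frac{d}{dt}\int_{V_n} M^{\kappa}\, dV
          \;=\; \int_{\Gamma_n} \mathbf{F}^{\kappa}\cdot \mathbf{n}\, d\Gamma
          \;+\; \int_{V_n} q^{\kappa}\, dV ,

    where $M^{\kappa}$ is the accumulation term, $\mathbf{F}^{\kappa}$ the mass or heat flux and $q^{\kappa}$ a source/sink term; the equation-of-state modules supply the parameters that enter these terms.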

  2. 78 FR 77662 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-12-24

    ...-- Environmental Assessment (EA) Finding of No Significant Impact (FONSI). SUMMARY: On October 31, 2013, Defense... human environment within the context of NEPA and that no significant impacts on the human environment... environment. Specifically, no highly uncertain or controversial impacts, unique or unknown risk or...

  3. Developing the VirtualwindoW into a General Purpose Telepresence Interface

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using robots or remotely operated equipment is the ability for the operator or an observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of a work area is sensory overload or excessive complexity in the human–machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the robotics field to develop simplified telepresence interfaces. The Department of Energy’s Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to generalize a human-machine interface for telepresence applications. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the “feel” of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a generalized, reconfigurable system that easily utilizes commercially available components. The original system has now been expanded to include support for zoom lenses, camera blocks, wireless links, and even vehicle control.

  4. Developing the VirtualwindoW into a General Purpose Telepresence Interface

    Energy Technology Data Exchange (ETDEWEB)

    McKay, M D; Anderson, M O; Kinoshita, R A; Willis, W D

    1999-04-01

    An important need while using robots or remotely operated equipment is the ability for the operator or an observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of a work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the robotics field to develop simplified telepresence interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to generalize a human-machine interface for telepresence applications. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a generalized, reconfigurable system that easily utilizes commercially available components. The original system has now been expanded to include support for zoom lenses, camera blocks, wireless links, and even vehicle control.

  5. Integration of the Density Gradient Model into a General Purpose Device Simulator

    Directory of Open Access Journals (Sweden)

    Andreas Wettstein

    2002-01-01

    Full Text Available A generalized Density Gradient model has been implemented in the device simulator Dessis [DESSIS 7.0 reference manual (2001). ISE Integrated Systems Engineering AG, Balgriststrasse 102, CH-8008 Zürich]. We describe the multidimensional discretization scheme used and discuss our modifications to the standard Density Gradient model. The evaluation of the model shows good agreement with results obtained from the Schrödinger equation.
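
    For orientation, one frequently quoted form of the density-gradient quantum correction adds a term proportional to the curvature of the square root of the carrier density to the carrier equation of state (prefactor and sign conventions vary between formulations, and the generalized model of the paper modifies this standard form):

        \Lambda_n \;=\; -\,\frac{\gamma\,\hbar^{2}}{6\,m_n}\,
                        \frac{\nabla^{2}\sqrt{n}}{\sqrt{n}} ,

    with $\Lambda_n$ entering the expression for the electron density $n$ as an additional potential-like term.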

  6. A General Purpose Analysis System Based on a Programmable Fluid Processor

    Science.gov (United States)

    2007-11-02

    confirmed. To eliminate false alarms, the ability of a detection system to navigate adaptively a sequence of tests would be of advantage . Biowarfare...techniques were developed to reduce the step height, such as manually dropping BCB onto the PFP5 device area, double layer coating, and spin-then- drop ...principle consists of using the roughened features to minimize the contact area between the surface and a sessile droplet9. Development of super

  7. Aviation Security Force Assistance: Joint General Purpose Forces as Air Advisors

    Science.gov (United States)

    2013-03-01

    David E. Thaler et al., Building Partner Health Capacity with U.S. Military Forces: Enhancing AFSOC Health Engagement Missions (Santa Monica, CA: RAND...language, and diplomacy.” 36 Similarly, the advisor must be careful to avoid demonstrating frustration with host nation personnel and must be...Africa. 12 USAFRICOM conducted the first APF event in Accra, Ghana in March, 2012, with participation of service members from Ghana , Togo, Benin

  8. Maximizing the Potential of the Special Operations Forces and General Purpose Forces

    Science.gov (United States)

    2014-05-22

    Operation to remove Panamanian dictator Manuel Noriega . The intervention followed a series of diplomatic challenges between the United States and... Noriega . Following the escalation of hostilities, US forces were deployed with the guidance to “Create an environment safe for Americans, Ensure the...integrity of the Panama Canal, Provide a stable environment for the freely elected Endara Government, and to bring Noriega to justice.”70 The operation

  9. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/Watt/s at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
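
    The benchmark interaction referred to above is the standard Lennard-Jones pair potential,

        V(r) \;=\; 4\,\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}
                   - \left(\frac{\sigma}{r}\right)^{6}\right] ,

    whose pairwise forces are what the Cell SPU cores evaluate in the benchmark described above.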

  10. 41 CFR 60-2.10 - General purpose and contents of affirmative action programs.

    Science.gov (United States)

    2010-07-01

    ... central premise underlying affirmative action is that, absent discrimination, over time a contractor's workforce, generally, will reflect the gender, racial and ethnic profile of the labor pools from which the... progress toward achieving the workforce that would be expected in the absence of discrimination. (2)...

  11. Developing USAF General Purpose Forces for Building Partner Nation Aviation Capacity

    Science.gov (United States)

    2010-02-17

    United States Air Force, Draft USAF Air Advisor Academy Charter, 2009, 2. 24 Ibid., 3. 25 Ken Arteaga , Headquarters Air Education and Training...26 Ken Arteaga , Headquarters Air Education and Training Command, “Air Advisor Education & Training...29 Mr. Ken Arteaga , Headquarters Air Education and Training Command, e-mail to the author, 9 February 2010. 30 Air Force Culture

  12. General-Purpose Stereo Imaging Velocimetry Technique Developed for Space and Industrial Applications

    Science.gov (United States)

    McDowell, Mark

    2004-01-01

    A new three-dimensional, full-field analysis technique has been developed for industrial and space applications. Stereo Imaging Velocimetry (SIV) will provide full-field analysis for three-dimensional flow data from any optically transparent fluid that can be seeded with tracer particles. The goal of SIV is to provide a means to measure three-dimensional fluid velocities quantitatively and qualitatively at many points. SIV is applicable to any optically transparent fluid experiment. Except for the tracer particles, this measurement technique is nonintrusive. Velocity accuracies are on the order of 95 to 99 percent of full field. The system components of SIV include camera calibration, centroid determination, overlap decomposition, particle tracking, stereo matching, and three-dimensional velocity analysis. SIV has been used successfully for space shuttle experiments as well as for fluid flow applications for business and industry.

  13. Spatial frequency characterisation of a far-field superlens to facilitate general purpose imaging

    Science.gov (United States)

    Fadakar Masouleh, Farzaneh; Teal, Paul; Moore, Ciaran

    2016-04-01

    Based on sub-wavelength energy concentration and enhancement of evanescent fields, far-field super-lenses (FSLs) were proposed recently as a means to achieve super-resolution imaging and thus improve the accuracy and resolution of optical microscopy. Comprised of a thin-film plasmonic enhancement layer and a diffraction grating, the performance of FSLs depends greatly on the geometry and size of its constituent parts. In this paper, we aim to characterize the resolution capabilities of FSLs in a novel and meaningful way, while also exploring the effects of non-ideal grating geometries due to fabrication limitations on imaging performance. We use finite element modelling to explore trapezoidal, inverse-trapezoidal, circular, rounded rectangular, and rectangular grating profiles and present a transfer function that quantifies the performance of these grating profiles in terms of their transmission at different wavenumbers.

  14. Environmental Assessment for Proposed General Purpose Warehouse Construction at Defense Distribution Officer Oklahoma City, Oklahoma (DDOO)

    Science.gov (United States)

    2008-05-01

    complex (SUND) - Teller fine sandy learn (T~B) - Teller -Urban land corT’jJiex (nLO) - Tnbbey fino sandy loam (TriA) - Urban land (URB) - Vanoss srlt...Mr. John Harrington 21 E Main Suite 100 Oklahoma City OK 73104-2405 405-234-2264 Audubon Society of Central Oklahoma President Ms. Jane Cunningham 5505

  15. An LHCb general-purpose acquisition board for beam and background monitoring at the LHC

    CERN Document Server

    Alessio, F; Guzik, Z

    2011-01-01

    In this paper we will present an LHCb custom-made acquisition board which was developed for a continuous beam and background monitoring during LHC operations at CERN. The paper describes both the conceptual design and its performance, and concludes with results from the first period of beam operations at the LHC. The main purpose of the acquisition board is to process signals from a pair of beam pickups to continuously monitor the intensity of each bunch, and to monitor the phase of the arrival time of each proton bunch with respect to the LHC bunch clock. The extreme versatility of the board also allowed the LHCb experiment to build a high-speed and high-sensitivity readout system for a fast background monitor based on a pair of plastic scintillators. The board has demonstrated very good performance and proved to be conceptually valid during the first months of operations at the LHC. Connected to the beam pickups, it provides the LHCb experiment with a real-time measurement of the total intensity of each bea...

  16. A Framework for Multimedia Communication in a General-Purpose Distributed System

    Science.gov (United States)

    2016-06-14

    June 1988. 5. A. Birrell and B. Nelson, "Implementing Remote Procedure Calls", ACM Trans. Computer Systems 2, 1 (Feb. 1984), 39-59. 6. P. T...1989, 23-28. 9. D. D. Clark, M. L. Lambert and L. Zhang, "NETBLT: A High Throughput Transport Protocol", Proc. of ACM SIGCOMM 87, Stowe, Vermont, Aug...Network Adapter Board (NAB): High-Performance Network Communication for Multiprocessors", ACM SIGCOMM 88, Aug. 1988, 175-187. 13. C. L. Liu and J. W

  17. HOOMD-blue, general-purpose many-body dynamics on the GPU

    Science.gov (United States)

    Anderson, Joshua; Keys, Aaron; Phillips, Carolyn; Dac Nguyen, Trung; Glotzer, Sharon

    2010-03-01

    We present HOOMD-blue, a new, open source code for performing molecular dynamics and related many-body dynamics simulations on graphics processing units (GPUs). All calculations are fully implemented on the GPU, enabling large performance speedups over traditional CPUs. On typical benchmarks, HOOMD-blue is about 60 times faster on a current generation GPU compared to running on a single CPU core. Next generation chips are due for release in early 2010 and are expected to nearly double performance. Efficient execution is achieved without any lack of generality and thus a wide variety of capabilities are present in the code, including standard bond, pair, angle, dihedral and improper potentials, along with the common NPT, NVE, NVT, and Brownian dynamics integration routines. The code is object-oriented, well documented, and easy to modify. We are constantly adding new features and looking for new developers to contribute to this fast maturing, open-source code [1]. In this talk, we present an overview of HOOMD-blue and give examples of its current and planned capabilities and speed over traditional CPU-based codes. [1] Find HOOMD-blue online at: http://codeblue.umich.edu/hoomd-blue/

  18. Thermal stress response of General Purpose Heat Source (GPHS) aeroshell material

    Science.gov (United States)

    Grinberg, I. M.; Hulbert, L. E.; Luce, R. G.

    1980-01-01

    A thermal stress test was conducted to determine the ability of the GPHS aeroshell 3 D FWPF material to maintain physical integrity when exposed to a severe heat flux such as would occur from prompt reentry of GPHS modules. The test was performed in the Giant Planetary Facility at NASA's Ames Research Center. Good agreement was obtained between the theoretical and experimental results for both temperature and strain time histories. No physical damage was observed in the test specimen. These results provide initial corroboration both of the analysis techniques and that the GPHS reentry member will survive the reentry thermal stress levels expected.

  19. Cross-Cultural Competency in the General Purpose Force: Training Strategies and Implications for Future Operations

    Science.gov (United States)

    2013-04-09

    degree of talent that surpasses C3. Using these concepts as a framework , the analysis herein will make suggestions designed to improve cross-cultural...talent recognition and recruiting practices and introduce a potential training paradigm to fit the traditional GPF and SOF/IW framework of the...leadership their exclusive area of expertise (e.g. Richard Lewis Communications4, The Hofstede Center5, or Caligiuri & Associates, Incorporated6 The last

  20. Active Vibration Control of a Smart Cantilever Beam on General Purpose Operating System

    Directory of Open Access Journals (Sweden)

    A. P. Parameswaran

    2013-07-01

    Full Text Available All mechanical systems suffer from undesirable vibrations during their operation. Their occurrence is uncontrollable as it depends on various factors. However, for efficient operation of the system, these vibrations have to be controlled within specified limits. Lightweight, rapid and multi-mode control of the vibrating structure is possible through the use of piezoelectric sensors and actuators and feedback control algorithms. In this paper, direct output feedback based active vibration control has been implemented on a cantilever beam using Lead Zirconate Titanate (PZT) sensors and actuators. Three PZT patches were used: one as the sensor, one as the exciter providing the forced vibrations, and the third acting as the actuator that provides an equal but opposite-phase vibration/force signal to that sensed, so as to damp out the vibrations. The designed algorithm is implemented in LabVIEW 2010 on the Windows 7 platform. Defence Science Journal, 2013, 63(4), pp. 413-417, DOI: http://dx.doi.org/10.14429/dsj.63.4865

  1. A NEW APPROACH OF CONCEPTUAL FRAMEWORK FOR GENERAL PURPOSE FINANCIAL REPORTING BY PUBLIC SECTOR ENTITIES

    Directory of Open Access Journals (Sweden)

    Nistor Cristina

    2011-12-01

    Full Text Available The importance of accounting in the modern economy is obvious, which is why bodies of the European Union and elsewhere deal with the organization and functioning of accounting as a fundamental component of business (Nistor C., 2009). The mission of the International Federation of Accountants (IFAC) is to serve the public interest, to strengthen the worldwide accountancy profession and to contribute to the development of strong international economies by initiating and encouraging professional standards of high quality, furthering the international convergence of such standards, and speaking out on issues of public interest where international experience is most relevant (IFAC, 2011). Currently, the concepts related to financial reports in the public sector are developed in the IPSAS references. Many of today's IPSAS are based on international accounting standards (IAS/IFRS), to the extent that they are relevant to the requirements of the public sector. Therefore today's IPSAS are based on the concepts and definitions of the IASB's conceptual framework, with changes where necessary for the public sector's specific approach. This study presents the draft statement under discussion by the leadership of IFAC in collaboration with other organizations and groups that develop financial reporting requirements for the public sector, and then highlights the importance and the degree of acceptance of the project as reflected in the comments received. By combining qualitative and quantitative research, the study seeks to demonstrate the necessity and usefulness of a common conceptual framework for international accounting standards (in this case for the public sector), starting from their emergence, the bodies involved in their foundation, the content of the standards, and the experience of different countries. The results have direct implications for the Romanian public accounting system, given that convergence with international implementation and reporting references is a current goal. The study is primarily addressed to graduate and doctoral students, professors and researchers working in public sector accounting, and to all those interested in the current development of International Public Sector Accounting; it presents the acceptance of the theme as a subject for discussion by the IPSASB.

  2. Photonic Crystal Surfaces as a General Purpose Platform for Label-Free and Fluorescent Assays

    OpenAIRE

    Cunningham, Brian T.

    2010-01-01

    Photonic crystal surfaces can be designed to provide a wide range of functions that are used to perform biochemical and cell-based assays. Detection of the optical resonant reflections from photonic crystal surfaces enables high sensitivity label-free biosensing, while the enhanced electromagnetic fields that occur at resonant wavelengths can be used to enhance the detection sensitivity of any surface-based fluorescence assay. Fabrication of photonic crystals from inexpensive plastic material...

  3. Photonic Crystal Surfaces as a General Purpose Platform for Label-Free and Fluorescent Assays.

    Science.gov (United States)

    Cunningham, Brian T

    2010-04-01

    Photonic crystal surfaces can be designed to provide a wide range of functions that are used to perform biochemical and cell-based assays. Detection of the optical resonant reflections from photonic crystal surfaces enables high sensitivity label-free biosensing, while the enhanced electromagnetic fields that occur at resonant wavelengths can be used to enhance the detection sensitivity of any surface-based fluorescence assay. Fabrication of photonic crystals from inexpensive plastic materials over large surface areas enables them to be incorporated into standard formats that include microplates, microarrays, and microfluidic channels. This report reviews the design of photonic crystal biosensors, their associated detection instrumentation, and biological applications. Applications including small molecule high throughput screening, cell membrane integrin activation, gene expression analysis, and protein biomarker detection are highlighted. Recent results in which photonic crystal surfaces are used for enhancing the detection of Surface-Enhanced Raman Spectroscopy, and the development of high resolution photonic crystal-based laser biosensors are also described.

  4. Preparing General Purpose Forces in the United States and British Armies for Counterinsurgent Operations

    Science.gov (United States)

    2010-12-10

    commander whose book Learning to Eat Soup with a Knife, was widely read by military leaders serving in Afghanistan and Iraq. Dr. Nagl also...sustainment training for units while deployed to Malaya.23 Arthur Campbell , a company commander in the Suffolk Regiment who served in Malaya in the early 1950s...Learning to Eat Soup with a Knife: Counterinsurgency Lessons from Malaya and Vietnam, comparing the Malayan Emergency and Vietnam War is akin to

  5. Construction of a General Purpose Command Language for Use in Computer Dialog.

    Science.gov (United States)

    1980-09-01


  6. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    Science.gov (United States)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the GPU solution with respect to the solution on a central processing unit (CPU) is assessed. Performance measurements show that the numerical schemes developed achieve a 20-50 times speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for CFD applications.
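
    Written schematically for an explicit scheme, the finite volume update applied to each cell i of the unstructured mesh takes the usual form

        \mathbf{U}_i^{\,n+1} \;=\; \mathbf{U}_i^{\,n}
          \;-\; \frac{\Delta t}{V_i}\sum_{f \in \partial \Omega_i}
                \mathbf{F}_f\!\left(\mathbf{U}^n\right) A_f ,

    where $\mathbf{U}_i$ are the cell-averaged conserved variables, $V_i$ the cell volume, and $\mathbf{F}_f$ the numerical flux through face $f$ with area $A_f$; on the GPU, one thread (or a small group of threads) is typically mapped to each cell or face.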

  7. Applications for General Purpose Command Buffers: The Emergency Conjunction Avoidance Maneuver

    Science.gov (United States)

    Scheid, Robert J; England, Martin

    2016-01-01

    A case study is presented for the use of Relative Operation Sequence (ROS) command buffers to quickly execute a propulsive maneuver to avoid a collision with space debris. In this process, a ROS is custom-built with a burn time and magnitude, uplinked to the spacecraft, and executed in 15 percent of the time of the previous method. This new process provides three primary benefits. First, the planning cycle can be delayed until it is certain a burn must be performed, reducing team workload. Second, changes can be made to the burn parameters almost up to the point of execution while still allowing the normal uplink product review process, reducing the risk of leaving the operational orbit because of outdated burn parameters, and minimizing the chance of accidents from human error, such as missed commands, in a high-stress situation. Third, the science impacts can be customized and minimized around the burn, and in the event of an abort can be eliminated entirely in some circumstances. The result is a compact burn process that can be executed in as few as four hours and can be aborted seconds before execution. Operational, engineering, planning, and flight dynamics perspectives are presented, as well as a functional overview of the code and workflow required to implement the process. Future expansions and capabilities are also discussed.

  8. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapid searching of unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WEBDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.

  9. Is there any difference on the purpose of enterprise education and the general purpose of education?

    DEFF Research Database (Denmark)

    Blenker, Per

    The paper claims that two different types of inhabitants populate research on the entrepreneurship-education interface. Some come from entrepreneurship research, others from education research. Depending on their disciplinary background they pose two different research questions...... other and contribute to each other's research, a classical didactical method of thesis, anti-thesis and synthesis is used. Based on this analysis, the two central questions of the enterprise-education nexus are reformulated into one new question: “How can we provide self-directed learning that enables...

  10. A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems.

    Science.gov (United States)

    Brüderle, Daniel; Petrovici, Mihai A; Vogginger, Bernhard; Ehrlich, Matthias; Pfeil, Thomas; Millner, Sebastian; Grübl, Andreas; Wendt, Karsten; Müller, Eric; Schwartz, Marc-Olivier; de Oliveira, Dan Husmann; Jeltsch, Sebastian; Fieres, Johannes; Schilling, Moritz; Müller, Paul; Breitwieser, Oliver; Petkov, Venelin; Muller, Lyle; Davison, Andrew P; Krishnamurthy, Pradeep; Kremkow, Jens; Lundqvist, Mikael; Muller, Eilif; Partzsch, Johannes; Scholze, Stefan; Zühl, Lukas; Mayr, Christian; Destexhe, Alain; Diesmann, Markus; Potjans, Tobias C; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz

    2011-05-01

    In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.

  11. Random ferns method implementation for the general-purpose machine learning

    CERN Document Server

    Kursa, Miron B

    2012-01-01

    In this paper I present an extended implementation of the Random ferns algorithm contained in the R package rFerns. It differs from the original in its ability to consume categorical and numerical attributes instead of only binary ones. Also, instead of using a simple attribute-subspace ensemble it employs bagging and thus produces an error approximation and a variable importance measure modelled after the Random forest algorithm. I also present benchmark results which show that, although the accuracy of Random ferns is mostly lower than that achieved by Random forest, its speed and the good quality of the importance measure it provides make rFerns a reasonable choice for specific applications.

  12. BLASTbus electronics: general-purpose readout and control for balloon-borne experiments

    Science.gov (United States)

    Benton, S. J.; Ade, P. A.; Amiri, M.; Angilè, F. E.; Bock, J. J.; Bond, J. R.; Bryan, S. A.; Chiang, H. C.; Contaldi, C. R.; Crill, B. P.; Devlin, M. J.; Dober, B.; Doré, O. P.; Farhang, M.; Filippini, J. P.; Fissel, L. M.; Fraisse, A. A.; Fukui, Y.; Galitzki, N.; Gambrel, A. E.; Gandilo, N. N.; Golwala, S. R.; Gudmundsson, J. E.; Halpern, M.; Hasselfield, M.; Hilton, G. C.; Holmes, W. A.; Hristov, V. V.; Irwin, K. D.; Jones, W. C.; Kermish, Z. D.; Klein, J.; Korotkov, A. L.; Kuo, C. L.; MacTavish, C. J.; Mason, P. V.; Matthews, T. G.; Megerian, K. G.; Moncelsi, L.; Morford, T. A.; Mroczkowski, T. K.; Nagy, J. M.; Netterfield, C. B.; Novak, G.; Nutter, D.; O'Brient, R.; Ogburn, R. W.; Pascale, E.; Poidevin, F.; Rahlin, A. S.; Reintsema, C. D.; Ruhl, J. E.; Runyan, M. C.; Savini, G.; Scott, D.; Shariff, J. A.; Soler, J. D.; Thomas, N. E.; Trangsrud, A.; Truch, M. D.; Tucker, C. E.; Tucker, G. S.; Tucker, R. S.; Turner, A. D.; Ward-Thompson, D.; Weber, A. C.; Wiebe, D. V.; Young, E. Y.

    2014-07-01

    We present the second generation BLASTbus electronics. The primary purposes of this system are detector readout, attitude control, and cryogenic housekeeping, for balloon-borne telescopes. Readout of neutron transmutation doped germanium (NTD-Ge) bolometers requires low noise and parallel acquisition of hundreds of analog signals. Controlling a telescope's attitude requires the capability to interface to a wide variety of sensors and motors, and to use them together in a fast, closed loop. To achieve these different goals, the BLASTbus system employs a flexible motherboard-daughterboard architecture. The programmable motherboard features a digital signal processor (DSP) and field-programmable gate array (FPGA), as well as slots for three daughterboards. The daughterboards provide the interface to the outside world, with versions for analog to digital conversion, and optoisolated digital input/output. With the versatility afforded by this design, the BLASTbus also finds uses in cryogenic, thermometry, and power systems. For accurate timing control to tie everything together, the system operates in a fully synchronous manner. BLASTbus electronics have been successfully deployed to the South Pole, and flown on stratospheric balloons.

  13. BLASTbus electronics: general-purpose readout and control for balloon-borne experiments

    CERN Document Server

    Benton, S J; Amiri, M; Angilè, F E; Bock, J J; Bond, J R; Bryan, S A; Chiang, H C; Contaldi, C R; Crill, B P; Devlin, M J; Dober, B; Doré, O P; Dowell, C D; Farhang, M; Filippini, J P; Fissel, L M; Fraisse, A A; Fukui, Y; Galitzki, N; Gambrel, A E; Gandilo, N N; Golwala, S R; Gudmundsson, J E; Halpern, M; Hasselfield, M; Hilton, G C; Holmes, W A; Hristov, V V; Irwin, K D; Jones, W C; Kermish, Z D; Klein, J; Korotkov, A L; Kuo, C L; MacTavish, C J; Mason, P V; Matthews, T G; Megerian, K G; Moncelsi, L; Morford, T A; Mroczkowski, T K; Nagy, J M; Netterfield, C B; Novak, G; Nutter, D; O'Brient, R; Ogburn, R W; Pascale, E; Poidevin, F; Rahlin, A S; Reintsema, C D; Ruhl, J E; Runyan, M C; Savini, G; Scott, D; Shariff, J A; Soler, J D; Thomas, N E; Trangsrud, A; Truch, M D; Tucker, C E; Tucker, G S; Tucker, R S; Turner, A D; Ward-Thompson, D; Weber, A C; Wiebe, D V; Young, E Y

    2014-01-01

    We present the second generation BLASTbus electronics. The primary purposes of this system are detector readout, attitude control, and cryogenic housekeeping, for balloon-borne telescopes. Readout of neutron transmutation doped germanium (NTD-Ge) bolometers requires low noise and parallel acquisition of hundreds of analog signals. Controlling a telescope's attitude requires the capability to interface to a wide variety of sensors and motors, and to use them together in a fast, closed loop. To achieve these different goals, the BLASTbus system employs a flexible motherboard-daughterboard architecture. The programmable motherboard features a digital signal processor (DSP) and field-programmable gate array (FPGA), as well as slots for three daughterboards. The daughterboards provide the interface to the outside world, with versions for analog to digital conversion, and optoisolated digital input/output. With the versatility afforded by this design, the BLASTbus also finds uses in cryogenic, thermometry, and powe...

  14. General Purpose Real-time Data Analysis and Visualization Software for Volcano Observatories

    Science.gov (United States)

    Cervelli, P. F.; Miklius, A.; Antolik, L.; Parker, T.; Cervelli, D.

    2011-12-01

    In 2002, the USGS developed the Valve software for management, visualization, and analysis of volcano monitoring data. In 2004, the USGS developed similar software, called Swarm, for the same purpose but specifically tailored for seismic waveform data. Since then, both of these programs have become ubiquitous at US volcano observatories, and in the case of Swarm, common at volcano observatories across the globe. Though innovative from the perspective of software design, neither program is methodologically novel. Indeed, the software can perform little more than elementary 2D graphing, along with basic geophysical analysis. So, why is the software successful? The answer is that both of these programs take data from the realm of discipline specialists and make them universally available to all observatory scientists. In short, the software creates additional value from existing data by leveraging the observatory's entire intellectual capacity. It enables rapid access to different data streams, and allows anyone to compare these data on a common time scale or map base. It frees discipline specialists from routine tasks like preparing graphics or compiling data tables, thereby making more time for interpretive efforts. It helps observatory scientists browse through data, and streamlines routine checks for unusual activity. It encourages a multi-parametric approach to volcano monitoring. And, by means of its own usefulness, it creates incentive to organize and capture data streams not yet available. Valve and Swarm are both written in Java, open-source, and freely available. Swarm is a stand-alone Java application. Valve is a system consisting of three parts: a web-based user interface, a graphing and analysis engine, and a data server. Both can be used non-interactively (e.g., via scripts) to generate graphs or to dump raw data. Swarm has a simple, built-in alarm capability. Several alarm algorithms have been built around Valve. Both programs remain under active development by the USGS and external collaborators. In this presentation, we will explain and diagram how the Valve and Swarm software work, show several real-life use cases, and address operational questions about how the software functions in an observatory environment.

  15. Accuracy of Surface Plate Measurements - General Purpose Software for Flatness Measurement

    NARCIS (Netherlands)

    Meijer, J.; Heuvelman, C.J.

    1990-01-01

    Flatness departures of surface plates are generally obtained from straightness measurements of lines on the surface. A computer program has been developed for on-line measurement and evaluation, based on the simultaneous coupling of measurements in all grid points. Statistical methods are used to de

  16. GAFit: A general-purpose, user-friendly program for fitting potential energy surfaces

    Science.gov (United States)

    Rodríguez-Fernández, Roberto; Pereira, Francisco B.; Marques, Jorge M. C.; Martínez-Núñez, Emilio; Vázquez, Saulo A.

    2017-08-01

    We have developed a software package based on a genetic algorithm that fits an analytic function to a given set of data points. The code, called GAFit, was also interfaced with the CHARMM and MOPAC programs in order to facilitate force field parameterizations and fittings of specific reaction parameters (SRP) for semiempirical Hamiltonians. The present tool may be applied to a wide range of fitting problems, though it has been especially designed to significantly reduce the hard work involved in the development of potential energy surfaces for complex systems. For this purpose, it has been equipped with several programs to help the user in the preparation of the input files. We showcase the application of the computational tool to several chemical-relevant problems: force-field parameterization, with emphasis on nonbonded energy terms or intermolecular potentials, derivation of SRP for semiempirical Hamiltonians, and fittings of generic analytical functions.
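
    As a rough illustration of the genetic-algorithm fitting idea (not GAFit's actual input format or interfaces), the following Python sketch fits the parameters of an analytic function to a set of data points by minimising the sum of squared residuals. The target function, bounds and GA settings are invented for the example.

        import numpy as np

        def ga_fit(f, xdata, ydata, bounds, pop=60, gens=200, rng=np.random.default_rng(1)):
            """Fit parameters of an analytic function to data points with a simple GA."""
            lo, hi = np.array(bounds).T
            P = rng.uniform(lo, hi, size=(pop, len(bounds)))        # initial population
            cost = lambda p: np.sum((f(xdata, *p) - ydata) ** 2)    # sum of squared residuals
            for _ in range(gens):
                order = np.argsort([cost(p) for p in P])
                parents = P[order[: pop // 2]]                      # truncation selection
                kids = []
                for _ in range(pop - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    child = np.where(rng.random(len(a)) < 0.5, a, b)  # uniform crossover
                    child += rng.normal(0.0, 0.05 * (hi - lo))        # Gaussian mutation
                    kids.append(np.clip(child, lo, hi))
                P = np.vstack([parents, kids])
            return P[np.argmin([cost(p) for p in P])]

        # usage: recover (a, b) of a*exp(-b*x) from noisy samples
        x = np.linspace(0, 5, 40)
        y = 2.0 * np.exp(-1.3 * x) + np.random.default_rng(2).normal(0, 0.01, x.size)
        print(ga_fit(lambda x, a, b: a * np.exp(-b * x), x, y, bounds=[(0, 5), (0, 5)]))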

  17. The Raw Microprocessor: A Computational Fabric for Software Circuits and General-Purpose Programs

    Science.gov (United States)

    2002-04-01

    …found that wire delay inside the tile was large enough that placement could not be ignored as an issue. We created a library of routines (7,000…

  18. General-Purpose Computation on Graphics Processors%GPU的通用计算应用研究

    Institute of Scientific and Technical Information of China (English)

    张浩; 李利军; 林岚

    2005-01-01

    With the rapid development of graphics processing units (GPUs) in recent years, researchers at home and abroad have made general-purpose computation on GPUs a new research field. Based on a study of the latest international literature, this paper analyses the characteristics of the GPU itself, explains the structure of GPU-based application programs, examines how GPU programming methods differ from those for an ordinary CPU, and uses Gaussian filtering as an example to describe the GPU programming method and process in detail.

  19. Research on General Purpose Computation on Graphic Process Unit%GPU通用计算研究

    Institute of Scientific and Technical Information of China (English)

    丁鹏; 陈利学; 龚捷; 张岩

    2010-01-01

    With the rapid development of graphics hardware, general-purpose computation on GPUs has become a new research field. This paper analyses the GPU programming model, introduces methods for performing general-purpose computation with graphics hardware, and maps several commonly used algorithms onto the GPU. By comparing these algorithms with their CPU counterparts, the advantages and disadvantages of using the GPU for general-purpose computation are analysed.

  20. Automatic proximate analysis of coal using a general-purpose robot

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, K.

    1986-01-01

    In order to ensure supplies of blast furnace coke at Nippon Kokan's Fukuyama Works, the results of analysis of coal (as-received and blended) and coke are used in the selection of appropriate operating conditions for the coke ovens and blast furnaces. The author discusses the following topics: 1) selection of items for automatic analysis; 2) analytic methods used (weight loss, combustion, volumetric analysis and titration); the nature of the automated system adopted (arrangement of apparatus, functions of the robot, system configuration and software used); and degree of precision obtained in analysis. Typical analytic results are given. 4 refs., 16 figs., 5 tabs.

  1. Spectro-photometric distances to stars: a general-purpose Bayesian approach

    CERN Document Server

    Santiago, Basílio X; Anders, Friedrich; Chiappini, Cristina; Girardi, Léo; Rocha-Pinto, Helio J; Balbinot, Eduardo; da Costa, Luiz N; Maia, Marcio A G; Schultheis, Mathias; Steinmetz, Matthias; Miglio, Andrea; Montalbán, Josefina; Schneider, Donald P; Beers, Timothy C; Frinchaboy, Peter M; Lee, Young Sun; Zasowski, Gail

    2016-01-01

    We have developed a procedure that estimates distances to stars using measured spectroscopic and photometric quantities. It employs a Bayesian approach to build the probability distribution function over stellar evolutionary models given the data, delivering estimates of expected distance for each star individually. Our method provides several alternative distance estimates for each star in the output, along with their associated uncertainties. The code was first tested on simulations, successfully recovering input distances to mock stars with errors that scale with the uncertainties in the adopted spectro-photometric parameters, as expected. The code was then validated by comparing our distance estimates to parallax measurements from the Hipparcos mission for nearby stars (< 60 pc), to asteroseismic distances of CoRoT red giant stars, and to known distances of well-studied open and globular clusters. The photometric data of these reference samples cover both the optical and near infra-red wavelengths. The...
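
    A compact sketch of the underlying Bayesian idea (not the authors' code): weight every model in a grid of stellar evolutionary models by its likelihood given the observed spectroscopic parameters, turn each model's absolute magnitude plus the observed apparent magnitude into a distance through the distance modulus, and report the posterior-weighted mean and spread. The grid values, uncertainties and prior weights below are invented for illustration.

        import numpy as np

        # Illustrative model grid: each row is (Teff [K], log g, [Fe/H], absolute mag, prior weight).
        grid = np.array([
            [5750, 4.40,  0.00, 4.8, 1.0],
            [5600, 4.50, -0.10, 5.2, 1.0],
            [4800, 2.50, -0.30, 0.8, 0.3],
        ])

        def distance_estimate(obs, err, m_app, grid):
            """Posterior-weighted distance (pc) from spectro-photometric data (sketch)."""
            teff, logg, feh, M_abs, w = grid.T
            chi2 = ((obs[0] - teff) / err[0]) ** 2 + ((obs[1] - logg) / err[1]) ** 2 \
                 + ((obs[2] - feh) / err[2]) ** 2
            post = w * np.exp(-0.5 * chi2)            # likelihood x prior for each model
            post /= post.sum()
            d_pc = 10 ** (0.2 * (m_app - M_abs) + 1)  # distance modulus: m - M = 5 log10(d) - 5
            mean = np.sum(post * d_pc)
            return mean, np.sqrt(np.sum(post * (d_pc - mean) ** 2))

        print(distance_estimate(obs=(5700, 4.45, -0.05), err=(100, 0.1, 0.1), m_app=9.0, grid=grid))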

  2. A perturbation approach for geometrically nonlinear structural analysis using a general purpose finite element code

    NARCIS (Netherlands)

    Rahman, T.

    2009-01-01

    In this thesis, a finite element based perturbation approach is presented for geometrically nonlinear analysis of thin-walled structures. Geometrically nonlinear static and dynamic analyses are essential for this class of structures. Nowadays nonlinear analysis of thin-walled shell structures is oft

  3. Limits to high-speed simulations of spiking neural networks using general-purpose computers

    Directory of Open Access Journals (Sweden)

    Friedemann eZenke

    2014-09-01

    Full Text Available To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed towards synaptic plasticity. In particular spike-timing-dependent plasticity (STDP creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  4. Quantitative analysis of jugular venous pulse obtained by using a general-purpose ultrasound scanner

    CERN Document Server

    Sisini, Francesco

    2016-01-01

    This is a self-published methodological note distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The note contains original reasoning of mine, and its goal is to share thoughts and methodologies, not results. Therefore, before using the contents of these notes, everyone is invited to verify the accuracy of the assumptions and conclusions.

  5. Cook-Off Studies on the General Purpose Cast Explosives PBXC-116 and PBXC-117

    Science.gov (United States)

    1976-05-01

    …thermogravimetric analyses and differential scanning calorimetry determinations were included in that preliminary report. These explosives demonstrated good… of the thermocouples show flame temperatures. The firing officer reported that flame covered the pit at 40 seconds… thermocouples 16 through 19…

  6. MADYMO : a general purpose mathematical dynamical model for crash victim simulation.

    NARCIS (Netherlands)

    Bacchetti, A.C. & Maltha, J.

    1978-01-01

    This report gives a complete overview of the work of TNO-IW on the program package MADYMO for crash injury prevention research since the start in 1973. The aim of this project is the development of a highly versatile program package for 2- and 3-dimensional simulations of traffic accidents…

  7. General-Purpose Front End for Real-Time Data Processing

    Science.gov (United States)

    James, Mark

    2007-01-01

    FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, front end signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine. Each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.

  8. Evaluating Security Requirements in a General-Purpose Processor by Combining Assertion Checkers with Code Coverage

    Science.gov (United States)

    2012-06-01

    …DFA minimization (partial), full Boolean-layer optimization, and VHDL and Verilog output. The Property Specification Language (PSL) is used in functional verification of processor designs in hardware design language (HDL) format, such as VHDL or Verilog. For a detailed look at PSL, the… Boolean simplifications, and support for both VHDL and Verilog output. Because the method uses automata as an intermediate representation, we also…

  9. General Purpose Convolution Algorithm in S4 Classes by Means of FFT

    Directory of Open Access Journals (Sweden)

    Peter Ruckdeschel

    2014-08-01

    By means of object orientation, this default FFT-based convolution algorithm is overloaded by more specific algorithms where possible, in particular where explicit convolution formulae are available. Our focus is on the R package distr, which implements this approach, overloading operator + for convolution; based on this convolution, we define a whole arithmetic of mathematical operations acting on distribution objects, comprising the operators +, -, *, /, and ^.
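
    The default algorithm referred to above is, in essence, a discrete convolution of two lattice-discretised distributions evaluated via the FFT. A minimal Python sketch of that idea (this is not the distr S4 interface; all names are illustrative):

        import numpy as np

        def convolve_pmf(x1, p1, x2, p2, step):
            """Distribution of X + Y for two lattice distributions, via FFT (illustrative)."""
            n = len(p1) + len(p2) - 1
            nfft = 1 << (n - 1).bit_length()                # pad to a power of two
            q = np.fft.irfft(np.fft.rfft(p1, nfft) * np.fft.rfft(p2, nfft), nfft)[:n]
            x = x1[0] + x2[0] + step * np.arange(n)         # support of the sum
            return x, np.clip(q, 0, None)                   # clip tiny negative FFT round-off

        # usage: distribution of the sum of two fair dice
        x = np.arange(1, 7, dtype=float)
        p = np.full(6, 1 / 6)
        xs, ps = convolve_pmf(x, p, x, p, step=1.0)
        print(dict(zip(xs.astype(int), ps.round(4))))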

  10. Factors Affecting Preservice Teachers' Computer Use for General Purposes: Implications for Computer Training Courses

    Science.gov (United States)

    Zogheib, Salah

    2014-01-01

    As the majority of educational research has focused on preservice teachers' computer use for "educational purposes," the question remains: Do preservice teachers use computer technology for daily life activities and encounters? And do preservice teachers' personality traits and motivational beliefs related to computer training provided…

  11. The Graphics Terminal Display System; a Powerful General-Purpose CAI Package.

    Science.gov (United States)

    Hornbeck, Frederick W., Brock, Lynn

    The Graphic Terminal Display System (GTDS) was created to support research and development in computer-assisted instruction (CAI). The system uses an IBM 360/50 computer and interfaces with a large-screen graphics display terminal, a random-access slide projector, and a speech synthesizer. An authoring language, GRAIL, was developed for CAI, and…

  12. STARS: A general-purpose finite element computer program for analysis of engineering structures

    Science.gov (United States)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  13. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Science.gov (United States)

    2010-11-17

    …an impediment to robust feature-based systems. The alternative LIDAR approach, scan matching, directly matches point clouds. This approach dispenses…

  14. FLUENT/BFC - A general purpose fluid flow modeling program for all flow speeds

    Science.gov (United States)

    Dvinsky, Arkady S.

    FLUENT/BFC is a fluid flow modeling program for a variety of applications. Current capabilities of the program include laminar and turbulent flows, subsonic and supersonic viscous flows, incompressible flows, time-dependent and stationary flows, isothermal flows and flows with heat transfer, Newtonian and power-law fluids. The modeling equations in the program have been written in coordinate system invariant form to accommodate the use of boundary-conforming, generally nonorthogonal coordinate systems. The boundary-conforming coordinate system can be generated using both an internal grid generator, which is an integral part of the code, and external application-specific grid generators. The internal grid generator is based on a solution of a system of elliptic partial differential equations and can produce grids for a wide variety of two- and three-dimensional geometries.

  15. Framework for a Robust General Purpose Navier-Stokes Solver on Unstructured Meshes

    Science.gov (United States)

    Xiao, Cheng-Nian; Denner, Fabian; van Wachem, Berend G. M.

    2016-11-01

    A numerical framework for a pressure-based all-speeds flow solver operating on unstructured meshes, which is robust for a broad range of flow configurations, is proposed. The distinct features of our framework are the full coupling of the momentum and continuity equations as well as the use of an energy equation in conservation form to relate the thermal quantities with the flow field. In order to overcome the well-documented instability occurring while coupling the thermal energy to the remaining flow variables, a multistage iteration cycle has been devised which exhibits excellent convergence behavior without requiring any numerical relaxation parameters. Different spatial schemes for accurate shock resolution as well as complex thermodynamic gas models are also seamlessly incorporated into the framework. The solver is directly applicable to stationary and transient flows in all Mach number regimes (sub-, trans-, supersonic), exhibits strong robustness and accurately predicts flow and thermal variables at all speeds across shocks of different strengths. We present a wide range of results for both steady and transient compressible flows with vastly different Mach numbers and thermodynamic conditions in complex geometries represented by different types of unstructured meshes. The authors are grateful for the financial support provided by Shell.

  16. General-purpose molecular dynamics simulations on GPU-based clusters

    OpenAIRE

    Trott, Christian R.; Winterfeld, Lars; Crozier, Paul S.

    2010-01-01

    We present a GPU implementation of LAMMPS, a widely-used parallel molecular dynamics (MD) software package, and show 5x to 13x single node speedups versus the CPU-only version of LAMMPS. This new CUDA package for LAMMPS also enables multi-GPU simulation on hybrid heterogeneous clusters, using MPI for inter-node communication, CUDA kernels on the GPU for all methods working with particle data, and standard LAMMPS C++ code for CPU execution. Cell and neighbor list approaches are compared for be...

  17. A general-purpose contact detection algorithm for nonlinear structural analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.

    1993-05-01

    A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
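
    As an illustration of the "global search" location step described above (a common cell/bucket approach, not necessarily the exact strategy of this report), the following Python sketch hashes master-surface nodes into cells whose width equals the search radius, so each slave node is tested only against the neighbouring cells:

        import numpy as np
        from collections import defaultdict

        def proximity_pairs(master_xyz, slave_xyz, radius):
            """Return (slave, master) index pairs closer than `radius` (bucket-based search)."""
            cells = defaultdict(list)
            key = lambda p: tuple(np.floor(p / radius).astype(int))
            for i, p in enumerate(master_xyz):
                cells[key(p)].append(i)                     # hash master nodes into cells
            pairs = []
            for j, q in enumerate(slave_xyz):
                cx, cy, cz = key(q)
                for dx in (-1, 0, 1):                       # scan the 27 neighbouring cells
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            for i in cells.get((cx + dx, cy + dy, cz + dz), []):
                                if np.linalg.norm(master_xyz[i] - q) <= radius:
                                    pairs.append((j, i))
            return pairs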

  18. Plasduino: an inexpensive, general purpose data acquisition framework for educational experiments

    CERN Document Server

    Baldini, L; Andreoni, E; Angelini, F; Bianchi, A; Bregeon, J; Fidecaro, F; Massai, M M; Merlin, V; Nespolo, J; Orselli, S; Pesce-Rollins, M

    2013-01-01

    Based on the Arduino development platform, Plasduino is an open-source data acquisition framework specifically designed for educational physics experiments. The source code, schematics and documentation are in the public domain under a GPL license and the system, streamlined for low cost and ease of use, can be replicated on the scale of a typical didactic lab with minimal effort. We describe the basic architecture of the system and illustrate its potential with some real-life examples.

  19. A GENERAL PURPOSE SUITE FOR JOB MANAGEMENT, BOOKKEEPING, AND GRID SUBMISSION

    Directory of Open Access Journals (Sweden)

    Armando Fella

    2011-07-01

    This paper briefly presents the prototype of a software framework permitting different multi-disciplinary user communities to take advantage of the power of Grid computing. The idea behind the project is to offer a software infrastructure that allows easy, quick and customizable access to the Grid for research groups or organizations that need to simulate large amounts of data.

  20. General Purpose Electronic Test Equipment (GPETE) Acquisition Considerations for Automated Calibration.

    Science.gov (United States)

    1983-06-01

    …block (BB) test instruments requiring off-line calibration. For example, of the 23 AI/USH-470 building blocks, only one (the calibration module itself… management facility, and phone conversations with NAVSUP, NAVCOMPT and the Fleet Material Support Office (FMSO) failed to locate a viable figure. With…

  1. The Dynamics of a General Purpose Technology in a Research and Assimilation Model

    NARCIS (Netherlands)

    Nahuis, R.

    1998-01-01

    Where is the productivity growth from the IT revolution? Why did the skill premium rise sharply in the early eighties? Were these phenomena related? This paper examines these questions in a general equilibrium model of growth. Technological progress in firms is driven by research aimed at improving

  2. An Assessment of Ada’s Suitability in General Purpose Programming Applications.

    Science.gov (United States)

    1985-09-01

    (The record excerpt contains only fragments of a sample Ada library-catalogue program and its interactive output, listing book records such as "Anna Karenina" and "War and Peace" by Tolstoy, rather than abstract text.)

  3. Atomicrex—a general purpose tool for the construction of atomic interaction models

    Science.gov (United States)

    Stukowski, Alexander; Fransson, Erik; Mock, Markus; Erhart, Paul

    2017-07-01

    We introduce atomicrex, an open-source code for constructing interatomic potentials as well as more general types of atomic-scale models. Such effective models are required to simulate extended materials structures comprising many thousands of atoms or more, because electronic structure methods become computationally too expensive at this scale. atomicrex covers a wide range of interatomic potential types and fulfills many needs in atomistic model development. As inputs, it supports experimental property values as well as ab initio energies and forces, to which models can be fitted using various optimization algorithms. The open architecture of atomicrex allows it to be used in custom model development scenarios beyond classical interatomic potentials while thanks to its Python interface it can be readily integrated e.g., with electronic structure calculations or machine learning algorithms.
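
    A toy sketch of the kind of fit that atomicrex automates, here done directly with SciPy rather than through atomicrex: a least-squares adjustment of Lennard-Jones pair-potential parameters to hypothetical reference dimer energies. The data, starting values and bounds are invented for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        def lj_energy(r, eps, sigma):
            """Lennard-Jones pair energy for separations r."""
            return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

        # Hypothetical reference data (e.g. ab initio dimer energies on a grid of separations).
        r_ref = np.linspace(0.9, 2.5, 20)
        e_ref = lj_energy(r_ref, eps=0.24, sigma=1.0) \
              + 0.001 * np.random.default_rng(0).normal(size=r_ref.size)

        fit = least_squares(lambda p: lj_energy(r_ref, *p) - e_ref,
                            x0=[0.1, 0.9], bounds=([0.0, 0.5], [1.0, 2.0]))
        print("fitted eps, sigma:", fit.x)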

  4. Factors Affecting Preservice Teachers' Computer Use for General Purposes: Implications for Computer Training Courses

    Science.gov (United States)

    Zogheib, Salah

    2014-01-01

    As the majority of educational research has focused on preservice teachers' computer use for "educational purposes," the question remains: Do preservice teachers use computer technology for daily life activities and encounters? And do preservice teachers' personality traits and motivational beliefs related to computer training provided…

  5. A programming environment to control switching networks based on STC104 packet routing chip

    Science.gov (United States)

    Legrand, I. C.; Schwendicke, U.; Leich, H.; Medinnis, M.; Koehler, A.; Wegner, P.; Sulanke, K.; Dippel, R.; Gellrich, A.

    1997-02-01

    The software environment used to control a large switching architecture based on the SGS-Thomson STC104 (an asynchronous 32-way dynamic packet routing chip) is presented. We are evaluating this switching technology for large-scale, real-time parallel systems. A Graphical User Interface (GUI), written as a multi-threaded Java application, allows the user to set the switch configuration and to continuously monitor the state of each link. This GUI connects to a multi-threaded server via TCP/IP sockets. The server runs on a PC-Linux system and implements the virtual channel protocol when communicating with the STC104 switching units over the Data Strobe link or the VME bus. Linux I/O drivers to control the Data Strobe link parallel adaptor (STC101) were developed. For each client the server creates a new thread and allocates a new socket for communication. The Java code of the GUI may be transferred to any client using the HTTP protocol, providing a user-friendly, platform-independent interface to the system with real-time monitoring.
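
    The thread-per-client pattern described above (a dedicated thread and socket for every connected GUI) can be sketched in a few lines of Python; this only illustrates the pattern, not the original Java/Linux code, and the port and reply format are invented.

        import socket
        import threading

        def handle_client(conn, addr):
            """One thread per connected client: answer monitoring requests on its own socket."""
            with conn:
                while (data := conn.recv(1024)):
                    conn.sendall(b"status:ok " + data)       # placeholder for link-status replies

        def serve(host="0.0.0.0", port=5000):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind((host, port))
                srv.listen()
                while True:
                    conn, addr = srv.accept()                # a new socket per client ...
                    threading.Thread(target=handle_client,
                                     args=(conn, addr), daemon=True).start()  # ... in its own thread

        if __name__ == "__main__":
            serve()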

  6. A GPU-accelerated adaptive discontinuous Galerkin method for level set equation

    Science.gov (United States)

    Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.

    2016-01-01

    This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.

  7. Adaptive Multi-Thread Resilient Video Coding Scheme for Internet Channel%一种面向Internet信道的自适应多线程视频抗误码策略

    Institute of Scientific and Technical Information of China (English)

    马仲华; 余松煜

    2002-01-01

    Starting from the fluctuation characteristics of a typical time-varying Internet channel, and based on the latest H.263++ low-bit-rate video compression standard, an adaptive multi-thread redundant error-resilience scheme that exploits the reference-frame coding mode is proposed. Channel simulation results show that, at the cost of a modest increase in bit rate, the adaptive multi-thread redundancy scheme provides stronger resistance to packet loss and is more robust and more adaptable to the network than conventional algorithms.

  8. Design and implement of memory pool under multi-thread of Linux%一种Linux多线程应用下内存池的设计与实现

    Institute of Scientific and Technical Information of China (English)

    许健; 于鸿洋

    2012-01-01

    This paper studies the details of memory-block acquisition and allocation, block size, memory release, and safe handling in a multi-threaded environment, ensuring that the pool works quickly and reliably under multi-threading. An array-based linked-list mechanism is adopted to improve the block lookup algorithm of the memory pool, keeping its time complexity at a stable O(1) and avoiding the degradation of block-acquisition performance that occurs in traditional memory pools when too many threads request blocks. An internal management thread dynamically adds or removes idle memory blocks. Experimental results show that, compared with traditional memory allocation, the improved memory pool has lower overhead and better efficiency.
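
    A minimal Python sketch of the idea (the original is a C implementation for Linux): a fixed-size block pool whose free list is an array-backed stack, so acquire and release are O(1), with a lock standing in for the thread-safety handling discussed above. Class and method names are hypothetical.

        import threading

        class BlockPool:
            """Fixed-size block pool with an array-based free list: O(1) acquire/release."""
            def __init__(self, block_size=4096, count=256):
                self._blocks = [bytearray(block_size) for _ in range(count)]
                self._free = list(range(count))      # indices of free blocks (used as a stack)
                self._lock = threading.Lock()        # protects the free list across threads

            def acquire(self):
                with self._lock:
                    if not self._free:               # grow on demand instead of failing
                        self._blocks.append(bytearray(len(self._blocks[0])))
                        return len(self._blocks) - 1
                    return self._free.pop()

            def release(self, idx):
                with self._lock:
                    self._free.append(idx)

            def view(self, idx):
                return self._blocks[idx]             # caller fills the block in place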

  9. 一种基于综合历史信息的SMT结构分支预测算法%An Intergrated History Information Branch Prediction Policy for Simultaneous Multi-threading Architecture

    Institute of Scientific and Technical Information of China (English)

    王晶; 樊晓桠; 叶曾

    2008-01-01

    In an SMT architecture, instructions can be fetched from several threads simultaneously. When the number of fetchable threads is small, branch prediction is at least as important as in a superscalar processor, because the cost of a branch misprediction is even higher in an SMT architecture. The key factors affecting prediction accuracy are how the history information is organised and how it is updated. This paper analyses the influence of these factors on branch prediction accuracy by simulation and proposes IHBP, a branch prediction algorithm based on integrated history information, which combines global and local information to predict branches. It alleviates the problems of stale and confused prediction information in SMT architectures and makes the prediction accuracy more robust. Simulation results show that, on an 8-thread configuration, the algorithm improves branch prediction accuracy by 8.5% and 2.3% compared with the widely used Gshare and PAg algorithms, respectively.
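
    To illustrate the general idea of combining global and local branch history (this is a generic two-level scheme sketched in Python, not the IHBP algorithm itself), a toy predictor can index saturating 2-bit counters by a mix of the branch address, a global history register and a per-branch local history:

        class CombinedHistoryPredictor:
            """Toy predictor indexing 2-bit counters by both global and per-branch history."""
            def __init__(self, hist_bits=8):
                self.mask = (1 << hist_bits) - 1
                self.global_hist = 0
                self.local_hist = {}                 # per-branch (per-PC) history
                self.counters = {}                   # saturating 2-bit counters

            def _index(self, pc):
                local = self.local_hist.get(pc, 0)
                return (pc & self.mask) ^ self.global_hist ^ local   # mix both histories

            def predict(self, pc):
                return self.counters.get(self._index(pc), 1) >= 2    # predict taken if >= 2

            def update(self, pc, taken):
                k = self._index(pc)
                c = self.counters.get(k, 1)
                self.counters[k] = min(3, c + 1) if taken else max(0, c - 1)
                self.global_hist = ((self.global_hist << 1) | int(taken)) & self.mask
                self.local_hist[pc] = ((self.local_hist.get(pc, 0) << 1) | int(taken)) & self.mask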

  10. Multi-threaded Optimization of Complex Matrix Multiplication on Loongson-3A Architecture%龙芯3A上复数矩阵乘法的多线程优化

    Institute of Scientific and Technical Information of China (English)

    陈强; 何颂颂; 王坤

    2011-01-01

    The BLAS library contains two classes of routines: complex and real. The matrix multiplication routine is the core of BLAS, and many other BLAS routines call it in their implementations. Taking the characteristics of the Loongson-3A architecture into account, and based on an analysis of the matrix multiplication computation, this paper first blocks the matrices and then partitions the blocks into tasks, which reduces the amount of data copying and improves the reuse of copied data; loop unrolling, instruction scheduling and data blocking are then used to optimise the computation of the worker threads. The multi-threaded speed of the optimised ZGEMM routine is twice that of the ATLAS library.
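
    The "block first, then partition the blocks across threads" strategy can be sketched in Python with NumPy, each worker owning a disjoint band of the result matrix (NumPy releases the GIL inside the underlying BLAS call, so the threads genuinely overlap). This only illustrates the decomposition, not the Loongson-3A assembly kernels.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def blocked_matmul(A, B, block=128, workers=4):
            """C = A @ B computed block-row by block-row on a thread pool."""
            n = A.shape[0]
            C = np.zeros((n, B.shape[1]), dtype=np.result_type(A, B))

            def row_band(i0):
                i1 = min(i0 + block, n)
                C[i0:i1] = A[i0:i1] @ B              # each thread owns a disjoint band of C

            with ThreadPoolExecutor(max_workers=workers) as pool:
                list(pool.map(row_band, range(0, n, block)))
            return C

        # usage on complex matrices, as in the ZGEMM case discussed above
        A = np.random.default_rng(0).standard_normal((512, 512)) * (1 + 1j)
        B = np.random.default_rng(1).standard_normal((512, 512)) * (1 - 1j)
        print(np.allclose(blocked_matmul(A, B), A @ B))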

  11. Delphi多线程在分布式实时多任务系统中的应用%The Application of Delphi Multi-thread for Distributed Real-time Multi-task System

    Institute of Scientific and Technical Information of China (English)

    殷苌茗; 李峰; 陈焕文

    2000-01-01

    This paper discusses several issues concerning Delphi multi-threading, describes how Delphi threads are used, and explains the application of Delphi multi-threading in the development of distributed real-time multi-task systems. Finally, the structure and functions of a concrete distributed real-time multi-task system based on Delphi multi-threading are described.

  12. 滑动轴承混合润滑多线程并行计算数值方法%Mixed lubrication study of multi-thread parallel computing algorithm for journal bearings

    Institute of Scientific and Technical Information of China (English)

    韩彦峰; 王家序; 周广武; 肖科

    2016-01-01

    Mixed elastohydrodynamic lubrication (mixed-EHL) problems arising from the contact between journal and bearing are complex and time-consuming, and require high-speed computation. A faster mixed-EHL parallel computing algorithm based on OpenMP is therefore proposed, which is more efficient than the ordinary SOR (successive over-relaxation) method. In this method the Reynolds-equation computing domain is separated into two independent sub-domains, which avoids data races among the CPU computing threads. A group of simulations evaluates the effects of core count, mesh size and workstation configuration on the parallel performance. The results show that parallel computing significantly improves the mixed-EHL computing speed, but the speed-up has a non-linear relationship with the number of cores, its increment gradually diminishing as cores are added; moreover, compared with CPU cache and RAM, the CPU clock rate has the largest effect on parallel performance.
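
    The two independent sub-domains described above correspond to the classic red-black (checkerboard) splitting: points of one colour depend only on points of the other colour, so each colour can be updated without data races. A minimal NumPy sketch of one red-black SOR sweep for a Poisson-like model problem (not the authors' Reynolds-equation solver):

        import numpy as np

        def red_black_sor_sweep(p, rhs, h, omega=1.7):
            """One SOR sweep over 'red' then 'black' points; same-colour points are independent."""
            for colour in (0, 1):
                for i in range(1, p.shape[0] - 1):
                    j0 = 1 + (i + colour) % 2        # checkerboard offset for this row
                    j = np.arange(j0, p.shape[1] - 1, 2)
                    gs = 0.25 * (p[i - 1, j] + p[i + 1, j] + p[i, j - 1] + p[i, j + 1]
                                 - h * h * rhs[i, j])
                    p[i, j] = (1 - omega) * p[i, j] + omega * gs
            return p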

  13. Software Architecture Application Analysis on Automation Home System Based on Multi-threads%基于多线程的智能家居控制软件应用分析

    Institute of Scientific and Technical Information of China (English)

    袁晓磊; 彭钢; 马瑞; 张福东; 李帅华; 李剑锋

    2014-01-01

    This paper introduces the overall architecture of a home automation control system and of the Linux multi-tasking operating system, analyses the system functions in terms of the main control thread, the 433 MHz radio-frequency device-status reception thread, the 315 MHz radio-frequency security-alarm reception thread, the UDP control-command reception and device-status update thread, the security-alarm/SMS control thread, and the protection of data shared among the threads, and illustrates with examples the application and effects of this software in home automation control.

  14. Cafe Variome : General-Purpose Software for Making Genotype-Phenotype Data Discoverable in Restricted or Open Access Contexts

    NARCIS (Netherlands)

    Lancaster, Owen; Beck, Tim; Atlan, David; Swertz, Morris; Thangavelu, Dhiwagaran; Veal, Colin; Dalgleish, Raymond; Brookes, Anthony J.

    2015-01-01

    Biomedical data sharing is desirable, but problematic. Data "discovery" approaches-which establish the existence rather than the substance of data-precisely connect data owners with data seekers, and thereby promote data sharing. Cafe Variome (http://www.cafevariome.org) was therefore designed to

  15. PolarBRDF: A general purpose Python package for visualization and quantitative analysis of multi-angular remote sensing measurements

    Science.gov (United States)

    Singh, Manoj K.; Gautam, Ritesh; Gatebe, Charles K.; Poudyal, Rajesh

    2016-11-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionalities for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR). Our visualization program also provides functionalities for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wildfire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information of the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.
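
    The polar displays produced by PolarBRDF map the view zenith angle to radius and the relative azimuth to angle. A minimal matplotlib sketch of that type of figure with synthetic reflectance data (this is not the PolarBRDF API, only the kind of plot it automates):

        import numpy as np
        import matplotlib.pyplot as plt

        # Synthetic reflectance on a (view azimuth, view zenith) grid with a mock hotspot.
        azimuth = np.radians(np.arange(0, 361, 5))
        zenith = np.arange(0, 61, 2)
        A, Z = np.meshgrid(azimuth, zenith)
        refl = 0.25 + 0.15 * np.exp(-(((Z - 30) / 15) ** 2 + ((A - np.pi) / 0.5) ** 2))

        fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
        mesh = ax.pcolormesh(A, Z, refl, shading="auto")   # angle = azimuth, radius = zenith
        ax.set_theta_zero_location("N")
        fig.colorbar(mesh, ax=ax, label="BRF")
        plt.show()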

  16. General Purpose Digital Signal Processing VME-Module for 1-Turn Delay Feedback Systems of the CERN Accelerator Chain

    CERN Document Server

    Rossi, V

    2010-01-01

    In the framework of the LHC project and the modifications of the SPS as its injector, the concept has been developed of a global digital signal processing unit (DSPU) that implements in numerical form the architecture of low-level RF systems. Since 2002 a Digital Notch Filter with programmable delay for the SPS Transverse Damper has been fully operational with fixed target and LHC-type beams circulating in the SPS. The approach, using an FPGA as core for the low-level system, is very flexible and allows the upgrade of the signal processing by modification of the original firmware. The development for the LHC 1-Turn delay Feedback has benefited from the same methodology and similar technology. The achieved performances of the LHC 1-Turn delay Feedback are compared with project requirements. The project flow for the recent LHC 1-T Feedback allows synergy with several other applications. The CERN PS Transverse Damper DSPU, with automatic delay compensation adapting the loop delay to the time of flight of the par...

  17. A general purpose Fortran 90 electronic structure program for conjugated systems using Pariser-Parr-Pople model

    CERN Document Server

    Sony, Priya

    2009-01-01

    The Pariser-Parr-Pople (P-P-P) model Hamiltonian has been used extensively and successfully over the years to calculate the electronic structure and optical properties of π-conjugated systems. In spite of the tremendous successes of ab initio theories of the electronic structure of large systems, the P-P-P model continues to be popular because of a recent resurgence of interest in the physics of π-conjugated polymers, fullerenes and other carbon-based materials. In this paper, we describe a Fortran 90 computer program developed by us which uses the P-P-P model Hamiltonian not only to solve the Hartree-Fock (HF) equation for closed- and open-shell systems, but also to perform correlation calculations at the level of single configuration interactions (SCI) for molecular systems. Moreover, the code is capable of computing the linear optical absorption spectrum at various levels, such as the tight-binding (TB) Hueckel model, HF and SCI, and of calculating the band structure using the Hueckel model. The code ...
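
    The simplest level supported by the code, the tight-binding (Hueckel) model, amounts to diagonalising a nearest-neighbour hopping Hamiltonian. A small Python sketch for a linear chain (illustrative only; the hopping value and chain length are arbitrary, and this is not the Fortran program itself):

        import numpy as np

        def hueckel_chain(n_sites, t=-2.4, periodic=False):
            """Hueckel/tight-binding Hamiltonian of an n-site chain; returns orbital energies (eV)."""
            H = np.zeros((n_sites, n_sites))
            for i in range(n_sites - 1):
                H[i, i + 1] = H[i + 1, i] = t        # nearest-neighbour hopping (resonance) integral
            if periodic:
                H[0, -1] = H[-1, 0] = t
            return np.linalg.eigvalsh(H)             # sorted orbital energies

        e = hueckel_chain(8)
        homo, lumo = e[3], e[4]                      # 8 pi electrons fill the 4 lowest orbitals
        print("Hueckel optical gap ~", round(lumo - homo, 3), "eV")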

  18. SHRIF, a General-Purpose System for Heuristic Retrieval of Information and Facts, Applied to Medical Knowledge Processing.

    Science.gov (United States)

    Findler, Nicholas V.; And Others

    1992-01-01

    Describes SHRIF, a System for Heuristic Retrieval of Information and Facts, and the medical knowledge base that was used in its development. Highlights include design decisions; the user-machine interface, including the language processor; and the organization of the knowledge base in an artificial intelligence (AI) project like this one. (57…

  19. Polarbrdf: A General Purpose Python Package for Visualization Quantitative Analysis of Multi-Angular Remote Sensing Measurements

    Science.gov (United States)

    Singh, Manoj K.; Gautam, Ritesh; Gatebe, Charles K.; Poudyal, Rajesh

    2016-01-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionalities for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR). Our visualization program also provides functionalities for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wild fire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information of the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.

  20. The Development of a General Purpose ARM-based Processing Unit for the TileCal sROD

    CERN Multimedia

    Cox, Mitchell A

    2014-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface t...

  1. Multi-point injection: A general purpose delivery system for treatment and containment of hazardous and radiological waste

    Energy Technology Data Exchange (ETDEWEB)

    Kauschinger, J.L. [Ground Environmental Services, Alpharetta, GA (United States); Kubarewicz, J. [Jacobs Engineering, Oak Ridge, TN (United States); Van Hoesen, S.D. [Lockheed Martin Energy Systems, Oak Ridge, TN (United States)

    1997-12-31

    The multi-point injection (MPI) technology is a proprietary jetting process for the in situ delivery of various agents to treat radiological and/or chemical wastes. A wide variety of waste forms can be treated, ranging from heterogeneous solid waste dumped into shallow burial trenches, to bottom sludge (heel material) inside underground tanks, to contaminated soils with widely varying composition (gravel, silts/clays, soft rock). The robustness of the MPI system is linked to the use of high-speed mono-directional jets to deliver various types of agents for a variety of applications, such as pretreatment of waste prior to in situ vitrification, solidification of waste to create low-conductivity monoliths, oxidants for in situ destruction of organic waste, and grouts for creating barriers (vertical, inclined, and bottom seals). The only strict limitation placed upon the MPI process is that the material must be pumpable under high pressure. This paper describes the procedures used to inject ordinary grout to form solidified monoliths of solid wastes.

  2. S2O - A software tool for integrating research data from general purpose statistic software into electronic data capture systems.

    Science.gov (United States)

    Bruland, Philipp; Dugas, Martin

    2017-01-07

    Data capture for clinical registries or pilot studies is often performed in spreadsheet-based applications like Microsoft Excel or IBM SPSS. Usually, data are transferred into statistics software, such as SAS, R or IBM SPSS Statistics, for analysis afterwards. Spreadsheet-based solutions suffer from several drawbacks: it is generally not possible to ensure sufficient rights and role management, and it is not traced who changed which data, when, and why. Therefore, such systems are not able to comply with regulatory requirements for electronic data capture in clinical trials. In contrast, Electronic Data Capture (EDC) software enables reliable, secure and auditable collection of data. In this regard, most EDC vendors support the CDISC ODM standard to define, communicate and archive clinical trial meta- and patient data. Advantages of EDC systems are support for multi-user and multicenter clinical trials as well as auditable data. Migration from spreadsheet-based data collection to EDC systems is labor-intensive and time-consuming at present. Hence, the objectives of this research work are to develop a mapping model, to implement a converter between the IBM SPSS format and the CDISC ODM standard, and to evaluate this approach regarding syntactic and semantic correctness. A mapping model between IBM SPSS and CDISC ODM data structures was developed. SPSS variables and patient values can be mapped and converted into ODM. Statistical and display attributes from SPSS do not correspond to any ODM elements; study-related ODM elements are not available in SPSS. The S2O converting tool was implemented as a command-line tool using the SPSS internal Java plugin. Syntactic and semantic correctness was validated with different ODM tools and by reverse transformation from ODM into the SPSS format. Clinical data values were also successfully transformed into the ODM structure. Transformation between the IBM SPSS spreadsheet format and the ODM standard for definition and exchange of trial data is feasible. S2O facilitates migration from Excel- or SPSS-based data collections towards reliable EDC systems. Thereby, advantages of EDC systems, such as a reliable software architecture for secure and traceable data collection and in particular compliance with regulatory requirements, become achievable.

  3. Nutrition and Hydration Status of Aircrew Members Consuming The Food Packet, Survival, General Purpose, Improved During A Simulated Survival Scenario

    Science.gov (United States)

    1992-11-01

    …cell destruction (hemolysis) (42). One of the causes could be mechanical trauma inflicted on the capillaries of the feet from marching or running… the result of exercise-induced skeletal muscle trauma occurring during the FTX (45). Blood lipid values were all within accepted ranges. Cholesterol… 29. How often were you THIRSTY during the field exercise? Fill in one oval. …

  4. DACC program cost and work breakdown structure-dictionary. General purpose aft cargo carrier study, volume 2

    Science.gov (United States)

    1985-01-01

    Results of detailed cost estimates and economic analysis performed on the updated 201 configuration of the dedicated Aft Cargo Carrier (DACC) are given. The objective of this economic analysis is to provide the National Aeronautics and Space Administration (NASA) with information on the economics of using the DACC on the Space Transportation System (STS). The detailed cost estimates for the DACC are presented by a work breakdown structure (WBS) to ensure that all elements of cost are considered in the economic analysis and related subsystem trades. Costs reported by WBS provide NASA with a basis for comparing competing designs and provide detailed cost information that can be used to forecast phase C/D planning for new projects or programs derived from preliminary conceptual design studies. The scope covers all STS and STS/DACC launch vehicle cost impacts for delivering an orbital transfer vehicle to a 120 NM low Earth orbit (LEO).

  5. 15 CFR 744.17 - Restrictions on certain exports and reexports of general purpose microprocessors for “military...

    Science.gov (United States)

    2010-01-01

    ... reexport commodities described in ECCN 3A991.a.1 on the CCL (“microprocessor microcircuits”, “microcomputer... amendment to the EAR, that a license is required for export or reexport of items described in ECCN 3A991.a.1...

  6. The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD

    CERN Document Server

    Cox, Mitchell Arij; The ATLAS collaboration; Mellado Garcia, Bruce Rafael

    2015-01-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface ...

  7. Federal Specification MMM-A-1617B for Adhesive, Rubber-Base, General-Purpose HAP-Free Replacement

    Science.gov (United States)

    2011-05-01

    (Fragment of a solvent-content table comparing the alternative adhesives 3M-4491 and 3M-1099, which list acetone and cyclohexanone as solvents.) …foams, plastics, vinyl extrusions, and sheeting. This formulation contains acetone and cyclohexanone (table 3) (18), which are both non-HAP solvents; however, cyclohexanone is a VOC (10, 11). 3M Scotch-Weld (alternative) Nitrile High Performance Plastic Adhesive 1099 (3M-1099) is a medium…

  8. PolarBRDF: A general purpose Python package for visualization and quantitative analysis of multi-angular remote sensing measurements

    Science.gov (United States)

    Poudyal, R.; Singh, M.; Gautam, R.; Gatebe, C. K.

    2016-12-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionalities for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR)- http://car.gsfc.nasa.gov/. Our visualization program also provides functionalities for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wildfire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information of the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.

  9. Simpler methods do it better: Success of Recurrence Quantification Analysis as a general purpose data analysis tool

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Charles L., E-mail: cwebber@lumc.ed [Department of Cell and Molecular Physiology, Loyola University Medical Center, Maywood, IL (United States); Marwan, Norbert, E-mail: marwan@pik-potsdam.d [Potsdam Institute for Climate Impact Research (PIK), 14412 Potsdam (Germany); Facchini, Angelo, E-mail: a.facchini@unisi.i [Center for the Study of Complex Systems and Department of Information Engineering, University of Siena, 53100 Siena (Italy); Giuliani, Alessandro, E-mail: alessandro.giuliani@iss.i [Environment and Health Department, Istituto Superiore di Sanita, Roma (Italy)

    2009-10-05

    Over the last decade, Recurrence Quantification Analysis (RQA) has become a new standard tool in the toolbox of nonlinear methodologies. In this Letter we trace the history and utility of this powerful tool and cite some common applications. RQA continues to wend its way into numerous and diverse fields of study.

  10. General Purpose Force Capability; the Challenge of Versatility and Achieving Balance Along the Widest Possible Spectrum of Conflict

    Science.gov (United States)

    2010-04-01

    …conflict and managing competing resources. The 2006 Irregular Warfare JOC defined Irregular Warfare as: "A violent struggle…" …general war… with the fall of the Berlin Wall and the end of the Soviet conventional… the U.S. struggled to anticipate and failed to…

  11. Digital system upset. The effects of simulated lightning-induced transients on a general-purpose microprocessor

    Science.gov (United States)

    Belcastro, C. M.

    1983-01-01

    Flight-critical, computer-based control systems designed for advanced aircraft must exhibit ultrareliable performance in lightning-charged environments. Digital system upset can occur as a result of lightning-induced electrical transients, and a methodology was developed to test specific digital systems for upset susceptibility. Initial upset data indicate that there are several distinct upset modes and that the occurrence of upset is related to the relative synchronization of the transient input with the processing state of the digital system. A large upset test database will aid in the formulation and verification of the analytical upset reliability modeling techniques which are being developed.

  12. Polarbrdf: A General Purpose Python Package for Visualization Quantitative Analysis of Multi-Angular Remote Sensing Measurements

    Science.gov (United States)

    Singh, Manoj K.; Gautam, Ritesh; Gatebe, Charles K.; Poudyal, Rajesh

    2016-01-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionalities for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR). Our visualization program also provides functionalities for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wild fire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information of the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.

  13. A general-purpose framework to simulate musculoskeletal system of human body: using a motion tracking approach.

    Science.gov (United States)

    Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad

    2016-02-01

    Computation of the muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. The problem is underdetermined, so a proper optimization is required to calculate muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. To this end, the forward-dynamics equations of a multibody system, which represents the skeletal system of the human body model, are derived using the Lagrange-Euler formulation. Next, muscle contraction dynamics is added to this model and the forward dynamics of an arbitrary musculoskeletal system is obtained. For optimization purposes, the obtained model is used in a computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sport exercise, the biceps curl, is simulated by using this algorithm and the validity of the obtained results is evaluated via EMG signals.
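
    The muscle-redundancy problem mentioned above is usually resolved by optimization. The sketch below shows a minimal static-optimization step, not the paper's computed muscle control algorithm: a required joint torque is distributed over redundant muscles by minimizing the sum of squared activations. The moment arms, maximum forces and target torque are made-up illustrative numbers.

```python
# Minimal static-optimization sketch (NOT the computed-muscle-control algorithm
# of the paper): distribute a required elbow torque over redundant "muscles"
# by minimizing the sum of squared activations.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.045, 0.030, 0.020])      # moment arms (m), hypothetical
f_max = np.array([600.0, 400.0, 300.0])  # max isometric forces (N), hypothetical
tau_required = 20.0                      # joint torque to reproduce (N*m)

def cost(a):
    return np.sum(a ** 2)                # "effort" criterion

torque_balance = {"type": "eq",
                  "fun": lambda a: r @ (a * f_max) - tau_required}
bounds = [(0.0, 1.0)] * 3                # activations between 0 and 1

res = minimize(cost, x0=np.full(3, 0.5), bounds=bounds,
               constraints=[torque_balance], method="SLSQP")
print("activations:", res.x, "muscle forces (N):", res.x * f_max)
```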

  14. Using Low-Level Architectural Features for Configuration InfoSec in a General-Purpose Self-Configurable System

    Directory of Open Access Journals (Sweden)

    Nicholas J. Macias

    2009-12-01

    Full Text Available Unique characteristics of biological systems are described, and similarities are made to certain computing architectures. The security challenges posed by these characteristics are discussed. A method of securely isolating portions of a design using introspective capabilities of a fine-grain self-configurable device is presented. Experimental results are discussed, and plans for future work are given.

  15. Cafe Variome : General-Purpose Software for Making Genotype-Phenotype Data Discoverable in Restricted or Open Access Contexts

    NARCIS (Netherlands)

    Lancaster, Owen; Beck, Tim; Atlan, David; Swertz, Morris; Thangavelu, Dhiwagaran; Veal, Colin; Dalgleish, Raymond; Brookes, Anthony J.

    2015-01-01

    Biomedical data sharing is desirable, but problematic. Data "discovery" approaches-which establish the existence rather than the substance of data-precisely connect data owners with data seekers, and thereby promote data sharing. Cafe Variome (http://www.cafevariome.org) was therefore designed to pr

  16. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    Science.gov (United States)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
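
    The abstract highlights an optimized Voigt function at the heart of the line-by-line cross-section computation. As background, a common reference way to evaluate the Voigt profile in Python is via the Faddeeva function; the sketch below shows that standard construction, not GARLIC's optimized two-grid algorithm.

```python
# Standard Voigt-profile evaluation via the Faddeeva function (scipy.special.wofz).
# This is a generic reference implementation, not GARLIC's optimized algorithm.
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile centred at 0: convolution of a Gaussian (std sigma)
    and a Lorentzian (HWHM gamma), normalised to unit area."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Example: profile for equal Gaussian and Lorentzian widths
x = np.linspace(-5.0, 5.0, 1001)
profile = voigt(x, sigma=1.0, gamma=1.0)
print("peak value:", profile.max())
```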

  17. Pd-PEPPSI-IHept(Cl) : A General-Purpose, Highly Reactive Catalyst for the Selective Coupling of Secondary Alkyl Organozincs.

    Science.gov (United States)

    Atwater, Bruce; Chandrasoma, Nalin; Mitchell, David; Rodriguez, Michael J; Organ, Michael G

    2016-10-01

    Dichloro[1,3-bis(2,6-di-4-heptylphenyl)imidazol-2-ylidene](3-chloropyridyl)palladium(II) (Pd-PEPPSI-IHept(Cl) ), a new, very bulky yet flexible Pd-N-heterocyclic carbene (NHC) complex has been evaluated in the cross-coupling of secondary alkylzinc reactants with a wide variety of oxidative addition partners in high yields and excellent selectivity. The desired, direct reductive elimination branched products were obtained with no sign of migratory insertion across electron-rich and electron-poor aromatics and all forms of heteroaromatics (five and six membered). Impressively, there is no impact of substituents at the site of reductive elimination (i.e., ortho or even di-ortho), which has not yet been demonstrated by another catalyst system to date.

  18. Environmental Assessment: Construction and Operation of Defense Logistics Agency General Purpose Warehouse of Consolidation, Containerization and Palletization

    Science.gov (United States)

    2007-08-16

    hazardous characteristic due to chlordane, underlying hazardous constituents (UHCs) must also be evaluated. This is because RCRA LDRs restrict disposal...until not only the chlordane meets LDR treatment standards, but also the UHCs. UHCs are defined in 40 CFR 268.2 as "any constituent listed in 40 CFR...treated to 0.26 mg/kg total chlordane before land disposal. (In addition, UHCs must also meet standards in 40 CFR 268.48.) Example

  19. A microarray platform-independent classification tool for cell of origin class allows comparative analysis of gene expression in diffuse large B-cell lymphoma.

    Directory of Open Access Journals (Sweden)

    Matthew A Care

    Full Text Available Cell of origin classification of diffuse large B-cell lymphoma (DLBCL identifies subsets with biological and clinical significance. Despite the established nature of the classification existing studies display variability in classifier implementation, and a comparative analysis across multiple data sets is lacking. Here we describe the validation of a cell of origin classifier for DLBCL, based on balanced voting between 4 machine-learning tools: the DLBCL automatic classifier (DAC. This shows superior survival separation for assigned Activated B-cell (ABC and Germinal Center B-cell (GCB DLBCL classes relative to a range of other classifiers. DAC is effective on data derived from multiple microarray platforms and formalin fixed paraffin embedded samples and is parsimonious, using 20 classifier genes. We use DAC to perform a comparative analysis of gene expression in 10 data sets (2030 cases. We generate ranked meta-profiles of genes showing consistent class-association using ≥6 data sets as a cut-off: ABC (414 genes and GCB (415 genes. The transcription factor ZBTB32 emerges as the most consistent and differentially expressed gene in ABC-DLBCL while other transcription factors such as ARID3A, BATF, and TCF4 are also amongst the 24 genes associated with this class in all datasets. Analysis of enrichment of 12323 gene signatures against meta-profiles and all data sets individually confirms consistent associations with signatures of molecular pathways, chromosomal cytobands, and transcription factor binding sites. We provide DAC as an open access Windows application, and the accompanying meta-analyses as a resource.
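
    The classifier described above is built on balanced voting between four machine-learning tools over 20 classifier genes. The sketch below shows the general idea of a majority-vote ensemble in scikit-learn; it is not the DAC implementation, and the expression matrix and ABC/GCB labels are synthetic placeholders.

```python
# Generic balanced-voting sketch, illustrating majority voting over several
# classifiers. This is NOT the DAC tool itself; the data below are synthetic.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))            # 200 cases x 20 classifier genes
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = "ABC-like", 0 = "GCB-like" (toy labels)

vote = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("svm", SVC()),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier()),
], voting="hard")                          # simple majority of the four tools

vote.fit(X, y)
print("training accuracy:", vote.score(X, y))
```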

  20. Thread Algebra with Multi-Level Strategic Interleaving

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2004-01-01

    In a previous paper, we developed an algebraic theory of threads and multi-threads based on strategic interleaving. This theory includes a number of plausible interleaving strategies on thread vectors. The strategic interleaving of a thread vector constitutes a multi-thread. Several multi-threads ma

  1. A Fault Detection Mechanism in a Data-flow Scheduled Multithreaded Processor

    NARCIS (Netherlands)

    Fu, J.; Yang, Q.; Poss, R.; Jesshope, C.R.; Zhang, C.

    2014-01-01

    This paper designs and implements Redundant Multi-Threading (RMT) in a Data-flow scheduled MultiThreaded (DMT) multicore processor, called Data-flow scheduled Redundant Multi-Threading (DRMT). Meanwhile, it presents Asynchronous Output Comparison (AOC) for RMT techniques to avoid fault detection

  2. Many-core technologies: The move to energy-efficient, high-throughput x86 computing (TFLOPS on a chip)

    CERN Document Server

    CERN. Geneva

    2012-01-01

    With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC) users can entertain a combination of different hardware and software parallel architectures and programming environments. Those technologies range from vectorization and SIMD computation over shared memory multi-threading (e.g. OpenMP) to distributed memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to it from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced new Intel(r) Many Integrated Core (MIC) architecture for highly-parallel workloads and general purpose, energy efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...

  3. Introduction

    Science.gov (United States)

    Diniz, Pedro C.; Juurlink, Ben; Darte, Alain; Karl, Wolfgang

    This topic deals with architecture design and compilation for high performance systems. The areas of interest range from microprocessors to large-scale parallel machines; from general-purpose platforms to specialized hardware (e.g., graphic coprocessors, low-power embedded systems); and from hardware design to compiler technology. On the compilation side, topics of interest include programmer productivity issues, concurrent and/or sequential language aspects, program analysis, transformation, automatic discovery and/or management of parallelism at all levels, and the interaction between the compiler and the rest of the system. On the architecture side, the scope spans system architectures, processor micro-architecture, memory hierarchy, and multi-threading, and the impact of emerging trends.

  4. General-Purpose Components Implement USB-Based Data-Acquisition System%通用组件实现基于USB的数据采集系统

    Institute of Scientific and Technical Information of China (English)

    V Gopalakrishnan

    2008-01-01

    Figure 1 shows a design example of a USB-based data-acquisition system. The system uses a serial analog-to-digital converter built from general-purpose components such as D flip-flops, binary counters, and shift registers. With DLP Design's DLP-USB245M FIFO-to-USB converter module, the system can communicate with peripheral devices through the host's USB port. You can write your own program to read and write data through this module, or download the free test application from the DLP website. In addition, National Instruments' LabVIEW serial read/write virtual instrument (VI) can also be downloaded.

  5. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    Science.gov (United States)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.

  6. Louisiana State University System General Purpose Financial Statements and Independent Auditor's Reports as of and for the Year Ended June 30, 1998, with Supplemental Information Schedules.

    Science.gov (United States)

    Louisiana State Legislative Auditor, Baton Rouge.

    This report presents results of a financial audit of the Louisiana State University (LSU) system. The auditors also rendered opinions on financial statements of separate, incorporated foundations which oversee the investment of various university endowments, the financial statements for which were prepared by other auditors. An accompanying letter…

  7. Initial Performance Studies of a General-Purpose Detector for Multi-TeV Physics at a 100 TeV pp Collider

    Energy Technology Data Exchange (ETDEWEB)

    Chekanov, S. V. [Argonne; Beydler, M. [Argonne; Kotwal, A. V. [Fermilab; Gray, L. [Fermilab; Sen, S. [Duke U.; Tran, N. V. [Fermilab; Yu, S. -S. [Taiwan, Natl. Central U.; Zuzelski, J. [Michigan State U.

    2016-12-21

    This paper describes simulations of detector response to multi-TeV physics at the Future Circular Collider (FCC-hh) or Super proton-proton Collider (SppC) which aim to collide proton beams with a centre-of-mass energy of 100 TeV. The unprecedented energy regime of these future experiments imposes new requirements on detector technologies which can be studied using the detailed GEANT4 simulations presented in this paper. The initial performance of a detector designed for physics studies at the FCC-hh or SppC experiments is described with an emphasis on measurements of single particles up to 33 TeV in transverse momentum. The reconstruction of hadronic jets has also been studied in the transverse momentum range from 50 GeV to 26 TeV. The granularity requirements for calorimetry are investigated using the two-particle spatial resolution achieved for hadron showers.

  8. Implementation of routine ash predictions using a general purpose atmospheric dispersion model (HYSPLIT) adapted for calculating ash thickness on the ground.

    Science.gov (United States)

    Hurst, Tony; Davis, Cory; Deligne, Natalia

    2016-04-01

    GNS Science currently produces twice-daily forecasts of the likely ash deposition if any of the active or recently active volcanoes in New Zealand was to erupt, with a number of alternative possible eruptions for each volcano. These use our ASHFALL program for calculating ash thickness, which uses 1-D wind profiles at the location of each volcano derived from Numerical Weather Prediction (NWP) model output supplied by MetService. HYSPLIT is a hybrid Lagrangian dispersion model, developed by NOAA/ARL, which is used by MetService in its role as a Volcanic Ash Advisory Centre, to model airborne volcanic ash, with meteorological data provided by external and in-house NWP models. A by-product of the HYSPLIT volcanic ash dispersion simulations is the deposition rate at the ground surface. Comparison of HYSPLIT with ASHFALL showed that alterations to the standard fall velocity model were required to deal with ash particles larger than about 50 microns, which make up the bulk of ash deposits near a volcano. It also required the ash injected into the dispersion model to have a concentration based on a typical umbrella-shaped eruption column, rather than uniform across all levels. The different parameters used in HYSPLIT also caused us to revisit what possible combinations of eruption size and column height were appropriate to model as a likely eruption. We are now running HYSPLIT to produce alternative ash forecasts. It is apparent that there are many times at which the 3-D wind model used in HYSPLIT gives a substantially different ash deposition pattern to the 1-D wind model of ASHFALL, and the use of HYSPLIT will give more accurate predictions. ASHFALL is likely still to be used for probabilistic hazard forecasting, in which very large numbers of runs are required, as HYSPLIT takes much more computer time.

  9. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiology, Osaka University Hospital, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, St. Jude Children’s Research Hospital, Memphis, TN 38105 (United States)

    2016-01-15

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to provide significant advantages for high-precision particle therapy, especially in media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously examined for uniform scanning proton beams, needs to be evaluated for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and the optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a gold standard for setting computational parameters cannot be established consistently for all proton therapy applications, since the impact of the parameter settings depends on the proton irradiation technique. We therefore conclude that parameters must be customized with reference to the optimized parameters of the corresponding irradiation technique in order to achieve artifact-free MC simulation for use in computational experiments and clinical treatments.

  10. Advanced development and using of space nuclear power systems as a part of transport power supply modules for general purpose spacecraft

    Science.gov (United States)

    Menshikov, Valery A.; Kuzin, Anatoly I.; Pavlov, Konstantin A.; Zatserkovny, Sergey P.; Kalmykov, Alexandr V.; Sorokin, Alexandr N.; Bulavatsky, Andrey Y.; Vasilkovsky, Vladimir V.; Zrodnikov, Anatoly V.; Trukhanov, Yuri L.; Nikolaev, Yuri V.; Bezzubtsev, Valery S.; Lutov, Evgeny I.; Pavshoock, Vladimir A.; Akimov, Vladimir N.; Arkhangelsky, Nicolay I.; Gladyshev, Sergey N.

    1996-03-01

    Nuclear transport power systems (NTPS) can provide solutions for such important science, commerce and defense tasks in space as radar surveillance, information provision, global ecological monitoring, defense of the Earth from dangerous space objects, manufacturing in space, and investigations of asteroids, comets and the solar system's planets (Kuzin et al. 1993a, 1993b). The creation of NTPS for real space systems, however, must be based on proven NTPS effectiveness in comparison with other power and propulsion systems, such as non-nuclear electric-rocket systems. When the NTPS effectiveness is proved, the operational safety of such systems must meet the UN requirements for all stages of the life cycle, in view of possible failures. A nuclear transport power module provides both a large amount of thermal and electrical power and a long operating time (about 6-7 years after completing the delivery task). For this reason, such a module features high power-supply and mass-delivery effectiveness and considerably increases the total effectiveness of a spacecraft equipped with it. In the report, three such NTPS types, namely a system based on a thermionic reactor-converter with an electric rocket propulsion system (ERPS), a dual-mode thermionic nuclear power system with pumping of the working fluid through the active reactor zone, and a system based on nuclear thermal rocket engine technology, are compared with transport power modules based on a solar power system from the point of view of providing the highest degree of effectiveness.

  11. HypCal, a general-purpose computer program for the determination of standard reaction enthalpy and binding constant values by means of calorimetry.

    Science.gov (United States)

    Arena, Giuseppe; Gans, Peter; Sgarlata, Carmelo

    2016-09-01

    The program HypCal has been developed to provide a means for the simultaneous determination, from data obtained by isothermal titration calorimetry, of both standard enthalpy of reaction and binding constant values. The chemical system is defined in terms of species of given stoichiometry rather than in terms of binding models (e.g., independent or cooperative). The program does not impose any limits on the complexity of the chemical systems that can be treated, including competing ligand systems. Many titration curves may be treated simultaneously. HypCal can also be used as a simulation program when designing experiments. The use of the program is illustrated with data obtained with nicotinic acid (niacin, pyridine-3-carboxylic acid). Preliminary experiments were used to establish the rather different titration conditions for the two sets of titration curves that are needed to determine the parameters for protonation of the carboxylate and amine groups.
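
    The core task described above, fitting both the binding constant and the reaction enthalpy to calorimetric titration data, can be illustrated for the simplest possible case. The sketch below is not HypCal: it fits K and ΔH of a single 1:1 equilibrium to synthetic cumulative heats; the cell volume, concentrations and noise level are assumed illustrative values.

```python
# Minimal 1:1 binding sketch (NOT HypCal): fit the binding constant K and the
# reaction enthalpy dH to synthetic cumulative ITC heats with scipy.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

V0 = 1.4e-3                             # cell volume (L), hypothetical
Mt = 1.0e-4                             # total "macromolecule" concentration (M)
Lt = np.linspace(0.1e-4, 3.0e-4, 20)    # total titrant after each injection (M)

def cumulative_heat(Lt, K, dH):
    """Cumulative heat for M + L <-> ML; [ML] from the 1:1 binding quadratic."""
    s = Mt + Lt + 1.0 / K
    ML = 0.5 * (s - np.sqrt(s ** 2 - 4.0 * Mt * Lt))
    return V0 * dH * ML                 # joules

# Synthetic "measurement" generated with K = 1e5 M^-1, dH = -40 kJ/mol
rng = np.random.default_rng(1)
Q_obs = cumulative_heat(Lt, 1e5, -4.0e4) + rng.normal(0, 2e-7, Lt.size)

(K_fit, dH_fit), _ = curve_fit(cumulative_heat, Lt, Q_obs, p0=(1e4, -1e4))
print(f"K = {K_fit:.3g} 1/M, dH = {dH_fit:.3g} J/mol")
```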

  12. A tunable general purpose Q-band resonator for CW and pulse EPR/ENDOR experiments with large sample access and optical excitation

    Science.gov (United States)

    Reijerse, Edward; Lendzian, Friedhelm; Isaacson, Roger; Lubitz, Wolfgang

    2012-01-01

    We describe a frequency tunable Q-band cavity (34 GHz) designed for CW and pulse Electron Paramagnetic Resonance (EPR) as well as Electron Nuclear Double Resonance (ENDOR) and Electron Electron Double Resonance (ELDOR) experiments. The TE 011 cylindrical resonator is machined either from brass or from graphite (which is subsequently gold plated), to improve the penetration of the 100 kHz field modulation signal. The (self-supporting) ENDOR coil consists of four 0.8 mm silver posts at 2.67 mm distance from the cavity center axis, penetrating through the plunger heads. It is very robust and immune to mechanical vibrations. The coil is electrically shielded to enable CW ENDOR experiments with high RF power (500 W). The top plunger of the cavity is movable and allows a frequency tuning of ±2 GHz. In our setup the standard operation frequency is 34.0 GHz. The microwaves are coupled into the resonator through an iris in the cylinder wall and matching is accomplished by a sliding short in the coupling waveguide. Optical excitation of the sample is enabled through slits in the cavity wall (transmission ˜60%). The resonator accepts 3 mm o.d. sample tubes. This leads to a favorable sensitivity especially for pulse EPR experiments of low concentration biological samples. The probehead dimensions are compatible with that of Bruker flexline Q-band resonators and it fits perfectly into an Oxford CF935 Helium flow cryostat (4-300 K). It is demonstrated that, due to the relatively large active sample volume (20-30 μl), the described resonator has superior concentration sensitivity as compared to commercial pulse Q-band resonators. The quality factor ( Q L) of the resonator can be varied between 2600 (critical coupling) and 1300 (over-coupling). The shortest achieved π/2-pulse durations are 20 ns using a 3 W microwave amplifier. ENDOR (RF) π-pulses of 20 μs ( 1H @ 51 MHz) were obtained for a 300 W amplifier and 7 μs using a 2500 W amplifier. Selected applications of the resonator are presented.

  13. ProSPer:一个支持proactive特性的通用型事件监控系统%ProSPer: A Proactive Event Monitor for General Purposes

    Institute of Scientific and Technical Information of China (English)

    刘家红; 吴泉源

    2009-01-01

    In large-scale network security monitoring applications, the network security situation must be assessed dynamically so that proactive defenses can be put in place before major security risks materialize. The network security monitoring system is modeled as an event monitoring system that correlates multiple events satisfying composite temporal and attribute-value logical relations, composing several atomic events into semantically richer, more abstract composite security events. Existing work has proposed various composite event detection models, but these lack proactive monitoring capability. Based on the assumption that temporal ordering alone cannot improve the predictive power of event monitoring, an event monitoring system named ProSPer is designed around a top-k composite event detection model, providing proactive event monitoring for network security monitoring and other applications. Compared with existing composite event detection systems, ProSPer can detect composite events without reading all of their component events, and this proactive property is a significant design feature.

  14. SediFoam: A general-purpose, open-source CFD-DEM solver for particle-laden flow with emphasis on sediment transport

    Science.gov (United States)

    Sun, Rui; Xiao, Heng

    2016-04-01

    With the growth of available computational resource, CFD-DEM (computational fluid dynamics-discrete element method) becomes an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in chemical engineering and mining industry. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built based on open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation test, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in the simulations using up to O(10^7) particles.

  15. Real-Time Radio Wave Propagation for Mobile Ad-Hoc Network Emulation and Simulation Using General Purpose Graphics Processing Units (GPGPUs)

    Science.gov (United States)

    2014-05-01

    required substantial reformulation. An example of this reformulation was the replacement of nested conditional control flow inside inner loops. Second ... Longley-Rice algorithm focused on the ITM algorithm LRPROP routine. LRPROP was implemented in both the Brook+ and CUDA languages. Subsequently, the ... kernels that make up LRPROP were also implemented in the architecture-specific Brook+ and CUDA languages. The kernels are executed in succession

  16. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) for the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and the JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.
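
    Both proton-beam entries compare codes through the range R90 extracted from percentage depth dose curves. The sketch below shows one common way of reading R90 off a PDD curve by linear interpolation on the distal fall-off; the PDD itself is synthetic, and the details differ from the measured and Monte Carlo curves cited above.

```python
# Sketch of extracting the proton range R90 (depth where the dose falls to 90%
# of its maximum on the distal side) from a percentage depth dose curve.
# The PDD below is synthetic; real curves come from measurement or MC scoring.
import numpy as np

depth = np.linspace(0.0, 300.0, 601)                      # depth in water (mm)
pdd = 30 + 70 * np.exp(-((depth - 269.0) ** 2) / 80.0)    # crude Bragg-peak shape
pdd[depth > 272] *= np.exp(-(depth[depth > 272] - 272) / 2.0)  # distal fall-off

def r90(depth, pdd):
    """Depth at which the dose crosses 90% of the maximum beyond the peak."""
    i_peak = np.argmax(pdd)
    target = 0.9 * pdd[i_peak]
    distal_d, distal_pdd = depth[i_peak:], pdd[i_peak:]
    i = np.argmax(distal_pdd < target)        # first sample below 90% of max
    # linear interpolation between the bracketing samples
    d0, d1 = distal_d[i - 1], distal_d[i]
    p0, p1 = distal_pdd[i - 1], distal_pdd[i]
    return d0 + (target - p0) * (d1 - d0) / (p1 - p0)

print(f"R90 = {r90(depth, pdd):.2f} mm")
```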

  17. Development of a compact and general-purpose experimental apparatus with a touch-sensitive screen for use in evaluating cognitive functions in common marmosets.

    Science.gov (United States)

    Takemoto, Atsushi; Izumi, Akihiro; Miwa, Miki; Nakamura, Katsuki

    2011-07-15

    Common marmosets have been used extensively in biomedical research and the recent advent of techniques to generate transgenic marmosets has accelerated the use of this model. New methods that efficiently assess the degree of cognitive function in common marmosets are needed in order to establish their suitability as non-human primate models of higher brain function disorders. Here, we have developed a new apparatus suitable for testing the cognitive functions of common marmosets. Utilizing a mini laptop PC with a touch-sensitive screen as the main component, the apparatus is small and lightweight and can be easily attached to the home cages. The ease of designing and testing new paradigms with the flexible software is another advantage of this system. We have tested visual discrimination and its reversal tasks using this apparatus and confirmed its efficacy.

  18. SediFoam: A general-purpose, open-source CFD-DEM solver for particle-laden flow with emphasis on sediment transport

    CERN Document Server

    Sun, Rui

    2016-01-01

    With the growth of available computational resource, CFD-DEM (computational fluid dynamics-discrete element method) becomes an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in chemical engineering and mining industry. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built based on open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport a...

  19. Installation to Production of a Large-Scale General Purpose Graphics Processing Unit (GPGPU) Cluster at the U.S. Army Research Laboratory: Thufir

    Science.gov (United States)

    2014-09-01

    ... tree types (deciduous or coniferous) and tree density (4). Initially, we will be running two software applications with RF propagation models using

  20. Advanced development and using of space nuclear power systems as a part of transport power supply modules for general purpose spacecraft

    Energy Technology Data Exchange (ETDEWEB)

    Menshikov, V.A.; Kuzin, A.I.; Pavlov, K.A.; Zatserkovny, S.P. [Russian Federation Ministry of Defense Central, Scientific-Research Institute of Space Force, Moscow, K-160 (Russian Federation); Kalmykov, A.V.; Sorokin, A.N.; Bulavatsky, A.Y. [Russian Federation Ministry of Defense, Main Department of Space Force, Moscow, K-160 (Russian Federation); Vasilkovsky, V.V. [Russian Federation Ministry of Atomic Power, 26, Staromonetny St., Moscow, 101000 (Russia); Zrodnikov, A.V. [Russian Federation State Research Center, ``Institute of Physics and Power Engineering``, 1, Bondarenko Sq., Obninsk, Kaluga Region, 249020 (Russian Federation); Trukhanov, Y.L. [``Red Star`` State Enterprise, 1A, Electrolitny St., Moscow, 115230 (Russia); Nikolaev, Y.V. [Scientific Industrial Association ``Lutch``, 24, Zhelesnodorozhnaya St., Podolsk, Moscow Region, 142100 (Russia); Bezzubtsev, V.S. [Research and Development Institute of Power Engineering, Moscow, 101000 (Russia); Lutov, E.I. [Central Design Bureau for Machine Building, St. Petersburg, 195027 (Russia); Pavshoock, V.A. [Russian Research Center ``Kurchatov Institute``, 1, Kurchatov Sq., Moscow, 123182 (Russia); Akimov, V.N.; Arkhangelsky, N.I. [Scientific-Research Institute of Thermal Processes, 8/10, Onezhskaya St., Moscow, 125438 (Russia); Gladyshev, S.N. [State Rocket Center ``Design Bureau of Academician V.P. Makeev``, 1, Turgoyarskoe St., Miass, Tchelyabinsk Region, 456300 (Russia)

    1996-03-01

    Nuclear transport power systems (NTPS) can provide solutions for such important science, commerce and defense tasks in space as radar surveillance, information provision, global ecological monitoring, defense of the Earth from dangerous space objects, manufacturing in space, and investigations of asteroids, comets and the solar system's planets (Kuzin et al. 1993a, 1993b). The creation of NTPS for real space systems, however, must be based on proven NTPS effectiveness in comparison with other power and propulsion systems, such as non-nuclear electric-rocket systems. When the NTPS effectiveness is proved, the operational safety of such systems must meet the UN requirements for all stages of the life cycle, in view of possible failures. A nuclear transport power module provides both a large amount of thermal and electrical power and a long operating time (about 6-7 years after completing the delivery task). For this reason, such a module features high power-supply and mass-delivery effectiveness and considerably increases the total effectiveness of a spacecraft equipped with it. In the report, three such NTPS types, namely a system based on a thermionic reactor-converter with an electric rocket propulsion system (ERPS), a dual-mode thermionic nuclear power system with pumping of the working fluid through the active reactor zone, and a system based on nuclear thermal rocket engine technology, are compared with transport power modules based on a solar power system from the point of view of providing the highest degree of effectiveness. © 1996 American Institute of Physics.

  1. An efficient simulator of 454 data using configurable statistical models

    Directory of Open Access Journals (Sweden)

    Persson Bengt

    2011-10-01

    Full Text Available Background: Roche 454 is one of the major 2nd generation sequencing platforms. The particular characteristics of 454 sequence data pose new challenges for bioinformatic analyses, e.g. assembly and alignment search algorithms. Simulation of these data is therefore useful, in order to further assess how bioinformatic applications and algorithms handle 454 data. Findings: We developed a new application named 454sim for simulation of 454 data at high speed and accuracy. The program is multi-thread capable and is available as C++ source code or pre-compiled binaries. Sequence reads are simulated by 454sim using a set of statistical models for each chemistry. 454sim simulates recorded peak intensities, peak quality deterioration and it calculates quality values. All three generations of the Roche 454 chemistry ('GS20', 'GS FLX' and 'Titanium') are supported and defined in external text files for easy access and tweaking. Conclusions: We present a new platform independent application named 454sim. 454sim is generally 200 times faster compared to previous programs and it allows for simple adjustments of the statistical models. These improvements make it possible to carry out more complex and rigorous algorithm evaluations in a reasonable time scale.

  2. High performance computing for three-dimensional agent-based molecular models.

    Science.gov (United States)

    Pérez-Rodríguez, G; Pérez-Pérez, M; Fdez-Riverola, F; Lourenço, A

    2016-07-01

    Agent-based simulations are increasingly popular in exploring and understanding cellular systems, but the natural complexity of these systems and the desire to grasp different modelling levels demand cost-effective simulation strategies and tools. In this context, the present paper introduces novel sequential and distributed approaches for the three-dimensional agent-based simulation of individual molecules in cellular events. These approaches are able to describe the dimensions and position of the molecules with high accuracy and thus, study the critical effect of spatial distribution on cellular events. Moreover, two of the approaches allow multi-thread high performance simulations, distributing the three-dimensional model in a platform independent and computationally efficient way. Evaluation addressed the reproduction of molecular scenarios and different scalability aspects of agent creation and agent interaction. The three approaches simulate common biophysical and biochemical laws faithfully. The distributed approaches show improved performance when dealing with large agent populations while the sequential approach is better suited for small to medium size agent populations. Overall, the main new contribution of the approaches is the ability to simulate three-dimensional agent-based models at the molecular level with reduced implementation effort and moderate-level computational capacity. Since these approaches have a generic design, they have the major potential of being used in any event-driven agent-based tool. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. 基于Java多线程隐藏数组下标变换表达式的代码迷惑算法%Based on Java Multi-Thread Code Obfuscation via Hiding the Transformation of Subscripts of the Array

    Institute of Scientific and Technical Information of China (English)

    刘九; 林孔升; 尚汪洋; 蔡德霞

    2010-01-01

    The array-subscript transformation expressions are preprocessed so that they can be evaluated in parallel; the values of the transformation expressions are then computed in parallel under thread control, yielding a code obfuscation algorithm that hides the array-subscript transformation process. Code obfuscated by this algorithm resists static analysis of the source code as well as de-obfuscation attacks based on instrumenting the source code.

  4. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed; computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  5. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    Science.gov (United States)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will help to make atmospheric forecasts (cloud, humidity, rain) for robotic telescopes using Meteosat data. MIPS uses a Python library for Eumetsat data, aims to be completely open source, and is licensed under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.
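
    The toolchain named above (h5py, numpy, PIL) suggests a simple read-scale-render pipeline. The sketch below shows such a pipeline for a Meteosat-like HDF5 channel; the file name and dataset path are hypothetical placeholders and do not reflect MIPS's actual data layout.

```python
# Tiny sketch of the h5py + numpy + PIL pipeline mentioned above. The file name
# and dataset path are hypothetical placeholders, not MIPS's actual data layout.
import h5py
import numpy as np
from PIL import Image

with h5py.File("meteosat_segment.h5", "r") as f:       # hypothetical file
    counts = f["IR_108/image_data"][()]                 # hypothetical dataset path

# Scale raw counts to 0-255 and save a quick-look image
counts = counts.astype(np.float64)
lo, hi = np.nanpercentile(counts, [2, 98])
scaled = np.clip((counts - lo) / (hi - lo), 0.0, 1.0) * 255.0
Image.fromarray(scaled.astype(np.uint8)).save("quicklook_ir108.png")
```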

  6. General-Purpose Parallel Algorithm Based on CUDA for Source Pencils' Deployment of Large γ Irradiator%基于CUDA的大型γ辐照装置通用并行排源算法

    Institute of Scientific and Technical Information of China (English)

    杨磊; 王玲; 龚学余

    2013-01-01

    Combined with a standard mathematical model for evaluating the quality of deployment results, a new high-performance parallel algorithm for source pencil deployment was obtained by fully parallelizing the plant growth simulation algorithm with the CUDA execution model, so that the corresponding code can run on a GPU. On this basis, several instances of various scales were used to test the new version of the algorithm. The results show that, while retaining the advantages of the earlier versions, the performance of the new one is improved by more than 500 times compared with the CPU version, and by more than 30 times compared with the CPU plus GPU hybrid version. The computation time of the new version is less than ten minutes for irradiators with an activity below 111 PBq. For a single GTX275 GPU, the maximum computing capacity of the new version is about 167 PBq, with a computation time of no more than 25 minutes; with multiple GPUs the capacity can be increased further. Overall, the new version of the algorithm running on a GPU can satisfy the source pencil deployment requirements of any domestic γ irradiator, and it is highly competitive.

  7. Base Vehicle Equipment, Special Vehicle, General Purpose Vehicle, and Vehicle Body Mechanics Career Ladders, AFSs 472X0, 472X1A/B/C/D, 472X2 and 472X3.

    Science.gov (United States)

    1982-08-01

    preparing lesson plans; counseling trainees on training progress; maintaining training records, charts or graphs; scoring tests ... Differences between the two ... implementing OJT programs; determining OJT training requirements; evaluating OJT trainers or trainees. Sixteen of the 19 Technical Training Instructors ... DISASSEMBLE OR ASSEMBLE REFUELING ... ADJUST ENGINE DRIVE BELTS ... REMOVE OR INSTALL SPARK PLUGS ... REMOVE OR INSTALL

  8. An Assessment of the Ability of the U.S. Department of Defense and the Services to Measure and Track Language and Culture Training and Capabilities Among General Purpose Forces

    Science.gov (United States)

    2012-01-01

    effects on people’s affective responses (e.g., attitudes, self-concepts, emotions , and feelings of comfort with another culture); and positive...Security Force Assistance, Jason M. Brunner, and Christopher L. Vowels , The Human Dimension of Advising: An Analysis of Interpersonal, Linguistic

  9. A General-Purpose Control Strategy for Battery Energy Storage System in Microgrid%微网电池储能系统通用综合控制策略

    Institute of Scientific and Technical Information of China (English)

    董宜鹏; 谢小荣; 孙浩; 陈志刚; 刘志文

    2013-01-01

    As a buffer for the microgrid, the energy storage system plays a significant role in stability control, power quality improvement and uninterrupted power supply, and it is key to the safe and reliable operation of the microgrid. Firstly, three kinds of energy storage system control strategies are introduced, and their application in different microgrid operation control modes is analyzed. Secondly, a comprehensive control strategy for the outer-loop controller of the battery energy storage system (BESS) is designed to achieve PQ control, V/f control and droop control, so that the BESS can be applied in microgrids using either master-slave control or peer-to-peer control. In addition, the bi-directional switchover between grid-connected and islanded modes is considered, and an improved BESS comprehensive control strategy realizing smooth switchover is proposed. Finally, the validity of the strategy is verified through simulation.

  10. Research of virtualization of multitask oriented general purpose computation on graphic processing unit%面向多任务的GPU通用计算虚拟化技术研究

    Institute of Scientific and Technical Information of China (English)

    张云洲; 袁家斌; 吕相文

    2013-01-01

    With hardware functionality growing richer and software development environments maturing, GPUs are being used ever more widely for general-purpose computation, and examples of using GPU clusters for massive data computation abound. Compared with CPUs, however, GPUs consume more power; if every node were equipped with a GPU, the power consumption of the cluster would increase greatly. The introduction of virtualization makes it possible to use GPU resources for general-purpose computation inside virtual machines. To use GPUs efficiently and fully, and taking the characteristics of GPUs into account, this work proposes a multi-task-oriented, dynamically schedulable GPU virtualization solution that supports concurrent multi-user access. Building on existing GPU virtualization schemes, and considering both the generality of inter-domain communication between virtual machines and task turnaround time, a CUDA management server is established to manage GPU resources in a unified way. Load balancing is achieved and the average task turnaround time is reduced by means of a composite load evaluation value. Large-scale matrix computation experiments on the designed system demonstrate the feasibility and efficiency of the GPU virtualization solution in a computing system.

  11. 一种针对Web、E-mail服务的通用应用层防火墙%A Kind of General-purpose Application-level Firewall Applied to WWW Service and E-mail

    Institute of Scientific and Technical Information of China (English)

    余婧; 林璟锵; 荆继武

    2004-01-01

    This paper describes an application-level firewall designed and implemented by the authors. It can effectively prevent attacks against Web and e-mail services, reflecting its specialized nature; at the same time, the addition of a configuration software module gives the firewall general-purpose capability, so the system can easily be ported to other applications.

  12. Department of the Navy Justification of Estimates for Fiscal Year 1985 Submitted to Congress February 1984. Operation & Maintenance, Navy. Book 1. Budget Activity 1. Strategic Forces Budget. Activity 2. General Purpose Forces. Budget Activity 4. Airlift and Sealift.

    Science.gov (United States)

    1984-02-01

    ... by the Drug Education and ... Interdiction Team in the Mediterranean area. Other costs associated with this team are supplies

  13. Department of the Navy Justification of Estimates for Fiscal Year 1984 Submitted to Congress January 1983. Operation & Maintenance, Navy. Book 1. Budget Activity 1: Strategic Forces, Budget Activity 2: General Purpose Forces

    Science.gov (United States)

    1983-01-01

    Accounting Activity, Great Lakes; Transfer to BA-9 (-47); c. Programmatic Decreases (-92,777); 1) Reduction to Fund Drug Testing; 2) Overseas ... to re-align funding for the drug testing program, public health service program, Sealift Prepositioning to finance ammunition prepositioning, and

  14. Efficiency of Cache Mechanism for Network Processors

    Institute of Scientific and Technical Information of China (English)

    XU Bo; CHANG Jian; HUANG Shimeng; XUE Yibo; LI Jun

    2009-01-01

    With the explosion of network bandwidth and the ever-changing requirements for diverse network-based applications, the traditional processing architectures, i.e., the general purpose processor (GPP) and application specific integrated circuits (ASIC), cannot provide sufficient flexibility and high performance at the same time. Thus, the network processor (NP) has emerged as an alternative to meet these dual demands for today's network processing. The NP combines embedded multi-threaded cores with a rich memory hierarchy that can adapt to different networking circumstances when customized by the application developers. In today's NP architectures, multithreading prevails over the cache mechanism, which has achieved great success in GPPs to hide memory access latencies. This paper focuses on the efficiency of the cache mechanism in an NP. Theoretical timing models of packet processing are established for evaluating cache efficiency, and experiments are performed based on real-life network backbone traces. Testing results show that an improvement of nearly 70% can be gained in throughput with assistance from the cache mechanism. Accordingly, the cache mechanism is still efficient and irreplaceable in network processing, despite the existence of multithreading.
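
    The abstract refers to theoretical timing models of packet processing. The sketch below gives a back-of-the-envelope version of such a model, showing how a cache hit rate shortens the average lookup latency and therefore raises throughput. The latency numbers, lookup count and hit rate are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope timing sketch for cache-assisted packet processing.
# Latencies and hit rate below are illustrative assumptions, not the paper's
# measured values.
CACHE_LATENCY_NS = 5.0      # assumed on-chip cache access time
DRAM_LATENCY_NS = 60.0      # assumed external memory access time
LOOKUPS_PER_PACKET = 4      # assumed table lookups per packet

def packets_per_second(hit_rate):
    avg_lookup = hit_rate * CACHE_LATENCY_NS + (1.0 - hit_rate) * DRAM_LATENCY_NS
    return 1e9 / (avg_lookup * LOOKUPS_PER_PACKET)

no_cache = packets_per_second(0.0)
with_cache = packets_per_second(0.9)
print(f"throughput gain at 90% hit rate: {with_cache / no_cache:.2f}x")
```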

  15. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1997-12-31

    Lilith is a general purpose tool that provides a highly scalable, easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. This speed-up in development not only enables the easy creation of tools as needed but also facilitates the ultimate development of more refined, hard-coded tools as well. Lilith is written in Java, providing platform independence and further facilitating rapid tool development through Object reuse and ease of development. The authors present the user-involved objects in the Lilith Distributed Object System and the Lilith User API. They present an example of tool development, illustrating the user calls, and present results demonstrating Lilith's scalability.

  16. Estratégia de capacitação de enfermeiros recém-admitidos em unidades de internação geral Estrategia de capacitación de enfermeros recién admitidos en unidades de internamiento general Strategy for the qualification of newly hired nurses in internment units for general purposes

    Directory of Open Access Journals (Sweden)

    Ivana Lucia Correa Pimentel de Siqueira

    2005-09-01

    Full Text Available The aim of this article is to describe a qualification program for newly hired nurses that has been implemented in internment units in a private hospital in São Paulo. The program was designed in such a way as to reduce training time and make possible the participation of the units' nursing staff. It is divided in three phases. Phase 1 is a technical revision. In Phase 2, with the assistance of a training monitor, the trainees take over the assistance coordination of the unit. In Phase 3, administrative routines and topics are revised. This program has been showing good results regarding the individualization of the training and the participation of nurses.

  17. Research on General-purpose Performance Testing and Evaluation Methods for the Acceptance of Wind Farm%适用于风电场验收的通用机组特性测试及评价方法研究

    Institute of Scientific and Technical Information of China (English)

    段振云; 张学瑞; 邢作霞

    2013-01-01

    In order to regulate the wind power market and improve the quality, reliability and safety of wind turbine products, quality supervision, inspection and certification of wind power products have become an essential part of the development of the wind power industry. According to the requirements of a wind farm, the turbines in the wind farm were tested for acceptance; the tests included wind turbine power performance testing and wind farm noise level testing. The results provide ideas and methods for the acceptance testing and evaluation of wind farm equipment.

  18. Department of the Navy Justification of Estimates FY 1991 Budget Estimates Submitted to Congress January 1990, Operation & Maintenance, Navy. Book 1. Budget Activity 1. Strategic Forces. Budget Activity 2. General Purpose Forces. Budget Activity 3. Intelligence & Communications. Budget Activity 4. Airlift and Sealift.

    Science.gov (United States)

    1990-01-01


  19. Department of the Navy Justification of Estimates Amended Fiscal Year 1988 and 1989 Biennial Budget Submitted to Congress February 1988. Operation and Maintenance, Navy. Book 1. Budget Activity 1: Strategic Forces Budget Activity 2: General Purpose Forces Budget Activity 3: Intelligence and Communications Budget Activity 4: Airlift and Sealift

    Science.gov (United States)

    1988-02-01


  20. Department of the Navy Amended FY 1992/FY 1993 Biennial Budget Estimates. Justification of Estimates Submitted to Congress January 1992. Operation and Maintenance, Navy: Budget Activity 1: Strategic Forces, Budget Activity 2: General Purpose Forces, Budget Activity 3: Intelligence and Communications; Budget activity 4: Airlift and Sealift

    Science.gov (United States)

    1992-01-01


  1. Real-Time, General-Purpose, High-Speed Signal Processing Systems for Underwater Research. Proceedings of a Working Level Conference held at Supreme Allied Commander, Atlantic, Anti-Submarine Warfare Research Center (SACLANTCEN) on 18-21 September 1979. Part 1. Sessions I to III.

    Science.gov (United States)

    1979-12-01

    Recoverable fragments of the scanned session listing indicate contributions on a micro-programmable correlator for real-time radar processing (Alker), a multiprocessor for high-capacity real-time processing, and the creation and display of reflection coefficients for ray tracing.

  2. Use of Checkpoint-Restart for Complex HEP Software on Traditional Architectures and Intel MIC

    CERN Document Server

    Arya, Kapil; Dotti, Andrea; Elmer, Peter

    2013-01-01

    Process checkpoint-restart is a technology with great potential for use in HEP workflows. Use cases include debugging, reducing the startup time of applications both in offline batch jobs and the High Level Trigger, permitting job preemption in environments where spare CPU cycles are being used opportunistically and efficient scheduling of a mix of multicore and single-threaded jobs. We report on tests of checkpoint-restart technology using CMS software, Geant4-MT (multi-threaded Geant4), and the DMTCP (Distributed Multithreaded Checkpointing) package. We analyze both single- and multi-threaded applications and test on both standard Intel x86 architectures and on Intel MIC. The tests with multi-threaded applications on Intel MIC are used to consider scalability and performance. These are considered an indicator of what the future may hold for many-core computing.

  3. Testing Object-Oriented Programs using Dynamic Aspects and Non-Determinism

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2010-01-01

    The implementation of unit tests with mock objects and stubs often involves substantial manual work. Stubbed methods return simple default values, therefore variations of these values require separate test cases. The integration of mock objects often requires more infrastructure code and design... without parameterization or generation of tests. It also eases modelling naturally non-deterministic program features like IO or multi-threading in integration tests. Dynamic AOP facilitates powerful design adaptations without exposing test features, keeping the scope of these adaptations local to each... test. We also combine non-determinism and dynamic aspects in a new approach to testing multi-threaded programs using co-routines.

  4. On the notion of abstract platform in MDA development

    NARCIS (Netherlands)

    Andrade Almeida, João; Dijkman, R.M.; van Sinderen, Marten J.; Ferreira Pires, Luis

    2004-01-01

    Although platform-independence is a central property in MDA models, the study of platform-independence has been largely overlooked in MDA. As a consequence, there is a lack of guidelines to select abstraction criteria and modelling concepts for platform-independent design. In addition, there is

  5. On the Notion of Abstract Platform in MDA Development

    NARCIS (Netherlands)

    Almeida, João Paulo; Dijkman, Remco; Sinderen, van Marten; Ferreira Pires, Luis

    2004-01-01

    Although platform-independence is a central property in MDA models, the study of platform-independence has been largely overlooked in MDA. As a consequence, there is a lack of guidelines to select abstraction criteria and modelling concepts for platform-independent design. In addition, there is litt

  6. Implementing hybrid MPI/OpenMP parallelism in Fluidity

    Science.gov (United States)

    Gorman, Gerard; Lange, Michael; Avdis, Alexandros; Guo, Xiaohu; Mitchell, Lawrence; Weiland, Michele

    2014-05-01

    Parallelising finite element codes using domain decomposition methods and MPI has nearly become routine at the application code level. This has been helped in no small part by the development of an eco-system of open source libraries to provide key functionality, for example SCOTCH for graph partitioning or PETSc for sparse iterative solvers. As we move to an era where pure MPI no longer suffices, application developers can no longer focus only on the application code, but must consider the full software stack. In the case of Fluidity (an open source control volume/finite element general-purpose fluid dynamics code), the decision to improve parallel efficiency by moving to a hybrid MPI/OpenMP programming model made it necessary to get involved in extending third-party open source libraries, specifically PETSc, in addition to the application code itself. The effort involved in re-engineering a large application code highlights the fact that as computing platforms continue their advance towards low-power many-core processors, the software stack must also develop at a similar pace or application codes will suffer. In this presentation we will illustrate the steps required to re-engineer Fluidity to achieve good parallel efficiency when using MPI/OpenMP. We identify performance pitfalls when using Fortran features such as automatic arrays in a multi-threaded context, as well as poor data locality on NUMA platforms. A significant proportion of the computational cost is in the sparse iterative solvers. For this we collaborated with the development team at Argonne National Laboratory to add OpenMP support to PETSc. We will present performance results both for the application as a whole and for key individual components such as matrix assembly and the solvers. We also show that while we did not explicitly target I/O for optimisation here, its performance is nonetheless greatly improved because of fewer processes accessing the file system. One of the main remaining
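
    A minimal hybrid MPI/OpenMP sketch (in C++, not taken from the Fluidity or PETSc code bases; all names are illustrative) shows the pattern the abstract describes: each MPI rank owns a subdomain while OpenMP threads share the local loop.

      // Minimal hybrid MPI/OpenMP sketch (illustrative only, not Fluidity code).
      // Each MPI rank owns a contiguous slice of elements; OpenMP threads share
      // the local loop. Compile with e.g.: mpicxx -fopenmp hybrid.cpp
      #include <mpi.h>
      #include <omp.h>
      #include <vector>
      #include <cstdio>

      int main(int argc, char** argv) {
          int provided = 0;
          // Request the thread-support level appropriate for funnelled MPI calls.
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

          int rank = 0, size = 1;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          const int n_local = 1000000;              // elements owned by this rank
          std::vector<double> contrib(n_local, 1.0);

          double local_sum = 0.0;
          // Threads share the local element loop; the reduction avoids a race.
          #pragma omp parallel for reduction(+:local_sum)
          for (int e = 0; e < n_local; ++e)
              local_sum += contrib[e];

          double global_sum = 0.0;
          MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              std::printf("global sum = %f (ranks=%d, threads=%d)\n",
                          global_sum, size, omp_get_max_threads());

          MPI_Finalize();
          return 0;
      }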

  7. From Domain Specific Languages to DEVS Components: Application to Cognitive M&S

    Science.gov (United States)

    2011-04-01

    DEVS/SOA framework [11] is analogous to other DEVS distributed simulation frameworks like DEVS/HLA, DEVS/RMI and DEVS/CORBA [12-16] and uses web... "CORBA-Based Multi-threaded Distributed Simulation of Hierarchical DEVS Models: Transforming Model Structure into a Non-hierarchical One"

  8. Light-Weight Process Groups in the ISIS System

    Science.gov (United States)

    1993-01-01

    Marc Rozier. Multi-threaded Processes in Chorus/MIX. Technical Report CS/TR-89-37.3, Chorus Systèmes, 6, avenue Gustave Eiffel, F-78182, Saint-Quentin-En-Yvelines, France, October 1989. [5] Francois Armand, Frederic Herrmann, Jim Lipkis, and

  9. Processor Management in the Tera MTA Computer System,

    Science.gov (United States)

    1993-01-01

    This paper describes the processor scheduling issues specific to the Tera MTA (Multi-Threaded Architecture) computer system and presents solutions to... classic scheduling problems. The Tera MTA exploits parallelism at all levels, from fine-grained instruction-level parallelism within a single

  10. An Intelligent Agent Based on Virtual Geographic Environment System

    Institute of Scientific and Technical Information of China (English)

    SHEN Dayong; LIN Hui; GONG Jianhua; ZHAO Yibin; FANG Zhaobao; GUO Zhongyang

    2004-01-01

    On the basis of previous work, this paper designs an intelligent agent based on a virtual geographic environment (VGE) system that is characterized by huge data, rapid computation, multi-user and multi-thread operation, and intelligence, and that poses challenges to traditional GIS models and algorithms. The new advances in software and hardware technology lay a reliable basis for system design, development and application.

  11. Dynamic Reverse Code Generation for Backward Execution

    DEFF Research Database (Denmark)

    Lee, Jooyong

    2007-01-01

    In this paper, we present a method to generate reverse code, so that backtracking can be performed by executing reverse code. The novelty of our work is that we generate reverse code on-the-fly, while running a debugger, which makes it possible to apply the method even to debugging multi-threaded programs.

  12. MetAlign 3.0: performance enhancement by efficient use of advances in computer hardware

    NARCIS (Netherlands)

    Lommen, A.; Kools, H.J.

    2012-01-01

    A new, multi-threaded version of the GC-MS and LC-MS data processing software, metAlign, has been developed which is able to utilize multiple cores on one PC. This new version was tested using three different multi-core PCs with different operating systems. The performance of noise reduction, baseli

  13. IOCP Application in Radiation Imaging System

    Institute of Scientific and Technical Information of China (English)

    FENG; Shu-qiang; ZHAO; Xiao; ZHANG; Guo-guang

    2015-01-01

    Using the IOCP kernel object, integrated with multi-threading, event and message queue mechanisms, the imaging system communicates with other sub-systems efficiently. In the data processing steps, data analysis and data handling are assigned to an I/O thread pool and a logic thread pool.

  14. Lilith: A Java framework for the development of scalable tools for high performance distributed computing platforms

    Energy Technology Data Exchange (ETDEWEB)

    Evensky, D.A.; Gentile, A.C.; Armstrong, R.C.

    1998-03-19

    Increasingly, high performance computing constitutes the use of very large heterogeneous clusters of machines. The use and maintenance of such clusters are subject to complexities of communication between the machines in a time-efficient and secure manner. Lilith is a general-purpose tool that provides a highly scalable, secure, and easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. Lilith is written in Java, taking advantage of Java's unique features of loading and distributing code dynamically, its platform independence, its thread support, and its provision of graphical components to facilitate easy-to-use resultant tools. The authors describe the use of Lilith in a tool developed for the maintenance of the large distributed cluster at their institution and present details of the Lilith architecture and user API for the general user development of scalable tools.

  15. Importance of Explicit Vectorization for CPU and GPU Software Performance

    CERN Document Server

    Dickson, Neil G; Hamze, Firas

    2010-01-01

    Much of the current focus in high-performance computing is on multi-threading, multi-computing, and graphics processing unit (GPU) computing. However, vectorization and non-parallel optimization techniques, which can often be employed additionally, are less frequently discussed. In this paper, we present an analysis of several optimizations done on both central processing unit (CPU) and GPU implementations of a particular computationally intensive Metropolis Monte Carlo algorithm. Explicit vectorization on the CPU and the equivalent, explicit memory coalescing, on the GPU are found to be critical to achieving good performance of this algorithm in both environments. The fully-optimized CPU version achieves a 9x to 12x speedup over the original CPU version, in addition to speedup from multi-threading. This is 2x faster than the fully-optimized GPU version.
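
    The kind of explicit CPU vectorization discussed above can be sketched as follows (an illustrative C++ example with SSE intrinsics and hypothetical array names, not the authors' Monte Carlo kernel): the scalar loop and the intrinsic version compute the same element-wise update, with the vector version handling four floats per iteration plus a remainder loop.

      // Illustrative sketch of explicit CPU vectorization (not the authors' code).
      // Compile with e.g.: g++ -O2 vec.cpp   (SSE2 is the x86-64 baseline)
      #include <immintrin.h>   // x86 SIMD intrinsics
      #include <vector>
      #include <cstdio>

      void scale_add_scalar(float* y, const float* x, float a, int n) {
          for (int i = 0; i < n; ++i)
              y[i] += a * x[i];
      }

      void scale_add_sse(float* y, const float* x, float a, int n) {
          const __m128 va = _mm_set1_ps(a);          // broadcast a to 4 lanes
          int i = 0;
          for (; i + 4 <= n; i += 4) {               // 4 floats per iteration
              __m128 vx = _mm_loadu_ps(x + i);
              __m128 vy = _mm_loadu_ps(y + i);
              vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));
              _mm_storeu_ps(y + i, vy);
          }
          for (; i < n; ++i)                          // remainder loop
              y[i] += a * x[i];
      }

      int main() {
          std::vector<float> x(1003, 1.0f), y(1003, 2.0f);
          scale_add_sse(y.data(), x.data(), 0.5f, (int)y.size());
          std::printf("y[0]=%f y[1002]=%f\n", y[0], y[1002]);
          return 0;
      }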

  16. High-Performance Physics Simulations Using Multi-Core CPUs and GPGPUs in a Volunteer Computing Context

    CERN Document Server

    Karimi, Kamran; Hamze, Firas

    2010-01-01

    This paper presents two conceptually simple methods for parallelizing a Parallel Tempering Monte Carlo simulation in a distributed volunteer computing context, where computers belonging to the general public are used. The first method uses conventional multi-threading. The second method uses CUDA, a graphics card computing system. Parallel Tempering is described, and challenges such as parallel random number generation and mapping of Monte Carlo chains to different threads are explained. While conventional multi-threading on CPUs is well-established, GPGPU programming techniques and technologies are still developing and present several challenges, such as the effective use of a relatively large number of threads. Having multiple chains in Parallel Tempering allows parallelization in a manner that is similar to the serial algorithm. Volunteer computing introduces important constraints to high performance computing, and we show that both versions of the application are able to adapt themselves to the varying an...
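
    A much-simplified sketch of the conventional multi-threading variant described above (illustrative C++ only, not the authors' code) maps one Parallel Tempering chain to one thread and gives each thread its own random number generator, one common answer to the parallel random number generation challenge mentioned in the abstract; replica-exchange moves between temperatures would follow once the sweeps complete.

      // Thread-per-chain sketch for Parallel Tempering (illustrative only).
      #include <thread>
      #include <vector>
      #include <random>
      #include <cmath>
      #include <cstdio>

      struct Chain { double temperature; double state; double energy; };

      void run_chain(Chain& c, unsigned seed, int sweeps) {
          std::mt19937 rng(seed);                       // per-thread generator
          std::uniform_real_distribution<double> u(0.0, 1.0);
          for (int s = 0; s < sweeps; ++s) {
              double proposal = c.state + (u(rng) - 0.5);
              double dE = proposal * proposal - c.energy;    // toy energy x^2
              if (dE < 0.0 || u(rng) < std::exp(-dE / c.temperature)) {
                  c.state = proposal;
                  c.energy = proposal * proposal;
              }
          }
      }

      int main() {
          std::vector<Chain> chains = {{0.5, 1.0, 1.0}, {1.0, 1.0, 1.0},
                                       {2.0, 1.0, 1.0}, {4.0, 1.0, 1.0}};
          std::vector<std::thread> workers;
          for (size_t i = 0; i < chains.size(); ++i)
              workers.emplace_back(run_chain, std::ref(chains[i]),
                                   1234u + (unsigned)i, 100000);
          for (auto& t : workers) t.join();
          // Replica-exchange (swap) moves between neighbouring temperatures
          // would follow here, after all chains have completed their sweeps.
          for (const auto& c : chains)
              std::printf("T=%.1f state=%.3f\n", c.temperature, c.state);
          return 0;
      }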

  17. Issues on efficiency of XML parsers

    Directory of Open Access Journals (Sweden)

    Codruţa Vancea

    2009-10-01

    Full Text Available Using XML (Extensible Markup Language) processing can result in significant runtime overhead in an XML-based infrastructural middleware, such as a multi-threaded server application. Based on well-formed pairs of named marking tags, an XML element's structure contributes to its cross-platform and vendor-neutrality characteristics, but requires extra computation in processing. In this paper we analyze the XML processing overhead of the best-known XML parsers and decide which parser is most suited for a multi-threaded server application that will process many streams of XML content, streams that can be very large. Based on the analysis that was carried out, we offer a solution for XML binding.

  18. FNCS: A Framework for Power System and Communication Networks Co-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.; Fisher, Andrew R.; Marinovici, Laurentiu D.; Agarwal, Khushbu

    2014-04-13

    This paper describes the Fenix framework, which uses a federated approach for integrating power grid and communication network simulators. Compared to existing approaches, Fenix allows co-simulation of both transmission- and distribution-level power grid simulators with the communication network simulator. To reduce the performance overhead of time synchronization, Fenix utilizes optimistic synchronization strategies that make speculative decisions about when the simulators are going to exchange messages. GridLAB-D (a distribution simulator), PowerFlow (a transmission simulator), and ns-3 (a telecommunication simulator) are integrated with the framework and are used to illustrate the enhanced performance provided by speculative multi-threading on a smart grid application. Our speculative multi-threading approach achieved on average a 20% improvement over the existing synchronization methods.

  19. GPU technology as a platform for accelerating local complexity analysis of protein sequences.

    Science.gov (United States)

    Papadopoulos, Agathoklis; Kirmitzoglou, Ioannis; Promponas, Vasilis J; Theocharides, Theocharis

    2013-01-01

    The use of the GPGPU programming paradigm (running CUDA-enabled algorithms on GPU cards) in Bioinformatics has shown promising results [1]. As such, a similar approach can be used to speed up other algorithms such as CAST, a popular tool used for masking low-complexity regions (LCRs) in protein sequences [2] with increased sensitivity. We developed and implemented a CUDA-enabled version (GPU_CAST) of the multi-threaded version of the CAST software first presented in [3] and optimized in [4]. The proposed software implementation uses the nVIDIA CUDA libraries and the GPGPU programming paradigm to take advantage of the inherent parallel characteristics of the CAST algorithm to execute the calculations on the GPU card of the host computer system. The GPU-based implementation presented in this work is compared against the multi-threaded, multi-core optimized version of CAST [4] and yielded speedups of 5x-10x for large protein sequence datasets.

  20. SPMTM: A Novel ScratchPad Memory Based Hybrid Nested Transactional Memory Framework

    Science.gov (United States)

    Feng, Degui; Jiang, Guanjun; Zhang, Tiefei; Hu, Wei; Chen, Tianzhou; Cao, Mingteng

    The chip multiprocessor (CMP) has become the mainstream of processor design with the progress in semiconductor technology. It provides higher concurrency for threads compared with the traditional single-core processor. Lock-based synchronization of multiple threads has been shown to be an inefficient approach with high overhead. Previous works show that transactional memory (TM) is an efficient solution for synchronizing multiple threads. This paper presents SPMTM, a novel on-chip-memory-based nested TM framework. The on-chip memory used in this framework is not cache but scratchpad memory (SPM), which is software-controlled SRAM on chip. TM information is stored in SPM to enhance access speed and reduce power consumption in SPMTM. Experimental results show that SPMTM obtains an average 16.3% performance improvement on the benchmarks compared with lock-based synchronization, and the improvement becomes more significant as the number of processor cores increases.

  1. Monte Carlo simulation on Graphical Processor Unit of the scattered beam in radiography non-destructive testing context

    Science.gov (United States)

    Tisseur, David; Andrieux, Alexan; Costin, Marius; Vabre, Alexandre

    2014-06-01

    CEA-LIST develops the CIVA software for non-destructive testing simulation. Radiography Monte Carlo simulation of the scattered beam can be quite long (several hours) even with a multi-threaded CPU implementation. In order to reduce this computation time, we have modified and adapted for CIVA a GPU open source code named MCGPU. This paper presents our work and the results of a cross comparison between CIVA and the modified MCGPU code in an NDT context.

  2. Whole record surveillance is superior to chief complaint surveillance for predicting influenza.

    Science.gov (United States)

    Welsh, Gail; Wahner-Roedler, Dietlind; Froehling, David Arthur; Trusko, Brett; Elkin, Peter

    2008-11-06

    Matched records of positive and negative influenza cases were parsed with a Natural Language Processor, the Multi-threaded Clinical Vocabulary Server (MCVS). Output was coded into SNOMED-CT reference terminology and compared to the SNOMED case definition of influenza. Odds ratios for each element of the influenza case definition by each section of the record were used to generate ROC curves. C-statistics showed that whole record surveillance was superior to chief complaint surveillance for predicting influenza.

  3. A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions

    OpenAIRE

    2010-01-01

    We present a complete, end-to-end computer-aided detection (CAD) system for identifying lesions in the colon, imaged with computed tomography (CT). This system includes facilities for colon segmentation, candidate generation, feature analysis, and classification. The algorithms have been designed to offer robust performance to variation in image data and patient preparation. By utilizing efficient 2D and 3D processing, software optimizations, multi-threading, feature selection, and an optimiz...

  4. An affine-intuitionistic system of types and effects: confluence and termination

    CERN Document Server

    Amadio, Roberto; Madet, Antoine

    2010-01-01

    We present an affine-intuitionistic system of types and effects which can be regarded as an extension of Barber-Plotkin Dual Intuitionistic Linear Logic to multi-threaded programs with effects. In the system, dynamically generated values such as references or channels are abstracted into a finite set of regions. We introduce a discipline of region usage that entails the confluence (and hence determinacy) of the typable programs. Further, we show that a discipline of region stratification guarantees termination.

  5. Parallel Performance of MPI Sorting Algorithms on Dual-Core Processor Windows-Based Systems

    CERN Document Server

    Elnashar, Alaa Ismail

    2011-01-01

    Message Passing Interface (MPI) is widely used to implement parallel programs. Although Windows-based architectures provide the facilities of parallel execution and multi-threading, little attention has been focused on using MPI on these platforms. In this paper we use a dual-core Windows-based platform to study the effect of the number of parallel processes and also the number of cores on the performance of three MPI parallel implementations of some sorting algorithms.
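
    As a rough illustration of the set-up such a study measures (a hedged C++/MPI sketch, not one of the paper's implementations), each process can sort a local block and the root can merge the gathered blocks; the process count is then varied through the MPI launcher.

      // Minimal MPI sorting sketch (illustrative only).
      // Each process sorts a local block; the root gathers and merges them.
      // Run with e.g.: mpiexec -n 4 ./sort
      #include <mpi.h>
      #include <algorithm>
      #include <vector>
      #include <cstdlib>
      #include <cstdio>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);
          int rank = 0, size = 1;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          const int n_local = 1 << 16;
          std::vector<int> local(n_local);
          std::srand(42 + rank);
          for (int& v : local) v = std::rand();
          std::sort(local.begin(), local.end());           // local sort phase

          std::vector<int> all;
          if (rank == 0) all.resize(static_cast<size_t>(n_local) * size);
          MPI_Gather(local.data(), n_local, MPI_INT,
                     all.data(), n_local, MPI_INT, 0, MPI_COMM_WORLD);

          if (rank == 0) {
              // Merge the sorted blocks pairwise into one sorted sequence.
              for (int p = 1; p < size; ++p)
                  std::inplace_merge(all.begin(),
                                     all.begin() + static_cast<long>(p) * n_local,
                                     all.begin() + static_cast<long>(p + 1) * n_local);
              std::printf("sorted %zu elements on %d processes\n", all.size(), size);
          }
          MPI_Finalize();
          return 0;
      }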

  6. Experiences with OpenMP in tmLQCD

    Energy Technology Data Exchange (ETDEWEB)

    Deuzeman, A. [Bern Univ. (Switzerland). Albert Einstein Center for Fundamental Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Kostrzewa, B. [Humboldt Univ. Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Urbach, C. [Bonn Univ. (Germany). HISKP (Theory); Collaboration: European Twisted Mass Collaboration

    2013-11-15

    An overview is given of the lessons learned from the introduction of multi-threading using OpenMP in tmLQCD. In particular, programming style, performance measurements, cache misses, scaling, thread distribution for hybrid codes, race conditions, the overlapping of communication and computation and the measurement and reduction of certain overheads are discussed. Performance measurements and sampling profiles are given for different implementations of the hopping matrix computational kernel.
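
    One of the themes listed above, overlapping communication and computation, can be sketched in a hedged way as follows (illustrative C++/OpenMP only, not the tmLQCD hopping-matrix code; the halo exchange is simulated with a sleep where MPI calls would go): one thread performs the exchange while the remaining threads update interior sites, and a barrier separates this from the boundary work that needs the halos.

      // Sketch of overlapping "communication" with interior computation inside
      // one OpenMP parallel region (illustrative only).
      #include <omp.h>
      #include <vector>
      #include <chrono>
      #include <thread>
      #include <cstdio>

      int main() {
          const int n_interior = 1 << 20;
          std::vector<double> field(n_interior, 1.0);
          double interior_sum = 0.0;

          #pragma omp parallel
          {
              const int tid = omp_get_thread_num();
              const int nthreads = omp_get_num_threads();

              if (tid == 0) {
                  // Simulated halo exchange (would be MPI_Isend/MPI_Irecv + waits).
                  std::this_thread::sleep_for(std::chrono::milliseconds(10));
              } else {
                  // Remaining threads split the interior sites among themselves.
                  double local = 0.0;
                  for (int i = tid - 1; i < n_interior; i += nthreads - 1)
                      local += field[i] * 0.5;
                  #pragma omp atomic
                  interior_sum += local;
              }
              #pragma omp barrier
              // Boundary computation that needs the exchanged halos starts here.
          }
          std::printf("interior sum = %f\n", interior_sum);
          return 0;
      }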

  7. Change management in multicultural organisations

    OpenAIRE

    Parkes, Aneta

    2016-01-01

    The book fits into a multidisciplinary research approach. The articles are the result of research conducted by eminent international economists, authors representing academic centres in different countries. The articles address current phenomena observed in the global economy. The authors do not aspire to comprehensively explain all the very complex and multi-dimensional economic developments, but illustrate many of these phenomena in an original way. The multi-threaded and multi-dimensional ...

  8. Systemic Assurance

    Science.gov (United States)

    2015-07-31

    simulations, cyber-physical robotic systems, and extremely large commercial Java programs. An important goal is to develop incrementally composable... randomness and stochastic behavior, both as integral to algorithm design and as a consequence of multi-threading, (3) concurrency and distribution, as the... should be involved in all four threads of the effort: Baseline Analysis of Codified Best Practices. The baselining analysis for existing practices has

  9. Energy Efficiency Studies of Mont Blanc Applications

    OpenAIRE

    2013-01-01

    In this thesis, the performance and energy efficiency of four different implementations of matrix multiplication, written in OmpSs and OpenCL, is tested and evaluated. The benchmarking is done using an Intel Ivy Bridge Core i7 3770K. The results are evaluated and discussed with regards to different optimization configurations, like vectorization and multi-threading. Energy measurements are taken using PAPI, which in turn uses the Running Average Power Limit interface in the Intel processor to...

  10. A modular simulation framework for colonoscopy using a new haptic device.

    Science.gov (United States)

    Hellier, David; Samur, Evren; Passenger, Josh; Spälter, Ulrich; Frimmel, Hans; Appleyard, Mark; Bleuler, Hannes; Ourselin, Sébastien

    2008-01-01

    We have developed a multi-threaded framework for colonoscopy simulation utilising OpenGL with an interface to a real-time prototype colonoscopy haptic device. A modular framework has enabled us to support multiple haptic devices and efficiently integrate new research into physically based modelling of the colonoscope, colon and surrounding organs. The framework supports GPU accelerated algorithms as runtime modules, allowing the real-time calculations required for haptic feedback.

  11. The competencies of global managers in multinational corporations

    OpenAIRE

    Czarnecka, Aleksandra; Szymura-Tyc, Maja

    2016-01-01

    The book fits into a multidisciplinary research approach. The articles are the result of research conducted by eminent international economists, authors representing academic centres in different countries. The articles address current phenomena observed in the global economy. The authors do not aspire to comprehensively explain all the very complex and multi-dimensional economic developments, but illustrate many of these phenomena in an original way. The multi-threaded and multi-dimensional ...

  12. Precise Thread-Modular Abstract Interpretation of Concurrent Programs Using Relational Interference Abstractions

    OpenAIRE

    Monat, Raphaël; Miné, Antoine

    2017-01-01

    International audience; We present a static analysis by abstract interpretation of numeric properties in multi-threaded programs. The analysis is sound (assuming a sequentially consistent memory), parameterized by a choice of abstract domains and, in order to scale up, it is modular, in that it iterates over each thread individually (possibly several times) instead of iterating over their product. We build on previous work that formalized rely-guarantee verification methods as a concrete, fix...

  13. A Global 3D P-Velocity Model of the Earth's Crust and Mantle for Improved Event Location

    Science.gov (United States)

    Ballard, S.; Young, C. J.; Hipp, J. R.; Chang, M.; Lewis, J.; Begnaud, M. L.; Rowe, C. A.

    2009-12-01

    Effectively monitoring for small nuclear tests (Java-based distributed computing framework developed by Sandia National Laboratories (SNL), providing us with 300+ processors having an efficiency of better than 90% for the calculations. We evaluate our model both in terms of travel time residual variance reduction and in location improvement for GT events. For the latter, we use a new multi-threaded version of the SNL-developed LocOO code modified to use 3D velocity models.

  14. Software Development for Digital Control of WDW Series Testing Machine and Measurement of KIC

    Institute of Scientific and Technical Information of China (English)

    黄兴; 马杭; 程昌钧

    2005-01-01

    Software has been developed for digital control of the WDW series testing machine and the measurement of fracture toughness using a modularized design. Development of the software makes use of multi-thread and serial communication techniques, which can accurately control the testing machine and measure the fracture toughness in real time. Three-point bending specimens were used in the measurement. The software operates stably and reliably, expanding the function of the WDW series testing machine.

  15. An Implementation of IP-Phone Gateway

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    To implement voice service over a packet-based network (Internet) and a TDM-based network (PSTN, Public Switched Telephone Network), an IP phone gateway is necessary to transform the media stream and convert the signaling protocols used over the two networks. In this article, the architecture of the IP-Phone gateway is described first. Then the communication mechanism between functional blocks and multi-threading considerations are presented.

  16. Time-predictable architectures

    CERN Document Server

    Rochange, Christine; Uhrig, Sascha

    2014-01-01

    Building computers that can be used to design embedded real-time systems is the subject of this title. Real-time embedded software requires increasingly higher performances. The authors therefore consider processors that implement advanced mechanisms such as pipelining, out-of-order execution, branch prediction, cache memories, multi-threading, multicore architectures, etc. The authors of this book investigate the time predictability of such schemes.

  17. Research on Parallel On-Line Analysis Processing Algorithms on Multi-Core CPUs

    Institute of Scientific and Technical Information of China (English)

    周国亮; 王桂兰; 朱永利

    2013-01-01

    Computer hardware technology has developed greatly, especially large memory and multi-core processors, but algorithm efficiency does not improve automatically with the development of hardware. The fundamental reasons are insufficient utilization of the CPU cache and the limitations of single-threaded programming. In the field of OLAP (on-line analysis processing), data cube computation is an important and time-consuming operation, so improving the performance of data cube computation is a difficult research point in this field. Based on the characteristics of multi-core CPUs, this paper proposes two parallel algorithms, MT-Multi-Way (multi-threading multi-way) and MT-BUC (multi-threading bottom-up computation), which utilize data partitioning and multi-thread cooperation. These algorithms avoid cache contention between threads and keep the load balanced, and so obtain near-linear speedup. Based on these algorithms, this paper suggests a unified framework for cube computation on multi-core CPUs, covering how to partition data and handle recursion on multi-core CPUs to guide the parallelization of cube computation.
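
    The data-partition and thread-cooperation idea summarized above can be illustrated with a hedged sketch (C++ with std::thread; not the MT-Multi-Way or MT-BUC algorithms, and the fact table and names are hypothetical): each thread aggregates its own contiguous partition into a thread-local map, avoiding contended writes, and the partial results are merged at the end.

      // Thread-local aggregation over partitioned data (illustrative only).
      #include <thread>
      #include <vector>
      #include <unordered_map>
      #include <algorithm>
      #include <cstdio>

      struct Row { int dim; double measure; };

      int main() {
          std::vector<Row> fact;                                // toy fact table
          for (int i = 0; i < 1000000; ++i) fact.push_back({i % 100, 1.0});

          const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
          std::vector<std::unordered_map<int, double>> partial(nthreads);
          std::vector<std::thread> workers;

          for (unsigned t = 0; t < nthreads; ++t) {
              workers.emplace_back([&, t]() {
                  // Contiguous partition per thread: good locality, no shared writes.
                  size_t begin = fact.size() * t / nthreads;
                  size_t end   = fact.size() * (t + 1) / nthreads;
                  for (size_t i = begin; i < end; ++i)
                      partial[t][fact[i].dim] += fact[i].measure;
              });
          }
          for (auto& w : workers) w.join();

          std::unordered_map<int, double> cube;                 // merged group-by
          for (const auto& p : partial)
              for (const auto& kv : p) cube[kv.first] += kv.second;

          std::printf("groups: %zu, total for dim 0: %f\n", cube.size(), cube[0]);
          return 0;
      }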

  18. Parallel computing of discrete element method on multi-core processors

    Institute of Scientific and Technical Information of China (English)

    Yusuke Shigeto; Mikio Sakai

    2011-01-01

    This paper describes parallel simulation techniques for the discrete element method (DEM) on multi-core processors. Recently, multi-core CPU and GPU processors have attracted much attention in accelerating computer simulations in various fields. We propose a new algorithm for multi-thread parallel computation of DEM, which makes effective use of the available memory and accelerates the computation. This study shows that memory usage is drastically reduced by using this algorithm. To show the practical use of DEM in industry, a large-scale powder system is simulated with a complicated drive unit. We compared the performance of the simulation between the latest GPU and CPU processors with optimized programs for each processor. The results show that the difference in performance is not substantial when using either GPUs or CPUs with a multi-thread parallel algorithm. In addition, the DEM algorithm is shown to have high scalability in a multi-thread parallel computation on a CPU.
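
    A hedged sketch of a multi-thread DEM force loop (illustrative C++/OpenMP, not the algorithm proposed in the paper; an all-pairs search is used for brevity where a neighbour list would normally be used) shows the common pattern of letting each thread write only the forces of the particles it owns, so the parallel loop needs no locks.

      // Illustrative multi-thread DEM contact-force loop (sketch only).
      #include <omp.h>
      #include <vector>
      #include <cmath>
      #include <cstdio>

      struct Particle { double x, y, fx, fy; };

      int main() {
          const int n = 10000;
          const double radius = 1.0, stiffness = 100.0;
          std::vector<Particle> p(n);
          for (int i = 0; i < n; ++i)
              p[i] = { (i % 100) * 1.5, (i / 100) * 1.5, 0.0, 0.0 };

          // O(n^2) contact detection for clarity; a cell/neighbour list would
          // be used in practice to reduce the cost.
          #pragma omp parallel for schedule(static)
          for (int i = 0; i < n; ++i) {
              double fx = 0.0, fy = 0.0;
              for (int j = 0; j < n; ++j) {
                  if (j == i) continue;
                  double dx = p[i].x - p[j].x, dy = p[i].y - p[j].y;
                  double dist = std::sqrt(dx * dx + dy * dy);
                  double overlap = 2.0 * radius - dist;
                  if (overlap > 0.0 && dist > 0.0) {      // linear contact spring
                      fx += stiffness * overlap * dx / dist;
                      fy += stiffness * overlap * dy / dist;
                  }
              }
              p[i].fx = fx;                                // thread-private write
              p[i].fy = fy;
          }
          std::printf("f[0] = (%f, %f)\n", p[0].fx, p[0].fy);
          return 0;
      }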

  19. Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA

    Science.gov (United States)

    Oliker, Leonid; Biswas, Rupak

    1999-01-01

    The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.

  20. Scientific Visualization Made Easy for the Scientist

    Science.gov (United States)

    Westerhoff, M.; Henderson, B.

    2002-12-01

    amira is an application program used in creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the market place since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard, amiraDev, used to extend the product capabilities by users, amiraMol, used for molecular visualization, amiraDeconv, used to improve quality of image data, and amiraVR, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats including a 'raw' format allowing immediate access to your native uniform data sets. amira uses the power and speed of the OpenGL and Open Inventor graphics libraries and 3D graphics accelerators to allow you to access over 145 modules, enabling you to process, probe, analyze and visualize your data. The amiraMol extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol contains support for standard molecular file formats, tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. Deconvolution is the