WorldWideScience

Sample records for high-performance fortran hpf

  1. Scientific Programming with High Performance Fortran: A Case Study Using the xHPF Compiler

    Directory of Open Access Journals (Sweden)

    Eric De Sturler

    1997-01-01

    Recently, the first commercial High Performance Fortran (HPF) subset compilers have appeared. This article reports on our experiences with the xHPF compiler of Applied Parallel Research, version 1.2, for the Intel Paragon. At this stage, we do not expect very high performance from our HPF programs, even though performance will eventually be of paramount importance for the acceptance of HPF. Instead, our primary objective is to study how to convert large Fortran 77 (F77) programs to HPF such that the compiler generates reasonably efficient parallel code. We report on a case study that identifies several problems when parallelizing code with HPF; most of these problems affect current HPF compiler technology in general, although some are specific to the xHPF compiler. We discuss our solutions from the perspective of the scientific programmer, and present timing results on the Intel Paragon. The case study comprises three programs of different complexity with respect to parallelization. We use the dense matrix-matrix product to show that the distribution of arrays and the order of nested loops significantly influence the performance of the parallel program. We use Gaussian elimination with partial pivoting to study the parallelization strategy of the compiler. There are various ways to structure this algorithm for a particular data distribution. This example shows how much effort may be demanded from the programmer to support the compiler in generating an efficient parallel implementation. Finally, we use a small application to show that the more complicated structure of a larger program may introduce problems for the parallelization, even though all subroutines of the application are easy to parallelize by themselves. The application consists of a finite volume discretization on a structured grid and a nested iterative solver. Our case study shows that it is possible to obtain reasonably efficient parallel programs with xHPF, although the compiler […]
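
    A minimal, hypothetical sketch (not code from the article) of the kind of HPF matrix-matrix product the case study discusses. The array names, sizes, BLOCK distribution, and loop order are illustrative assumptions; as the abstract notes, both the distribution directives and the nesting order of the loops strongly affect the performance the compiler can achieve.

      program hpf_matmul_sketch
        ! Rows of a and c are block-distributed; b carries no directive, so the
        ! compiler maps (typically replicates) it as it sees fit.
        implicit none
        integer, parameter :: n = 512
        real :: a(n,n), b(n,n), c(n,n)
        integer :: i, j, k
      !HPF$ DISTRIBUTE a(BLOCK,*)
      !HPF$ DISTRIBUTE c(BLOCK,*)
        a = 1.0
        b = 2.0
        c = 0.0
      !HPF$ INDEPENDENT, NEW(j,k)
        do i = 1, n                 ! each row of c is owned by exactly one processor
           do k = 1, n
              do j = 1, n
                 c(i,j) = c(i,j) + a(i,k)*b(k,j)
              end do
           end do
        end do
        print *, c(1,1)
      end program hpf_matmul_sketch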

  2. An Introduction to High Performance Fortran

    Directory of Open Access Journals (Sweden)

    John Merlin

    1995-01-01

    High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.
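
    To make the two ingredients mentioned above concrete, here is a minimal illustrative fragment (not taken from the tutorial); the processor arrangement, array names, and sizes are arbitrary assumptions:

      program hpf_intro_sketch
        implicit none
        integer, parameter :: n = 1000
        real :: x(n), y(n)
        integer :: i
      !HPF$ PROCESSORS p(4)
      !HPF$ DISTRIBUTE x(BLOCK) ONTO p   ! contiguous blocks of x, one per abstract processor
      !HPF$ ALIGN y WITH x               ! keep y(i) on the same processor as x(i)
        x = 1.0
        forall (i = 1:n) y(i) = 2.0*x(i) ! concurrent, data-parallel update
        print *, sum(y)                  ! reduction over the distributed array
      end program hpf_intro_sketch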

  3. Strategies and Experiences Using High Performance Fortran

    National Research Council Canada - National Science Library

    Shires, Dale

    2001-01-01

    .... High Performance Fortran (HPF) is a relatively new addition to the Fortran dialects. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of ... been debatable...

  4. PGHPF – An Optimizing High Performance Fortran Compiler for Distributed Memory Machines

    Directory of Open Access Journals (Sweden)

    Zeki Bozkus

    1997-01-01

    High Performance Fortran (HPF) is the first widely supported, efficient, and portable parallel programming language for shared and distributed memory systems. HPF is realized through a set of directive-based extensions to Fortran 90. It enables application developers and Fortran end-users to write compact, portable, and efficient software that will compile and execute on workstations, shared memory servers, clusters, traditional supercomputers, or massively parallel processors. This article describes a production-quality HPF compiler for a set of parallel machines. Compilation techniques such as data and computation distribution, communication generation, run-time support, and optimization issues are elaborated as the basis for an HPF compiler implementation on distributed memory machines. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved executing HPF code on parallel architectures.

  5. Visualization of Distributed Data Structures for High Performance Fortran-Like Languages

    Directory of Open Access Journals (Sweden)

    Rainer Koppler

    1997-01-01

    This article motivates the usage of graphics and visualization for efficient utilization of High Performance Fortran's (HPF's) data distribution facilities. It proposes a graphical toolkit consisting of exploratory and estimation tools which allow the programmer to navigate through complex distributions and to obtain graphical ratings with respect to load distribution and communication. The toolkit has been implemented in a mapping design and visualization tool which is coupled with a compilation system for the HPF predecessor Vienna Fortran. Since this language covers a superset of HPF's facilities, the tool may also be used for visualization of HPF data structures.

  6. High Performance Fortran for Aerospace Applications

    National Research Council Canada - National Science Library

    Mehrotra, Piyush

    2000-01-01

    .... HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications while delegating to the compiler/runtime system the task...

  7. Performance Issues in High Performance Fortran Implementations of Sensor-Based Applications

    Directory of Open Access Journals (Sweden)

    David R. O'Hallaron

    1997-01-01

    Applications that get their inputs from sensors are an important and often overlooked application domain for High Performance Fortran (HPF). Such sensor-based applications typically perform regular operations on dense arrays, and often have latency and throughput requirements that can only be achieved with parallel machines. This article describes a study of sensor-based applications, including the fast Fourier transform, synthetic aperture radar imaging, narrowband tracking radar processing, multibaseline stereo imaging, and medical magnetic resonance imaging. The applications are written in a dialect of HPF developed at Carnegie Mellon, and are compiled by the Fx compiler for the Intel Paragon. The main results of the study are that (1) it is possible to realize good performance for realistic sensor-based applications written in HPF and (2) the performance of the applications is determined by the performance of three core operations: independent loops (i.e., loops with no dependences between iterations), reductions, and index permutations. The article discusses the implications for HPF implementations and introduces some simple tests that implementers and users can use to measure the efficiency of the loops, reductions, and index permutations generated by an HPF compiler.
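
    A hypothetical sketch (not from the article) showing the three core operations named above in HPF form; the array names, sizes, and the particular permutation are illustrative assumptions:

      program hpf_core_ops_sketch
        implicit none
        integer, parameter :: n = 1024
        real :: x(n), y(n), z(n), s
        integer :: i, perm(n)
      !HPF$ DISTRIBUTE x(BLOCK)
      !HPF$ ALIGN y WITH x
      !HPF$ ALIGN z WITH x
      !HPF$ ALIGN perm WITH x
        x = 1.0
        perm = (/ (n+1-i, i = 1, n) /)   ! a simple index permutation (reversal)
      !HPF$ INDEPENDENT
        do i = 1, n                      ! independent loop: no dependences across iterations
           y(i) = x(i)*x(i)
        end do
        s = sum(y)                       ! reduction over the distributed array
        z = y(perm)                      ! index permutation: a gather that forces communication
        print *, s, z(1)
      end program hpf_core_ops_sketch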

  8. Object-oriented accelerator design with HPF

    International Nuclear Information System (INIS)

    Ji Qiang; Ryne, R.D.; Habib, S.

    1998-01-01

    In this paper, object-oriented design is applied to codes for beam dynamics simulations in accelerators using High Performance Fortran (HPF). This results in good maintainability, reusability, and extensibility of software, combined with the ease of parallel programming provided by HPF

  9. Object-oriented accelerator design with HPF

    Energy Technology Data Exchange (ETDEWEB)

    Ji Qiang; Ryne, R.D.; Habib, S.

    1998-12-31

    In this paper, object-oriented design is applied to codes for beam dynamics simulations in accelerators using High Performance Fortran (HPF). This results in good maintainability, reusability, and extensibility of software, combined with the ease of parallel programming provided by HPF.

  10. VFC: The Vienna Fortran Compiler

    Directory of Open Access Journals (Sweden)

    Siegfried Benkner

    1999-01-01

    High Performance Fortran (HPF) offers an attractive high‐level language interface for programming scalable parallel architectures, providing the user with directives for the specification of data distribution and delegating to the compiler the task of generating an explicitly parallel program. Available HPF compilers can handle regular codes quite efficiently, but dramatic performance losses may be encountered for applications which are based on highly irregular, dynamically changing data structures and access patterns. In this paper we introduce the Vienna Fortran Compiler (VFC), a new source‐to‐source parallelization system for HPF+, an optimized version of HPF, which addresses the requirements of irregular applications. In addition to extended data distribution and work distribution mechanisms, HPF+ provides the user with language features for specifying certain information that decisively influences a program’s performance. This comprises data locality assertions, non‐local access specifications and the possibility of reusing runtime‐generated communication schedules of irregular loops. Performance measurements of kernels from advanced applications demonstrate that with a high‐level data parallel language such as HPF+ a performance close to hand‐written message‐passing programs can be achieved even for highly irregular codes.

  11. Kemari: A Portable High Performance Fortran System for Distributed Memory Parallel Processors

    Directory of Open Access Journals (Sweden)

    T. Kamachi

    1997-01-01

    We have developed a compilation system which extends High Performance Fortran (HPF) in various aspects. We support the parallelization of well-structured problems with loop distribution and alignment directives similar to HPF's data distribution directives. Such directives give both additional control to the user and simplify the compilation process. For the support of unstructured problems, we provide directives for dynamic data distribution through user-defined mappings. The compiler also allows integration of message-passing interface (MPI) primitives. The system is part of a complete programming environment which also comprises a parallel debugger and a performance monitor and analyzer. After an overview of the compiler, we describe the language extensions and related compilation mechanisms in detail. Performance measurements demonstrate the compiler's applicability to a variety of application classes.

  12. A Linear Algebra Framework for Static High Performance Fortran Code Distribution

    Directory of Open Access Journals (Sweden)

    Corinne Ancourt

    1997-01-01

    High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.

  13. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model for the δf Monte-Carlo transport simulation, which includes non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamism of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, the development of the transport code using HPF is reported. Optimization techniques used to achieve both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  14. DDT: A Research Tool for Automatic Data Distribution in High Performance Fortran

    Directory of Open Access Journals (Sweden)

    Eduard Ayguadé

    1997-01-01

    This article describes the main features and implementation of our automatic data distribution research tool. The tool (DDT) accepts programs written in Fortran 77 and generates High Performance Fortran (HPF) directives to map arrays onto the memories of the processors and parallelize loops, and executable statements to remap these arrays. DDT works by identifying a set of computational phases (procedures and loops). The algorithm builds a search space of candidate solutions for these phases which is explored looking for the combination that minimizes the overall cost; this cost includes data movement cost and computation cost. The movement cost reflects the cost of accessing remote data during the execution of a phase and the remapping costs that have to be paid in order to execute the phase with the selected mapping. The computation cost includes the cost of executing a phase in parallel according to the selected mapping and the owner computes rule. The tool supports interprocedural analysis and uses control flow information to identify how phases are sequenced during the execution of the application.
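
    The article gives no code; the hypothetical fragment below only illustrates the kind of output such a tool produces, namely a plain Fortran 77 loop annotated with HPF mapping directives. The routine name, arrays, and chosen distribution are invented for the example.

      ! Without the three !HPF$ lines this is the plain Fortran 77 "phase" the
      ! tool would analyze; with them it is the annotated version a tool like
      ! DDT could emit (distribution chosen to minimize movement and computation cost).
      subroutine phase1(a, b, n)
        integer n, i
        real a(n), b(n)
      !HPF$ DISTRIBUTE a(BLOCK)
      !HPF$ ALIGN b WITH a
      !HPF$ INDEPENDENT
        do i = 1, n
           a(i) = a(i) + b(i)
        end do
      end subroutine phase1

      program ddt_demo
        integer, parameter :: n = 8
        real a(n), b(n)
        a = 1.0
        b = 2.0
        call phase1(a, b, n)
        print *, a
      end program ddt_demo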

  15. High Performance Object-Oriented Scientific Programming in Fortran 90

    Science.gov (United States)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  16. High-Level Management of Communication Schedules in HPF-like Languages

    National Research Council Canada - National Science Library

    Benkner, Siegfried

    1997-01-01

    .... For some applications, this approach may result in dramatic performance losses. An important example is the inspector/executor paradigm, which HPF uses to support irregular data accesses in parallel loops...

  17. Application of a parallel 3-dimensional hydrogeochemistry HPF code to a proposed waste disposal site at the Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Gwo, Jin-Ping; Yeh, Gour-Tsyh

    1997-01-01

    The objectives of this study are (1) to parallelize a 3-dimensional hydrogeochemistry code and (2) to apply the parallel code to a proposed waste disposal site at the Oak Ridge National Laboratory (ORNL). The 2-dimensional hydrogeochemistry code HYDROGEOCHEM, developed at the Pennsylvania State University for coupled subsurface solute transport and chemical equilibrium processes, was first modified to accommodate 3-dimensional problem domains. A bi-conjugate gradient stabilized linear matrix solver was then incorporated to solve the matrix equation. We chose to parallelize the 3-dimensional code on the Intel Paragons at ORNL by using an HPF (high performance FORTRAN) compiler developed at PGI. The data- and task-parallel algorithms available in the HPF compiler proved to be highly efficient for the geochemistry calculation. This calculation can be easily implemented in HPF formats and is perfectly parallel because the chemical speciation on one finite-element node is virtually independent of those on the others. The parallel code was applied to a subwatershed of the Melton Branch at ORNL. Chemical heterogeneity, in addition to physical heterogeneities of the geological formations, has been identified as one of the major factors that affect the fate and transport of contaminants at ORNL. This study demonstrated an application of the 3-dimensional hydrogeochemistry code on the Melton Branch site. A uranium tailings problem involving aqueous complexation and precipitation-dissolution was tested. Performance statistics were collected on the Intel Paragons at ORNL. Implications of these results for the further optimization of the code were discussed.

  18. LFK, FORTRAN Application Performance Test

    International Nuclear Information System (INIS)

    McMahon, F.H.

    1991-01-01

    1 - Description of program or function: LFK, the Livermore FORTRAN Kernels, is a computer performance test that measures a realistic floating-point performance range for FORTRAN applications. Informally known as the Livermore Loops test, the LFK test may be used as a computer performance test, as a test of compiler accuracy (via checksums) and efficiency, or as a hardware endurance test. The LFK test, which focuses on FORTRAN as used in computational physics, measures the joint performance of the computer CPU, the compiler, and the computational structures in units of Mega-flops/sec or Mflops. A C language version of subroutine KERNEL is also included which executes 24 samples of C numerical computation. The 24 kernels are a hydrodynamics code fragment, a fragment from an incomplete Cholesky conjugate gradient code, the standard inner product function of linear algebra, a fragment from a banded linear equations routine, a segment of a tridiagonal elimination routine, an example of a general linear recurrence equation, an equation of state fragment, part of an alternating direction implicit integration code, an integrate predictor code, a difference predictor code, a first sum, a first difference, a fragment from a two-dimensional particle-in-cell code, a part of a one-dimensional particle-in-cell code, an example of how casually FORTRAN can be written, a Monte Carlo search loop, an example of an implicit conditional computation, a fragment of a two-dimensional explicit hydrodynamics code, a general linear recurrence equation, part of a discrete ordinates transport program, a simple matrix calculation, a segment of a Planck distribution procedure, a two-dimensional implicit hydrodynamics fragment, and determination of the location of the first minimum in an array. 2 - Method of solution: CPU performance rates depend strongly on the maturity of FORTRAN compiler machine code optimization. The LFK test-bed executes the set of 24 kernels three times, resetting the DO
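
    The record lists the 24 kernels but no source; the fragment below is a hedged sketch in the spirit of the first kernel (the hydrodynamics code fragment), with illustrative coefficients and loop bound rather than the official benchmark values.

      program lfk_kernel1_sketch
        implicit none
        integer, parameter :: n = 1001
        real :: x(n), y(n), zx(n+11)
        real :: q, r, t
        integer :: k
        q = 0.5;  r = 0.25;  t = 0.125
        y = 1.0;  zx = 1.0
        do k = 1, n                       ! the hydro fragment: a simple streaming update
           x(k) = q + y(k)*(r*zx(k+10) + t*zx(k+11))
        end do
        print *, x(1), x(n)
      end program lfk_kernel1_sketch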

  19. An Algebraic Machinery for Optimizing Data Motion for HPF

    Directory of Open Access Journals (Sweden)

    Jan-Jan Wu

    1997-01-01

    This paper describes a general compiler optimization technique that reduces communication overhead for FORTRAN-90 (and High Performance FORTRAN) implementations on massively parallel machines.

  20. Fortran

    CERN Document Server

    Marateck, Samuel L

    1977-01-01

    FORTRAN is written for students who have no prior knowledge of computers or programming. The book aims to teach students how to program using the FORTRAN language. The publication first elaborates on an introduction to computers and programming, introduction to FORTRAN, and calculations and the READ statement. Discussions focus on flow charts, rounding numbers, strings, executing the program, the WRITE and FORMAT statements, performing an addition, input and output devices, and algorithms. The text then takes a look at functions and the IF statement and the DO Loop, the IF-THEN-ELSE and the WHI

  1. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    Directory of Open Access Journals (Sweden)

    Matthew O'keefe

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  2. Standard Fortran

    International Nuclear Information System (INIS)

    Marshall, N.H.

    1981-01-01

    Because of its vast software investment in Fortran programs, the nuclear community has an inherent interest in the evolution of Fortran. This paper reviews the impact of the new Fortran 77 standard and discusses the projected changes which can be expected in the future

  3. HPF: The Habitable Zone Planet Finder at the Hobby-Eberly Telescope

    Science.gov (United States)

    Wright, Jason T.; Mahadevan, Suvrath; Hearty, Fred; Monson, Andy; Stefansson, Gudmundur; Ramsey, Larry; Ninan, Joe; Bender, Chad; Kaplan, Kyle; Roy, Arpita; Terrien, Ryan; Robertson, Paul; Halverson, Sam; Schwab, Christian; Kanodia, Shubham

    2018-01-01

    The Habitable Zone Planet Finder (HPF) is an ultra-stable NIR (ZYJ) high resolution echelle spectrograph on the 10-m Hobby-Eberly Telescope capable of 1-3 m/s Doppler velocimetry on nearby late M dwarfs (M4-M9). This precision is sufficient to detect terrestrial planets in the Habitable Zones of these relatively unexplored stars. Here we present its capabilities and early commissioning results.

  4. A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Sergio Briguglio

    2003-01-01

    A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle in cell (PIC) codes on Clusters of Symmetric MultiProcessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model, with respect to the memory occupancy, the parallelization efficiency and the required programming effort. Such strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node one). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.

  5. Programming in Fortran M

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Olson, R.; Tuecke, S.

    1993-08-01

    Fortran M is a small set of extensions to Fortran that supports a modular approach to the construction of sequential and parallel programs. Fortran M programs use channels to plug together processes which may be written in Fortran M or Fortran 77. Processes communicate by sending and receiving messages on channels. Channels and processes can be created dynamically, but programs remain deterministic unless specialized nondeterministic constructs are used. Fortran M programs can execute on a range of sequential, parallel, and networked computers. This report incorporates both a tutorial introduction to Fortran M and a user's guide for the Fortran M compiler developed at Argonne National Laboratory. The Fortran M compiler, supporting software, and documentation are made available free of charge by Argonne National Laboratory, but are protected by a copyright which places certain restrictions on how they may be redistributed. See the software for details. The latest version of both the compiler and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/fortran-m at info.mcs.anl.gov.

  6. Aspects of FORTRAN in large-scale programming

    International Nuclear Information System (INIS)

    Metcalf, M.

    1983-01-01

    In these two lectures I examine the following three questions: i) Why did high-energy physicists begin to use FORTRAN? ii) Why do high-energy physicists continue to use FORTRAN? iii) Will high-energy physicists always use FORTRAN? In order to find answers to these questions, it is necessary to look at the history of the language, its present position, and its likely future, and also to consider its manner of use, the topic of portability, and the competition from other languages. Here we think especially of early competition from ALGOL, the more recent spread in the use of PASCAL, and the appearance of a completely new and ambitious language, ADA. (orig.)

  7. Aspects of FORTRAN in large-scale programming

    CERN Document Server

    Metcalf, M

    1983-01-01

    In these two lectures I shall try to examine the following three questions: i) Why did high-energy physicists begin to use FORTRAN? ii) Why do high-energy physicists continue to use FORTRAN? iii) Will high-energy physicists always use FORTRAN? In order to find answers to these questions, it is necessary to look at the history of the language, its present position, and its likely future, and also to consider its manner of use, the topic of portability, and the competition from other languages. Here we think especially of early competition from ALGOL, the more recent spread in the use of PASCAL, and the appearance of a completely new and ambitious language, ADA.

  8. Alternatives to FORTRAN in control systems

    International Nuclear Information System (INIS)

    Howell, J.A.; Wright, R.M.

    1985-01-01

    Control system software has traditionally been written in assembly language, FORTRAN, or Basic. Today there exist several high-level languages with features that make them convenient and effective in control systems. These features include bit manipulation, user-defined data types, character manipulation, and high-level logical operations. Some of these languages are quite different from FORTRAN and yet are easy to read and use. We discuss several languages, their features that make them convenient for control systems, and give examples of their use. We focus particular attention on the language C, developed by Bell Laboratories.

  9. MPI to Coarray Fortran: Experiences with a CFD Solver for Unstructured Meshes

    Directory of Open Access Journals (Sweden)

    Anuj Sharma

    2017-01-01

    High-resolution numerical methods and unstructured meshes are required in many applications of Computational Fluid Dynamics (CFD). These methods are quite computationally expensive and hence benefit from being parallelized. Message Passing Interface (MPI) has been utilized traditionally as a parallelization strategy. However, the inherent complexity of MPI contributes further to the existing complexity of the CFD scientific codes. The Partitioned Global Address Space (PGAS) parallelization paradigm was introduced in an attempt to improve the clarity of the parallel implementation. We present our experiences of converting an unstructured high-resolution compressible Navier-Stokes CFD solver from MPI to PGAS Coarray Fortran. We present the challenges, methodology, and performance measurements of our approach using Coarray Fortran. With the Cray compiler, we observe Coarray Fortran as a viable alternative to MPI. We are hopeful that Intel and open-source implementations could be utilized in the future.
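
    For readers unfamiliar with the PGAS style the abstract refers to, the fragment below is a minimal, hypothetical coarray sketch of a one-dimensional halo (ghost-cell) exchange; it is not taken from the solver, and the array name and sizes are arbitrary.

      program halo_exchange_sketch
        implicit none
        integer, parameter :: n = 100
        real :: u(0:n+1)[*]               ! local cells plus one ghost cell on each side
        integer :: me, np
        me = this_image()
        np = num_images()
        u  = real(me)
        sync all                          ! make sure every image has initialized u
        if (me > 1)  u(0)   = u(n)[me-1]  ! pull the left neighbour's last interior cell
        if (me < np) u(n+1) = u(1)[me+1]  ! pull the right neighbour's first interior cell
        sync all
        print *, 'image', me, 'ghost cells', u(0), u(n+1)
      end program halo_exchange_sketch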

  10. Fortran 90 for scientists and engineers

    CERN Document Server

    Hahn, Brian

    1994-01-01

    The introduction of the Fortran 90 standard is the first significant change in the Fortran language in over 20 years. This book is designed for anyone wanting to learn Fortran for the first time or for a programmer who needs to upgrade from Fortran 77 to Fortran 90. Employing a practical, problem-based approach this book provides a comprehensive introduction to the language. More experienced programmers will find it a useful update to the new standard and will benefit from the emphasis on science and engineering applications.

  11. Fortran interface layer of the framework for developing particle simulator FDPS

    Science.gov (United States)

    Namekata, Daisuke; Iwasawa, Masaki; Nitadori, Keigo; Tanikawa, Ataru; Muranushi, Takayuki; Wang, Long; Hosono, Natsuki; Nomura, Kentaro; Makino, Junichiro

    2018-06-01

    Numerical simulations based on particle methods have been widely used in various fields including astrophysics. To date, various versions of simulation software have been developed by individual researchers or research groups in each field, through a huge amount of time and effort, even though the numerical algorithms used are very similar. To improve the situation, we have developed a framework, called FDPS (Framework for Developing Particle Simulators), which enables researchers to develop massively parallel particle simulation codes for arbitrary particle methods easily. Until version 3.0, FDPS provided an API (application programming interface) for the C++ programming language only. This limitation comes from the fact that FDPS is developed using the template feature in C++, which is essential to support arbitrary data types of particle. However, there are many researchers who use Fortran to develop their codes. Thus, the previous versions of FDPS require such people to invest much time to learn C++. This is inefficient. To cope with this problem, we developed a Fortran interface layer in FDPS, which provides an API for Fortran. In order to support arbitrary data types of particle in Fortran, we design the Fortran interface layer as follows. Based on a given derived data type in Fortran representing particle, a Python script provided by us automatically generates a library that manipulates the C++ core part of FDPS. This library is seen as a Fortran module providing an API of FDPS from the Fortran side and uses C programs internally to interoperate Fortran with C++. In this way, we have overcome several technical issues when emulating a "template" in Fortran. Using the Fortran interface, users can develop all parts of their codes in Fortran. We show that the overhead of the Fortran interface part is sufficiently small and a code written in Fortran shows a performance practically identical to the one written in C++.

  12. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  13. Modern Fortran in practice

    NARCIS (Netherlands)

    Markus, A.

    2012-01-01

    From its earliest days, the Fortran programming language has been designed with computing efficiency in mind. The latest standard, Fortran 2008, incorporates a host of modern features, including object-orientation, array operations, user-defined types, and provisions for parallel computing. This

  14. FPP: A Fortran preprocessor

    International Nuclear Information System (INIS)

    Boyarski, A.

    1992-11-01

    FPP is a preprocessor which aids in porting Fortran source code across differing platforms. It provides conditional compilation features to enable or disable sections of code, and can modify file names in INCLUDE statements to a syntax suitable for a target platform. FPP is written in Fortran 77, and runs on VM/CMS, VAX/VMS, UNIX, and PC/DOS systems.

  15. A Note on Compiling Fortran

    Energy Technology Data Exchange (ETDEWEB)

    Busby, L. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2017-09-01

    Fortran modules tend to serialize compilation of large Fortran projects, by introducing dependencies among the source files. If file A depends on file B (A uses a module defined by B), you must finish compiling B before you can begin compiling A. Some Fortran compilers (Intel ifort, GNU gfortran and IBM xlf, at least) offer an option to "verify syntax", with the side effect of also producing any associated Fortran module files. As it happens, this option usually runs much faster than the object code generation and optimization phases. For some projects on some machines, it can be advantageous to compile in two passes: The first pass generates the module files, quickly; the second pass produces the object files, in parallel. We achieve a 3.8× speedup in the case study below.
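
    The dependency the note describes can be seen in a two-file sketch (file names and contents are illustrative): main.f90 cannot be compiled until constants.mod exists, which is exactly what a fast syntax-only first pass can produce before a parallel second pass generates the object files.

      ! constants.f90 -- compiling this file produces constants.mod
      module constants
        implicit none
        real, parameter :: pi = 3.14159265
      end module constants

      ! main.f90 -- "use constants" requires constants.mod to exist already
      program main
        use constants
        implicit none
        print *, 2.0*pi
      end program main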

  16. Scientific Programming in Fortran

    Directory of Open Access Journals (Sweden)

    W. Van Snyder

    2007-01-01

    The Fortran programming language was designed by John Backus and his colleagues at IBM to reduce the cost of programming scientific applications. IBM delivered the first compiler for its model 704 in 1957. IBM's competitors soon offered incompatible versions. ANSI (ASA at the time) developed a standard, largely based on IBM's Fortran IV, in 1966. Revisions of the standard were produced in 1977, 1990, 1995 and 2003. Development of a revision, scheduled for 2008, is under way. Unlike most other programming languages, Fortran is periodically revised to keep pace with developments in language and processor design, while revisions largely preserve compatibility with previous versions. Throughout, the focus on scientific programming, and especially on efficient generated programs, has been maintained.

  17. A Case Study of Some Issues in the Optimization of Fortran 90 Array Notation

    Directory of Open Access Journals (Sweden)

    John D. McCalpin

    1996-01-01

    Some issues in the relationship of coding style and compiler optimization are discussed with regard to Fortran 90 array notation. A review of several important Fortran 90 array constructs and their performance on vector and scalar hardware sets the stage for a more detailed example based on the kernel of a finite difference computational fluid dynamics model, specifically the nonlinear shallow water equations. Special attention is paid to the optimization of memory use and memory traffic. It is shown that the style of coding interacts with the rules of Fortran 90 and the current state of the art of Fortran 90 compilers to produce a fairly wide range of performance levels. Although performance degradations are typically small, a few cases of more serious loss of efficiency are identified and discussed.
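
    As a simplified stand-in for the kind of kernel the article analyzes (it is not the shallow water code itself), the fragment below writes the same three-point update once in Fortran 90 array notation and once as an explicit loop; depending on the compiler, the array form may introduce a temporary and extra memory traffic.

      program array_notation_sketch
        implicit none
        integer, parameter :: n = 1000
        real :: h(n), hnew(n)
        integer :: i
        h = 1.0
        hnew = h

        ! Fortran 90 array notation
        hnew(2:n-1) = 0.25*h(1:n-2) + 0.5*h(2:n-1) + 0.25*h(3:n)

        ! equivalent explicit loop
        do i = 2, n-1
           hnew(i) = 0.25*h(i-1) + 0.5*h(i) + 0.25*h(i+1)
        end do

        print *, hnew(n/2)
      end program array_notation_sketch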

  18. IFF, Full-Screen Input Menu Generator for FORTRAN Program

    International Nuclear Information System (INIS)

    Seidl, Albert

    1991-01-01

    1 - Description of program or function: The IFF-package contains input modules for use within FORTRAN programs. This package enables the programmer to easily include interactive menu-directed data input (module VTMEN1) and command-word processing (module INPCOM) into a FORTRAN program. 2 - Method of solution: No mathematical operations are performed. 3 - Restrictions on the complexity of the problem: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER-statements

  19. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    Directory of Open Access Journals (Sweden)

    Hari Radhakrishnan

    2015-01-01

    This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
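
    The paper's own binary-tree replacement for the sequential summation is not reproduced here; the following is only a hedged coarray sketch of the general idea, with invented variable names, folding partial sums pairwise across images in roughly log2(number of images) steps.

      program tree_sum_sketch
        implicit none
        real    :: partial[*]        ! each image's partial sum
        integer :: me, np, step
        me = this_image()
        np = num_images()
        partial = real(me)           ! stand-in for a locally computed partial sum
        step = 1
        do while (step < np)
           sync all                                     ! partner values from the previous level are ready
           if (mod(me-1, 2*step) == 0 .and. me+step <= np) then
              partial = partial + partial[me+step]      ! fold the partner's value in
           end if
           step = 2*step
        end do
        sync all
        if (me == 1) print *, 'global sum =', partial
      end program tree_sum_sketch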

  20. Pattern recognition in molecular dynamics. [FORTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Zurek, W H; Schieve, W C [Texas Univ., Austin (USA)]

    1977-07-01

    An algorithm for the recognition of the formation of bound molecular states in the computer simulation of a dilute gas is presented. Applications to various related problems in physics and chemistry are pointed out. Data structure and decision processes are described. Performance of the FORTRAN program based on the algorithm in cooperation with the molecular dynamics program is described and the results are presented.

  1. Extracting UML Class Diagrams from Object-Oriented Fortran: ForUML

    Directory of Open Access Journals (Sweden)

    Aziz Nanthaamornphong

    2015-01-01

    Many scientists who implement computational science and engineering software have adopted the object-oriented (OO) Fortran paradigm. One of the challenges faced by OO Fortran developers is the inability to obtain high level software design descriptions of existing applications. Knowledge of the overall software design is not only valuable in the absence of documentation, it can also serve to assist developers with accomplishing different tasks during the software development process, especially maintenance and refactoring. The software engineering community commonly uses reverse engineering techniques to deal with this challenge. A number of reverse engineering-based tools have been proposed, but few of them can be applied to OO Fortran applications. In this paper, we propose a software tool to extract unified modeling language (UML) class diagrams from Fortran code. The UML class diagram facilitates the developers' ability to examine the entities and their relationships in the software system. The extracted diagrams enhance software maintenance and evolution. The experiments carried out to evaluate the proposed tool show its accuracy and a few of the limitations.

  2. Exploiting first-class arrays in Fortran for accelerator programming

    International Nuclear Information System (INIS)

    Rasmussen, Craig E.; Weseloh, Wayne N.; Robey, Robert W.; Sottile, Matthew J.; Quinlan, Daniel; Overbey, Jeffrey

    2010-01-01

    Emerging architectures for high performance computing often are well suited to a data parallel programming model. This paper presents a simple programming methodology based on existing languages and compiler tools that allows programmers to take advantage of these systems. We will work with the array features of Fortran 90 to show how this infrequently exploited, standardized language feature is easily transformed to lower level accelerator code. Our transformations are based on a mapping from Fortran 90 to C++ code with OpenCL extensions. The sheer complexity of programming for clusters of many- or multi-core processors with tens of millions of threads of execution makes the simplicity of the data parallel model attractive. Furthermore, the increasing complexity of today's applications (especially when convolved with the increasing complexity of the hardware) and the need for portability across hardware architectures make a higher-level and simpler programming model like data parallel attractive. The goal of this work has been to exploit source-to-source transformations that allow programmers to develop and maintain programs at a high level of abstraction, without coding to a specific hardware architecture. Furthermore, these transformations allow multiple hardware architectures to be targeted without changing the high-level source. It also removes the necessity for application programmers to understand details of the accelerator architecture or to know OpenCL.

  3. Implementing O(N) N-Body Algorithms Efficiently in Data-Parallel Languages

    Directory of Open Access Journals (Sweden)

    Yu Hu

    1996-01-01

    The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Communication accounts for about 10–20% of the total execution time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.

  4. Application of Modern Fortran to Spacecraft Trajectory Design and Optimization

    Science.gov (United States)

    Williams, Jacob; Falck, Robert D.; Beekman, Izaak B.

    2018-01-01

    In this paper, applications of the modern Fortran programming language to the field of spacecraft trajectory optimization and design are examined. Modern object-oriented Fortran has many advantages for scientific programming, although many legacy Fortran aerospace codes have not been upgraded to use the newer standards (or have been rewritten in other languages perceived to be more modern). NASA's Copernicus spacecraft trajectory optimization program, originally a combination of Fortran 77 and Fortran 95, has attempted to keep up with modern standards and makes significant use of the new language features. Various algorithms and methods are presented from trajectory tools such as Copernicus, as well as modern Fortran open source libraries and other projects.

  5. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components usually involved in building the user interface are written in Java. The two types of software modules have been glued together using the Java native interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.

  6. Object-Oriented Scientific Programming with Fortran 90

    Science.gov (United States)

    Norton, C.

    1998-01-01

    Fortran 90 is a modern language that introduces many important new features beneficial for scientific programming. We discuss our experiences in plasma particle simulation and unstructured adaptive mesh refinement on supercomputers, illustrating the features of Fortran 90 that support the object-oriented methodology.
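
    Illustrative only (the plasma code itself is not shown in the abstract): one common Fortran 90 idiom for the object-oriented style described, a derived type encapsulated in a module together with the procedures that operate on it. The type and routine names are invented for this sketch.

      module particle_mod
        implicit none
        type :: particle
           real :: x, v               ! position and velocity
        end type particle
      contains
        subroutine push(p, dt)        ! advance a particle by one time step
           type(particle), intent(inout) :: p
           real, intent(in) :: dt
           p%x = p%x + p%v*dt
        end subroutine push
      end module particle_mod

      program particle_demo
        use particle_mod
        implicit none
        type(particle) :: p
        p = particle(0.0, 1.0)        ! structure constructor
        call push(p, 0.1)
        print *, p%x
      end program particle_demo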

  7. P3T+: A Performance Estimator for Distributed and Parallel Programs

    Directory of Open Access Journals (Sweden)

    T. Fahringer

    2000-01-01

    Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, which is a performance estimator for mostly regular HPF (High Performance Fortran) programs but also partially covers message passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of a communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both accuracy and usefulness of P3T+.

  8. Programação orientada a objetos em FORTRAN (Object-oriented programming in FORTRAN)

    OpenAIRE

    Beck, André Teófilo; Bazán, Felipe Alexander Vargas

    2011-01-01

    This article presents fundamental concepts of object-oriented (OO) programming in FORTRAN. In general, FORTRAN users are not familiar with these concepts, since compilers for this language did not support OO programming until the recent release of version 11.1 of the Intel Visual FORTRAN compiler. This compiler supports most of the object-orientation features of the FORTRAN 2003 standard, allowing programming practices to be brought up to date […]

  9. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every CPU instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel.

  10. Comparison of and conversion between different implementations of the FORTRAN programming language

    Science.gov (United States)

    Treinish, L.

    1980-01-01

    A guideline for computer programmers who may need to exchange FORTRAN programs between several computers is presented. The characteristics of the FORTRAN language available on three different types of computers are outlined, and procedures and other considerations for the transfer of programs from one type of FORTRAN to another are discussed. In addition, the deviations of these different FORTRANs from the FORTRAN 77 standard are discussed.

  11. Replacing Fortran Namelists with JSON

    Science.gov (United States)

    Robinson, T. E., Jr.

    2017-12-01

    Maintaining a log of input parameters for a climate model is very important to understanding potential causes for answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure that includes code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about location of syntax errors or typos. The output JSON provides a ground truth for values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
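
    A hedged illustration (not GFDL code): the kind of standard Fortran namelist input that the JSON file would replace, together with an echo of the values actually used. The group name, variable names, and file name are invented for the example; the equivalent JSON would carry the same key/value pairs under a "model_nml" object.

      program namelist_demo
        implicit none
        integer :: nsteps = 100           ! defaults, overridden by the input file
        real    :: dt = 0.5
        logical :: use_radiation = .false.
        namelist /model_nml/ nsteps, dt, use_radiation
        integer :: u

        ! input.nml might contain:  &model_nml nsteps = 200, dt = 0.25 /
        open(newunit=u, file='input.nml', status='old', action='read')
        read(u, nml=model_nml)
        close(u)

        write(*, nml=model_nml)           ! echo the values actually used, defaults included
      end program namelist_demo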

  12. READDATA: a FORTRAN 77 codeword input package

    International Nuclear Information System (INIS)

    Lander, P.A.

    1983-07-01

    A new codeword input package has been produced as a result of the incompatibility between different dialects of FORTRAN, especially when character variables are passed as parameters. This report is for those who wish to use a codeword input package with FORTRAN 77. The package, called "Readdata", attempts to combine the best features of its predecessors such as BINPUT and pseudo-BINPUT. (author)

  13. User manual for two simple postscript output FORTRAN plotting routines

    Science.gov (United States)

    Nguyen, T. X.

    1991-01-01

    Graphics is one of the important tools in engineering analysis and design. However, plotting routines that generate output on high quality laser printers normally come in graphics packages, which tend to be expensive and system dependent. These factors become important for small computer systems or desktop computers, especially when only some form of a simple plotting routine is sufficient. With the Postscript language becoming popular, there are more and more Postscript laser printers now available. Simple, versatile, low cost plotting routines that can generate output on high quality laser printers are needed, and standard FORTRAN plotting routines that produce output in the Postscript language seem a logical choice. The purpose here is to explain two simple FORTRAN plotting routines that generate output in Postscript language.
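
    Illustrative only, and not one of the two routines documented in the manual: a minimal sketch of the underlying idea, a Fortran program that writes a tiny PostScript file drawing a single line segment.

      program ps_plot_sketch
        implicit none
        integer :: u
        open(newunit=u, file='plot.ps', status='replace', action='write')
        write(u,'(a)') '%!PS-Adobe-3.0'                  ! minimal PostScript header
        write(u,'(2f8.1,a,2f8.1,a)') 72.0, 72.0, ' moveto ', 300.0, 400.0, ' lineto'
        write(u,'(a)') 'stroke'                          ! draw the path
        write(u,'(a)') 'showpage'
        close(u)
      end program ps_plot_sketch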

  14. Development of the static analyzer ANALYSIS/EX for FORTRAN programs

    International Nuclear Information System (INIS)

    Osanai, Seiji; Yokokawa, Mitsuo

    1993-08-01

    The static analyzer 'ANALYSIS' is a software tool for statically analyzing the tree structure and COMMON regions of a FORTRAN program. With the installation of the new FORTRAN compiler, FORTRAN77EX(V12), on the computer system at JAERI, a new version of ANALYSIS, 'ANALYSIS/EX', has been developed to enhance its analysis functions. In addition to the conventional functions of ANALYSIS, ANALYSIS/EX is capable of analyzing FORTRAN programs written in the FORTRAN77EX(V12) language grammar, such as large-scale nuclear codes. The analysis of COMMON regions is also improved so as to obtain the relations between variables in COMMON regions in more detail. In this report, the improvements and enhanced functions of the static analyzer ANALYSIS/EX are presented. (author)

  15. FASTPLOT, Interface Routines to MS FORTRAN Graphics Library

    International Nuclear Information System (INIS)

    1999-01-01

    1 - Description of program or function: FASTPLOT is a library of routines that can be used to interface with the Microsoft FORTRAN Graphics library (GRAPHICS.LIB). The FASTPLOT routines simplify the development of graphics applications and add capabilities such as histograms, Splines, symbols, and error bars. FASTPLOT also includes routines that can be used to create menus. 2 - Methods: FASTPLOT is a library of routines which must be linked with a user's FORTRAN programs that call any FASTPLOT routines. In addition, the user must link with the Microsoft FORTRAN Graphics library (GRAPHICS.LIB). 3 - Restrictions on the complexity of the problem: None noted

  16. Language interoperability for high-performance parallel scientific components

    International Nuclear Information System (INIS)

    Elliot, N; Kohn, S; Smolinski, B

    1999-01-01

    With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been used widely in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance parallel, component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.

  17. Bridging FORTRAN to object oriented paradigm for HEP data modeling task

    International Nuclear Information System (INIS)

    Huang, J.

    1993-12-01

    Object oriented (OO) technology appears to offer tangible benefits to the high energy physics (HEP) community. Yet many physicists view this newest software development approach with much reservation and reluctance. Facing the reality of having to support the existing physics applications, which are written in FORTRAN, the software engineers in the Computer Engineering Group of the Physics Research Division at the Superconducting Super Collider Laboratory have accepted the challenge of mixing an old language with the new technology. This paper describes the experience and the techniques devised to fit FORTRAN into the OO paradigm (OOP).

  18. C Versus Fortran-77 for Scientific Programming

    Directory of Open Access Journals (Sweden)

    Tom MacDonald

    1992-01-01

    Full Text Available The predominant programming language for numeric and scientific applications is Fortran-77 and supercomputers are primarily used to run large-scale numeric and scientific applications. Standard C is not widely used for numerical and scientific programming, yet Standard C provides many desirable linguistic features not present in Fortran-77. Furthermore, the existence of a standard library and preprocessor eliminates the worst portability problems. A comparison of Standard C and Fortran-77 shows several key deficiencies in C that reduce its ability to adequately solve some numerical problems. Some of these problems have already been addressed by the C standard but others remain. Standard C with a few extensions and modifications could be suitable for all numerical applications and could become more popular in supercomputing environments.

  19. Classical Fortran programming for engineering and scientific applications

    CERN Document Server

    Kupferschmid, Michael

    2009-01-01

    Introduction; Why Study Programming?; The Evolution of FORTRAN; Why Study FORTRAN?; Classical FORTRAN; About This Book; Advice to Instructors; About the Author; Acknowledgments; Disclaimers; Hello, World!; Case Study: A First FORTRAN Program; Compiling the Program; Running a Program in UNIX; Omissions; Expressions and Assignment Statements; Constants; Variables and Variable Names; Arithmetic Operators; Function References; Expressions; A

  20. IRRIGOGRAPHY AFTER PREPARATIONS OF PATIENTS WITH FORTRANS

    Directory of Open Access Journals (Sweden)

    Irena Jankovic

    2006-01-01

    Full Text Available Fortrans® is a laxative in the form of a powder which is used for making a solution for oral application. Laxative effects are achieved through a long linear polymer (polyethylene glycol, PEG 4000) which binds water molecules, thus increasing the volume of fluid in the intestinal tract. The study material comprises 150 irrigographies made at the Institute of Radiology of the Clinical Center in Nis in the period from January 2004 to June 2005. The preparation in these cases was done with Fortrans®. The contrast medium used was barium sulfate. The results of the study are presented in illustrations and irrigograms. In conclusion, we can say that Fortrans® provides reliable, effective and simple preparation of patients for irrigography as well as for fast, comfortable and efficient endographic examination (irrigography). The obtained irrigograms are of satisfactory quality, showing sharp contrasts.

  1. MORTRAN-2, FORTRAN Language Extension with User-Supplied Macros

    International Nuclear Information System (INIS)

    Cook, A. James; Shustek, L.J.

    1980-01-01

    1 - Description of problem or function: MORTRAN2 is a FORTRAN language extension that permits a relatively easy transition from FORTRAN to a more convenient and structured language. Its features include free-field format; alphanumeric statement labels; flexible comment convention; nested block structure; for-by-to, do, while, until, loop, if-then-else, if-else, exit, and next statements; multiple assignment statements; conditional compilation; and automatic listing indentation. The language is implemented by a macro-based pre-processor and is further extensible by user-defined macros. 2 - Method of solution: The MORTRAN2 pre-processor may be regarded as a compiler whose object code is ANSI Standard FORTRAN. The MORTRAN2 language is dynamically defined by macros which are input at each use of the pre-processor. 3 - Restrictions on the complexity of the problem: The pre-processor output must be accepted by a FORTRAN compiler

  2. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Numrich

    2008-04-22

    The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to
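    To give a flavor of the co-array model described above, the sketch below computes a global sum using one co-array variable per image; it is an assumed minimal example in Fortran 2008 syntax, not code taken from CafLib.

```fortran
! Minimal co-array sketch (Fortran 2008): each image contributes one value
! and image 1 accumulates the global sum.  Not taken from CafLib.
! Compile with co-array support enabled (e.g. -fcoarray=single with gfortran).
program coarray_sum
  implicit none
  real    :: partial[*]            ! one copy of "partial" per image
  real    :: total
  integer :: img

  partial = real(this_image())     ! each image writes its own contribution
  sync all                         ! ensure all images have written partial

  if (this_image() == 1) then
     total = 0.0
     do img = 1, num_images()
        total = total + partial[img]   ! remote read from image "img"
     end do
     print *, 'global sum =', total
  end if
end program coarray_sum
```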

  3. Comprehensive Performance Evaluation for Hydrological and Nutrients Simulation Using the Hydrological Simulation Program–Fortran in a Mesoscale Monsoon Watershed, China

    OpenAIRE

    Zhaofu Li; Chuan Luo; Kaixia Jiang; Rongrong Wan; Hengpeng Li

    2017-01-01

    The Hydrological Simulation Program–Fortran (HSPF) is a hydrological and water quality computer model that was developed by the United States Environmental Protection Agency. Comprehensive performance evaluations were carried out for hydrological and nutrient simulation using the HSPF model in the Xitiaoxi watershed in China. Streamflow simulation was calibrated from 1 January 2002 to 31 December 2007 and then validated from 1 January 2008 to 31 December 2010 using daily observed data, and nu...

  4. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Directory of Open Access Journals (Sweden)

    Hamed Kargaran

    2016-04-01

    Full Text Available The implementation of Monte Carlo simulation on the CUDA Fortran requires a fast random number generation with good statistical properties on GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed to use in high performance computing systems. According to the type of GPU memory usage, GPU scheme is divided into two work modes including GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, the combination of middle-square method and chaotic map along with the Xorshift PRNG have been employed. Implementation of our developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and Miller-Park algorithm through employing the specific standard tests. The results of this comparison showed that the developed GPPRNG in this study can be used as a fast and accurate tool for computational science applications.
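    The abstract mentions the Xorshift family of generators. As a hedged, serial illustration only (the CUDA Fortran kernels and the middle-square/chaotic-map seeding from the paper are not reproduced), a plain 64-bit Xorshift step in standard Fortran looks roughly like this:

```fortran
! Hedged sketch of a serial 64-bit Xorshift generator in standard Fortran,
! shown only to illustrate the algorithm family; not the paper's GPU code.
module xorshift_mod
  use iso_fortran_env, only: int64, real64
  implicit none
  integer(int64) :: state = 88172645463325252_int64   ! any nonzero seed
contains
  function xorshift64() result(x)
    integer(int64) :: x
    state = ieor(state, ishft(state, 13))
    state = ieor(state, ishft(state, -7))   ! negative shift = logical right shift
    state = ieor(state, ishft(state, 17))
    x = state
  end function xorshift64

  function uniform01() result(u)
    real(real64) :: u
    ! map the signed 64-bit integer onto [0,1)
    u = real(xorshift64(), real64) * 2.0_real64**(-64) + 0.5_real64
  end function uniform01
end module xorshift_mod
```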

  5. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Energy Technology Data Exchange (ETDEWEB)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad [Department of nuclear engineering, Shahid Behesti University, Tehran, 1983969411 (Iran, Islamic Republic of)

    2016-04-15

    The implementation of Monte Carlo simulation on the CUDA Fortran requires a fast random number generation with good statistical properties on GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed to use in high performance computing systems. According to the type of GPU memory usage, GPU scheme is divided into two work modes including GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, the combination of middle-square method and chaotic map along with the Xorshift PRNG have been employed. Implementation of our developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and Miller-Park algorithm through employing the specific standard tests. The results of this comparison showed that the developed GPPRNG in this study can be used as a fast and accurate tool for computational science applications.

  6. GRESS, FORTRAN Pre-compiler with Differentiation Enhancement

    International Nuclear Information System (INIS)

    1999-01-01

    1 - Description of program or function: The GRESS FORTRAN pre-compiler (SYMG) and run-time library are used to enhance conventional FORTRAN-77 programs with analytic differentiation of arithmetic statements for automatic differentiation in either forward or reverse mode. GRESS 3.0 is functionally equivalent to GRESS 2.1. GRESS 2.1 is an improved and updated version of the previously released GRESS 1.1. Improvements in the implementation of the CHAIN option have resulted in a 70 to 85% reduction in execution time and up to a 50% reduction in memory required for forward chaining applications. 2 - Method of solution: GRESS uses a pre-compiler to analyze FORTRAN statements and determine the mathematical operations embodied in them. As each arithmetic assignment statement in a program is analyzed, SYMG generates the partial derivatives of the term on the left with respect to each floating-point variable on the right. The result of the pre-compilation step is a new FORTRAN program that can produce derivatives for any REAL (i.e., single or double precision) variable calculated by the model. Consequently, GRESS enhances FORTRAN programs or subprograms by adding the calculation of derivatives along with the original output. Derivatives from a GRESS enhanced model can be used internally (e.g., iteration acceleration) or externally (e.g., sensitivity studies). By calling GRESS run-time routines, derivatives can be propagated through the code via the chain rule (referred to as the CHAIN option) or accumulated to create an adjoint matrix (referred to as the ADGEN option). A third option, GENSUB, makes it possible to process a subset of a program (i.e., a do loop, subroutine, function, a sequence of subroutines, or a whole program) for calculating derivatives of dependent variables with respect to independent variables. A code enhanced with the GENSUB option can use forward mode, reverse mode, or a hybrid of the two modes. 3 - Restrictions on the complexity of the problem: GRESS
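    The statement-level augmentation described under "Method of solution" can be pictured with a small hand-written fragment; GRESS generates such derivative statements automatically, and the example below is only an illustration, not GRESS output.

```fortran
! Illustration only: the kind of statement-level augmentation a forward-mode
! pre-compiler performs.  Written by hand here; not generated by GRESS.
program forward_mode_demo
  implicit none
  real :: x, y, dy_dx

  x = 2.0
  ! original model statement:   y = 3.0*x**2 + sin(x)
  ! augmented with its derivative with respect to x:
  y     = 3.0*x**2 + sin(x)
  dy_dx = 6.0*x    + cos(x)
  print *, 'y =', y, '  dy/dx =', dy_dx
end program forward_mode_demo
```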

  7. SVM Support in the Vienna Fortran Compilation System

    OpenAIRE

    Brezany, Peter; Gerndt, Michael; Sipkova, Viera

    1994-01-01

    Vienna Fortran, a machine-independent language extension to Fortran which allows the user to write programs for distributed-memory systems using global addresses, provides the forall-loop construct for specifying irregular computations that do not cause inter-iteration dependences. Compilers for distributed-memory systems generate code that is based on runtime analysis techniques and is only efficient if, in addition, aggressive compile-time optimizations are applied. Since these optimization...

  8. Basic linear algebra subprograms for FORTRAN usage

    Science.gov (United States)

    Lawson, C. L.; Hanson, R. J.; Kincaid, D. R.; Krogh, F. T.

    1977-01-01

    A package of 38 low level subprograms for many of the basic operations of numerical linear algebra is presented. The package is intended to be used with FORTRAN. The operations in the package are dot products, elementary vector operations, Givens transformations, vector copy and swap, vector norms, vector scaling, and the indices of components of largest magnitude. The subprograms and a test driver are available in portable FORTRAN. Versions of the subprograms are also provided in assembly language for the IBM 360/67, the CDC 6600 and CDC 7600, and the Univac 1108.
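    A hedged example of how two of these Level-1 subprograms (DAXPY and DDOT) are typically called from a Fortran program linked against a BLAS library:

```fortran
! Small example of calling the BLAS Level-1 routines DAXPY and DDOT.
! Link against any BLAS implementation (e.g. the reference BLAS).
program blas_demo
  implicit none
  integer, parameter :: n = 5
  double precision :: x(n), y(n), dot
  double precision, external :: ddot

  x = [1d0, 2d0, 3d0, 4d0, 5d0]
  y = 1d0

  call daxpy(n, 2d0, x, 1, y, 1)      ! y <- 2*x + y
  dot = ddot(n, x, 1, y, 1)           ! dot <- x . y

  print *, 'y   =', y
  print *, 'x.y =', dot
end program blas_demo
```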

  9. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

    Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of High Performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
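    For orientation, the sketch below shows the kind of HPF data-distribution directives such compilers consume; it is an assumed, typical usage example rather than code from the article. Because the directives are comments, the program also compiles as ordinary Fortran.

```fortran
! Assumed typical HPF-style usage: distribute an array block-wise across
! processors and align a second array with it; not code from the article.
program hpf_demo
  implicit none
  integer, parameter :: n = 1024
  real :: a(n), b(n)
!HPF$ DISTRIBUTE a(BLOCK)
!HPF$ ALIGN b(:) WITH a(:)
  integer :: i

  b = 1.0
  forall (i = 1:n) a(i) = 2.0 * b(i)   ! data-parallel update over the distribution
  print *, sum(a)
end program hpf_demo
```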

  10. ARBUS: A FORTRAN tool for generating tree structure diagrams

    International Nuclear Information System (INIS)

    Ferrero, C.; Zanger, M.

    1992-02-01

    The FORTRAN77 stand-alone code ARBUS has been designed to aid the user by providing a tree structure diagram generating utility for computer programs written in FORTRAN language. This report is intended to describe the main purpose and features of ARBUS and to highlight some additional applications of the code by means of practical test cases. (orig.) [de

  11. Cloudy's Journey from FORTRAN to C, Why and How

    Science.gov (United States)

    Ferland, G. J.

    Cloudy is a large-scale plasma simulation code that is widely used across the astronomical community as an aid in the interpretation of spectroscopic data. The cover of the ADAS VI book featured predictions of the code. The FORTRAN 77 source code has always been freely available on the Internet, contributing to its widespread use. The coming of PCs and Linux has fundamentally changed the computing environment. Modern Fortran compilers (F90 and F95) are not freely available. A common-use code must be written in either FORTRAN 77 or C to be Open Source/GNU/Linux friendly. F77 has serious drawbacks - modern language constructs cannot be used, students do not have skills in this language, and it does not contribute to their future employability. It became clear that the code would have to be ported to C to have a viable future. I describe the approach I used to convert Cloudy from FORTRAN 77 with MILSPEC extensions to ANSI/ISO 89 C. Cloudy is now openly available as a C code, and will evolve to C++ as gcc and standard C++ mature. Cloudy looks to a bright future with a modern language.

  12. JLAPACK – Compiling LAPACK FORTRAN to Java

    Directory of Open Access Journals (Sweden)

    David M. Doolin

    1999-01-01

    Full Text Available The JLAPACK project provides the LAPACK numerical subroutines translated from their subset Fortran 77 source into class files, executable by the Java Virtual Machine (JVM) and suitable for use by Java programmers. This makes it possible for Java applications or applets, distributed on the World Wide Web (WWW), to use established legacy numerical code that was originally written in Fortran. The translation is accomplished using a special purpose Fortran‐to‐Java (source‐to‐source) compiler. The LAPACK API will be considerably simplified to take advantage of Java’s object‐oriented design. This report describes the research issues involved in the JLAPACK project, and its current implementation and status.

  13. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    Science.gov (United States)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize the propulsion systems performance from a fleet management and maintenance perspective by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase these codes' reusability. Aviation Safety's use of modeling and simulation for characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  14. Fortran code for SU(3) lattice gauge theory with and without MPI checkerboard parallelization

    Science.gov (United States)

    Berg, Bernd A.; Wu, Hao

    2012-10-01

    We document plain Fortran and Fortran MPI checkerboard code for Markov chain Monte Carlo simulations of pure SU(3) lattice gauge theory with the Wilson action in D dimensions. The Fortran code uses periodic boundary conditions and is suitable for pedagogical purposes and small scale simulations. For the Fortran MPI code two geometries are covered: the usual torus with periodic boundary conditions and the double-layered torus as defined in the paper. Parallel computing is performed on checkerboards of sublattices, which partition the full lattice in one, two, and so on, up to D directions (depending on the parameters set). For updating, the Cabibbo-Marinari heatbath algorithm is used. We present validations and test runs of the code. Performance is reported for a number of currently used Fortran compilers and, when applicable, MPI versions. For the parallelized code, performance is studied as a function of the number of processors. Program summary Program title: STMC2LSU3MPI Catalogue identifier: AEMJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26666 No. of bytes in distributed program, including test data, etc.: 233126 Distribution format: tar.gz Programming language: Fortran 77 compatible with the use of Fortran 90/95 compilers, in part with MPI extensions. Computer: Any capable of compiling and executing Fortran 77 or Fortran 90/95, when needed with MPI extensions. Operating system: Red Hat Enterprise Linux Server 6.1 with OpenMPI + pgf77 11.8-0, Centos 5.3 with OpenMPI + gfortran 4.1.2, Cray XT4 with MPICH2 + pgf90 11.2-0. Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions. Number of processors used: 2 to 11664 RAM: 200 Mega bytes per process. Classification: 11

  15. Comparison of PASCAL and FORTRAN for solving problems in the physical sciences

    Science.gov (United States)

    Watson, V. R.

    1981-01-01

    The paper compares PASCAL and FORTRAN for problem solving in the physical sciences, due to requests NASA has received to make PASCAL available on the Numerical Aerodynamic Simulator (scheduled to be operational in 1986). PASCAL disadvantages include the lack of scientific utility procedures equivalent to the IBM scientific subroutine package or the IMSL package which are available in FORTRAN. Advantages include well-organized code that is easy to read and maintain, range checking to prevent errors, and a broad selection of data types. It is concluded that FORTRAN may be the better language, although ADA (patterned after PASCAL) may surpass FORTRAN due to its ability to add complex and vector math and to specify the precision and range of variables.

  16. Fortran Testing and Refactoring Infrastructure, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Tech-X proposes to develop a comprehensive Fortran testing and refactoring infrastructure that allows developers and scientists to leverage the benefits of...

  17. Emulating Multiple Inheritance in Fortran 2003/2008

    Directory of Open Access Journals (Sweden)

    Karla Morris

    2015-01-01

    in Fortran 2003. The design unleashes the power of the associated class relationships for modeling complicated data structures yet avoids the ambiguities that plague some multiple inheritance scenarios.
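    One common emulation pattern (assumed here for illustration; the article's specific design is not reproduced) is to inherit from one parent type and delegate to a component standing in for the second parent:

```fortran
! Assumed illustration of emulating multiple inheritance in Fortran 2003:
! single inheritance from one parent plus delegation to a component that
! plays the role of the second parent.  Not the article's design.
module mi_demo_mod
  implicit none

  type :: named                     ! first "parent"
     character(len=32) :: name = 'unnamed'
  end type named

  type :: serializable              ! second "parent", held by composition
   contains
     procedure :: dump
  end type serializable

  type, extends(named) :: particle
     type(serializable) :: io       ! delegate component
     real :: mass = 1.0
   contains
     procedure :: dump => particle_dump   ! forward the call to the component
  end type particle

contains

  subroutine dump(self)
    class(serializable), intent(in) :: self
    print *, 'serializable::dump'
  end subroutine dump

  subroutine particle_dump(self)
    class(particle), intent(in) :: self
    call self%io%dump()             ! delegation to the "second parent"
    print *, 'particle ', trim(self%name), ' mass=', self%mass
  end subroutine particle_dump

end module mi_demo_mod
```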

  18. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX; Traspaso de ficheros FORTRAN de datos de VAX/VMS a ALPHA/UNIX

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E.; Milligen, B. Ph van [CIEMAT (Spain)

    1997-09-01

    Several tools have been developed to access the TJ-IU databases, which currently reside in VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely, SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC and those FORTRAN unformatted files defined herein, from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author)

  19. Fortran Testing and Refactoring Infrastructure, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Tech-X proposes to develop a comprehensive Fortran testing and refactoring infrastructure that allows developers and scientists to leverage the benefits of a...

  20. How to Interface Fortran with Matlab

    OpenAIRE

    Sagastizábal , Claudia; Vige , Guillaume

    1995-01-01

    Projet PROMATH; We describe the general procedure for interfacing Fortran routines with Matlab. We explain how to write a mex-file and the associated gateway function. In particular, each different type of argument is considered in detail. We finish with an illustrative example

  1. Introduction to modern Fortran for the Earth system sciences

    CERN Document Server

    Chirila, Dragos B

    2014-01-01

    This work provides a short "getting started" guide to Fortran 90/95. The main target audience consists of newcomers to the field of numerical computation within Earth system sciences (students, researchers or scientific programmers). Furthermore, readers accustomed to other programming languages may also benefit from this work, by discovering how some programming techniques they are familiar with map to Fortran 95. The main goal is to enable readers to quickly start using Fortran 95 for writing useful programs. It also introduces a gradual discussion of Input/Output facilities relevant for Earth system sciences, from the simplest ones to the more advanced netCDF library (which has become a de facto standard for handling the massive datasets used within Earth system sciences). While related works already treat these disciplines separately (each often providing much more information than needed by the beginning practitioner), the reader finds in this book a shorter guide which links them. Compared to other book...
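    As a flavor of the netCDF facilities the book introduces, the sketch below reads one variable with the Fortran 90 netCDF interface; the file name, variable name and array shape are made up for illustration.

```fortran
! Minimal sketch of reading a variable with the netCDF Fortran 90 interface;
! 'ocean.nc', 'sst' and the array shape are invented for illustration.
program read_netcdf_demo
  use netcdf
  implicit none
  integer :: ncid, varid, status
  real, allocatable :: sst(:,:)

  allocate(sst(360, 180))

  status = nf90_open('ocean.nc', NF90_NOWRITE, ncid)    ! open for reading
  if (status /= NF90_NOERR) then
     print *, trim(nf90_strerror(status))
     stop 1
  end if

  status = nf90_inq_varid(ncid, 'sst', varid)           ! locate the variable
  status = nf90_get_var(ncid, varid, sst)               ! read it into the array
  status = nf90_close(ncid)

  print *, 'mean SST =', sum(sst) / size(sst)
end program read_netcdf_demo
```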

  2. Automatic generation of Fortran programs for algebraic simulation models

    International Nuclear Information System (INIS)

    Schopf, W.; Rexer, G.; Ruehle, R.

    1978-04-01

    This report documents a generator program by which econometric simulation models formulated in an application-oriented language can be transformed automatically into a Fortran program. Thus the model designer is able to build up, test and modify models without the need for a Fortran programmer. The development of a computer model is therefore simplified and shortened appreciably; chapters 1-3 of this report present all rules for applying the generator to model design. Algebraic models including exogenous and endogenous time series variables and lead and lag functions can be generated. In addition to these language elements, Fortran sequences can be used to formulate models with complex interrelations. The generated model is automatically a module of the program system RSYST III and is therefore able to exchange input and output data with the central data bank of the system and, in connection with the method library modules, can be used to handle planning problems. (orig.) [de

  3. Reformulation RELAP5-3D in FORTRAN 95 and Results

    International Nuclear Information System (INIS)

    Mesina, George L.

    2010-01-01

    RELAP5-3D is a nuclear power plant code used worldwide for safety analysis, design, and operator training. In keeping with ongoing developments in the computing industry, we have re-architected the code in the FORTRAN 95 language, the current, fully available FORTRAN language. These changes include a complete reworking of the database and conversion of the source code to take advantage of new constructs. The improvements and impacts to the code are manifold. It is a completely machine-independent code that produces machine-independent fluid property and plot files and expands to the exact size needed to accommodate the user's input. Runtime is generally better for larger input models. Other impacts of code conversion are improved code readability, reduced maintenance and development time, increased adaptability to new computing platforms, and increased code longevity. The conversion methodology, code improvements and testing upgrades are presented in a manner that will be useful to future conversion projects for other such large codes. Comparison between the pre- and post-conversion code is made on the basis of code metrics and code performance.

  4. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction.

  5. Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP

    Science.gov (United States)

    Norton, C.; Balsara, D.

    1999-01-01

    Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.

  6. Generalized Portable SHMEM Library for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Parzyszek, Krzysztof [Iowa State Univ., Ames, IA (United States)

    2003-01-01

    This dissertation describes the efforts to design and implement the Generalized Portable SHMEM library, GPSHMEM, as well as supplementary tools. There are two major components of the GPSHMEM project: the GPSHMEM library itself and the Fortran 77 source-to-source translator. The rest of this thesis is divided into two parts. Part I introduces the shared memory model and the distributed shared memory model. It explains the motivation behind GPSHMEM and presents its functionality and performance results. Part II is entirely devoted to the Fortran 77 translator called fgpp. The need for such a tool is demonstrated, functionality goals are stated, and the design issues are presented along with the development of the solutions.

  7. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    Science.gov (United States)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of cursor; display of grey level histogram of image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  8. Application of Pfortran and Co-Array Fortran in the Parallelization of the GROMOS96 Molecular Dynamics Module

    Directory of Open Access Journals (Sweden)

    Piotr Bała

    2001-01-01

    Full Text Available After at least a decade of parallel tool development, parallelization of scientific applications remains a significant undertaking. Typically parallelization is a specialized activity supported only partially by the programming tool set, with the programmer involved with parallel issues in addition to sequential ones. The details of concern range from algorithm design down to low-level data movement details. The aim of parallel programming tools is to automate the latter without sacrificing performance and portability, allowing the programmer to focus on algorithm specification and development. We present our use of two similar parallelization tools, Pfortran and Cray's Co-Array Fortran, in the parallelization of the GROMOS96 molecular dynamics module. Our parallelization started from the GROMOS96 distribution's shared-memory implementation of the replicated algorithm, but used little of that existing parallel structure. Consequently, our parallelization was close to starting with the sequential version. We found the intuitive extensions to Pfortran and Co-Array Fortran helpful in the rapid parallelization of the project. We present performance figures for both the Pfortran and Co-Array Fortran parallelizations showing linear speedup within the range expected by these parallelization methods.

  9. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    Energy Technology Data Exchange (ETDEWEB)

    Turner, J.A.; Kothe, D.B. [Los Alamos National Lab., NM (United States); Ferrell, R.C. [Cambridge Power Computing Associates, Ltd., Brookline, MA (United States)

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners have been implemented, driven primarily by application needs. They describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility, the parallelization approach using a new portable gather/scatter library (PGSLib), current capabilities and future plans, and present preliminary performance results on a variety of platforms.

  10. DTK C/Fortran Interface Development for NEAMS FSI Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, Stuart R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lebrun-Grandie, Damien T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-19

    This report documents the development of DataTransferKit (DTK) C and Fortran interfaces for fluid-structure-interaction (FSI) simulations in NEAMS. In these simulations, the codes Nek5000 and Diablo are being coupled within the SHARP framework to study flow-induced vibration (FIV) in reactor steam generators. We will review the current Nek5000/Diablo coupling algorithm in SHARP and the current state of the solution transfer scheme used in this implementation. We will then present existing DTK algorithms which may be used instead to provide an improvement in both flexibility and scalability of the current SHARP implementation. We will show how these can be used within the current FSI scheme using a new set of interfaces to the algorithms developed by this work. These new interfaces currently expose the mesh-free solution transfer algorithms in DTK, a C++ library, and are written in C and Fortran to enable coupling of both Nek5000 and Diablo in their native Fortran language. They have been compiled and tested on Cooley, the test-bed machine for Mira at ALCF.
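    The wrappers rely on the standard iso_c_binding mechanism for calling C from Fortran; the fragment below illustrates that mechanism with a hypothetical routine name and is not part of the DTK API.

```fortran
! Illustration of the iso_c_binding interoperability mechanism such
! C/Fortran wrappers rely on.  The routine name 'transfer_field' is
! hypothetical; a matching C implementation would be linked in.
module dtk_like_iface
  use iso_c_binding, only: c_double, c_int
  implicit none
  interface
     subroutine transfer_field(n, src, dst) bind(C, name='transfer_field')
       import :: c_int, c_double
       integer(c_int), value       :: n        ! number of field values
       real(c_double), intent(in)  :: src(*)   ! source field
       real(c_double), intent(out) :: dst(*)   ! destination field
     end subroutine transfer_field
  end interface
end module dtk_like_iface
```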

  11. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the details of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest interprocessor communication. The performance tests obtained confirm the hypothesis of high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem [it

  12. Fast high-pressure freezing of protein crystals in their mother liquor

    International Nuclear Information System (INIS)

    Burkhardt, Anja; Warmer, Martin; Panneerselvam, Saravanan; Wagner, Armin; Zouni, Athina; Glöckner, Carina; Reimer, Rudolph; Hohenberg, Heinrich; Meents, Alke

    2012-01-01

    Protein crystals were vitrified using high-pressure freezing in their mother liquor at 210 MPa and 77 K without cryoprotectants or oil coating. The method was successfully applied to photosystem II, which is representative of a membrane protein with a large unit cell and weak crystal contacts. High-pressure freezing (HPF) is a method which allows sample vitrification without cryoprotectants. In the present work, protein crystals were cooled to cryogenic temperatures at a pressure of 210 MPa. In contrast to other HPF methods published to date in the field of cryocrystallography, this protocol involves rapid sample cooling using a standard HPF device. The fast cooling rates allow HPF of protein crystals directly in their mother liquor without the need for cryoprotectants or external reagents. HPF was first attempted with hen egg-white lysozyme and cubic insulin crystals, yielding good to excellent diffraction quality. Non-cryoprotected crystals of the membrane protein photosystem II have been successfully cryocooled for the first time. This indicates that the presented HPF method is well suited to the vitrification of challenging systems with large unit cells and weak crystal contacts

  13. MAPLIB, Thermodynamics Materials Property Generator for FORTRAN Program

    International Nuclear Information System (INIS)

    Schumann, U.; Zimmerer, W. and others

    1978-01-01

    1 - Nature of physical problem solved: MAPLIB is a program system which is able to incorporate the values of the properties of any material in a form suitable for use in other computer programs. The data are implemented in FORTRAN functions. A utility program is provided to assist in library management. 2 - Method of solution: MAPLIB consists of the following parts: 1) Conventions for the data format. 2) Some integrated data. 3) A data access system (FORTRAN subroutine). 4) An utility program for updating and documentation of the actual library content. The central part is a set of FORTRAN functions, e.g. WL H2O v(t,p) (heat conduction of water vapor as a function of temperature and pressure), which compute the required data and which can be called by the user program. The data content of MAPLIB has been delivered by many persons. There was no systematic evaluation of the material. It is the responsibility of every user to check the data for physical accuracy. MAPLIB only serves as a library system for manipulation and storing of such data. 3 - Restrictions on the complexity of the problem: a) See responsibility as explained above. b) Up to 1000 data functions could be implemented. c) If too many data functions are included in MAPLIB, the storage requirements become excessive for application in users programs

  14. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally-intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  15. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX

    International Nuclear Information System (INIS)

    Sanchez, E.; Milligen, B.Ph. van

    1997-01-01

    Several tools have been developed to access the TJ-I and TJ-IU databases, which currently reside in VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely, SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC and those FORTRAN unformatted files defined herein, from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author) 5 refs

  16. Numerical methods of mathematical optimization with Algol and Fortran programs

    CERN Document Server

    Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner

    1971-01-01

    Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program were developed for each one of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods. Comprised of four chapters, this volume begins with a discussion on the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition

  17. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    Science.gov (United States)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluids, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc…) by a single object so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows: Programming Language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising the efficiency; Parallel Frameworks Supported: the development of OFF has also been targeted to maximize the computational efficiency: the code is designed to run on shared-memory multi-cores workstations and distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms; Usability, Maintenance and Enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files needed): these comments are parsed by means of doxygen free software producing high quality html and latex documentation pages; the distributed versioning system referred as git

  18. DISPPAK SUBPAK, MS FORTRAN Extended Subroutine Library

    International Nuclear Information System (INIS)

    Langer, S.

    1991-01-01

    1 - Description of program or function: DISPPAK is a set of routines for use with Microsoft FORTRAN programs that allows the flexible display of information on the screen of an IBM PC in both text and graphics modes. The text mode routines allow the cursor to be placed at an arbitrary point on the screen and text to be displayed at the cursor location, making it possible to create menus and other structured displays. A routine to set the color of the characters that these routines display is also provided. A set of line drawing routines is included for use with IBM's Color Graphics Adapter or an equivalent board (such as the Enhanced Graphics Adapter in CGA emulation mode). These routines support both pixel coordinates and a user-specified set of real number coordinates. SUBPAK is a function library which allows Microsoft FORTRAN programs to calculate random numbers, issue calls to the operating system, read individual characters from the keyboard, perform Boolean and shift operations, and communicate with the I/O ports of the IBM PC. In addition, peek and poke routines, a routine that returns the address of any variable, and routines that can access the system time and date are included. 2 - Method of solution: For the DISPPAK line drawing routines, the user selects a fraction of the screen to use for plotting, chooses the coordinates that refer to the lower-left and upper-right corners, and decides whether the mapping should be linear or logarithmic. Lines are then drawn between endpoints defined in terms of the user coordinate system. Out-of-range coordinates are forced to the border of the window before the line is drawn. 3 - Restrictions on the complexity of the problem: No support is provided for filled areas or text

  19. The Transition and Adoption to Modern Programming Concepts for Scientific Computing in Fortran

    Directory of Open Access Journals (Sweden)

    Charles D. Norton

    2007-01-01

    Full Text Available This paper describes our experiences in the early exploration of modern concepts introduced in Fortran90 for large-scale scientific programming. We review our early work in expressing object-oriented concepts based on the new Fortran90 constructs – foreign to most programmers at the time – our experimental work in applying them to various applications, the impact on the WG5/J3 standards committees to consider formalizing object-oriented constructs for later versions of Fortran, and work in exploring how other modern programming techniques such as Design Patterns can and have impacted our software development. Applications will be drawn from plasma particle simulation and finite element adaptive mesh refinement for solid earth crustal deformation modeling.

  20. Numerical integration subprogrammes in Fortran II-D

    Energy Technology Data Exchange (ETDEWEB)

    Fry, C. R.

    1966-12-15

    This note briefly describes some integration subprogrammes written in FORTRAN II-D for the IBM 1620-II at CARDE. Those presented include two Newton-Cotes formulae, Chebyshev polynomial summation, Filon's and Nordsieck's methods, and optimum Runge-Kutta and predictor-corrector methods. A few miscellaneous numerical integration procedures are also mentioned, covering statistical functions, oscillating integrands and functions occurring in electrical engineering.
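    For comparison, a minimal composite Simpson's rule in modern Fortran (not one of the FORTRAN II-D routines from the note) looks like this:

```fortran
! Minimal composite Simpson's rule; illustrative only, not from the note.
! usage: area = simpson(my_func, 0.d0, 1.d0, 100)
module simpson_mod
  implicit none
contains
  function simpson(f, a, b, n) result(s)
    interface
       pure function f(x) result(y)
         real(8), intent(in) :: x
         real(8) :: y
       end function f
    end interface
    real(8), intent(in) :: a, b
    integer, intent(in) :: n          ! number of subintervals (must be even)
    real(8) :: s, h
    integer :: i
    h = (b - a) / n
    s = f(a) + f(b)
    do i = 1, n - 1
       if (mod(i, 2) == 1) then
          s = s + 4.0d0 * f(a + i*h)  ! odd interior nodes get weight 4
       else
          s = s + 2.0d0 * f(a + i*h)  ! even interior nodes get weight 2
       end if
    end do
    s = s * h / 3.0d0
  end function simpson
end module simpson_mod
```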

  1. Formula Translation in Blitz++, NumPy and Modern Fortran: A Case Study of the Language Choice Tradeoffs

    Directory of Open Access Journals (Sweden)

    Sylwester Arabas

    2014-01-01

    Full Text Available Three object-oriented implementations of a prototype solver of the advection equation are introduced. The presented programs are based on Blitz++ (C++), NumPy (Python) and Fortran's built-in array containers. The solvers constitute implementations of the Multidimensional Positive-Definite Advective Transport Algorithm (MPDATA). The introduced codes serve as examples for how the application of object-oriented programming (OOP) techniques and new language constructs from C++11 and Fortran 2008 make it possible to reproduce the mathematical notation used in the literature within the program code. A discussion on the tradeoffs of the programming language choice is presented. The main angles of comparison are code brevity and syntax clarity (and hence maintainability and auditability) as well as performance. All performance tests are carried out using free and open-source compilers. In the case of Python, a significant performance gain is observed when switching from the standard interpreter (CPython) to the PyPy implementation of Python. The entire source code of all three implementations is embedded in the text and is licensed under the terms of the GNU GPL license.
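    To illustrate how Fortran array syntax can mirror a discretised formula, the sketch below performs a much simpler donor-cell (upwind) advection step; it is not MPDATA and is not taken from the paper's code.

```fortran
! Simple donor-cell (upwind) advection step on a periodic 1D domain,
! written with whole-array expressions; not MPDATA, illustration only.
program upwind_demo
  implicit none
  integer, parameter :: nx = 100, nt = 50
  real, parameter    :: courant = 0.5     ! C = u*dt/dx, assumed positive
  real :: psi(nx)
  integer :: n

  psi = 0.0
  psi(40:60) = 1.0                        ! initial rectangular signal

  do n = 1, nt
     ! psi_i^{n+1} = psi_i^n - C*(psi_i^n - psi_{i-1}^n), periodic boundaries
     psi = psi - courant * (psi - cshift(psi, -1))
  end do

  print *, 'min/max after advection:', minval(psi), maxval(psi)
end program upwind_demo
```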

  2. Fortran code for generating random probability vectors, unitaries, and quantum states

    Directory of Open Access Journals (Sweden)

    Jonas Maziero

    2016-03-01

    Full Text Available The usefulness of generating random configurations is recognized in many areas of knowledge. Fortran was born for scientific computing and has been one of the main programming languages in this area since then. Several ongoing projects aimed at its improvement indicate that it will keep this status in the decades to come. In this article, we describe Fortran codes produced, or organized, for the generation of the following random objects: numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some matrix functions are also included and may be of independent interest.
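    One standard recipe for a random probability vector (not the paper's code) normalises exponential variates, which yields a uniform, flat-Dirichlet distribution on the simplex:

```fortran
! Standard recipe, shown for illustration only: normalised exponential
! variates give a probability vector distributed uniformly on the simplex.
program random_prob_vector
  implicit none
  integer, parameter :: d = 5
  real(8) :: u(d), p(d)

  call random_number(u)            ! u ~ uniform[0,1)
  p = -log(1.0d0 - u)              ! exponential variates (1-u avoids log(0))
  p = p / sum(p)                   ! normalise so the components sum to one

  print *, 'p      =', p
  print *, 'sum(p) =', sum(p)
end program random_prob_vector
```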

  3. FORTRAN text correction with the CDC-1604-A console typewriter during reading a punched card program

    International Nuclear Information System (INIS)

    Kotorobaj, F.; Ruzhichka, Ya.; Stolyarskij, Yu.V.

    1977-01-01

    The paper describes FORTRAN text correction with the CDC 1604-A console typewriter while a punched card program is being read. This method gives one more possibility for correcting a FORTRAN program during its input to the CDC 1604-A computer, and it substantially reduces the time needed for punched card correction compared with other methods. The possibility of inputting a desired number of punched cards one after another allows one to write small FORTRAN programs to computer core storage with simultaneous punching of the cards. The correction program has been written for the CDC 1604 COOP monitor

  4. Multidimentional and Multi-Parameter Fortran-Based Curve Fitting ...

    African Journals Online (AJOL)

    This work briefly describes the mathematics behind the algorithm, and also elaborates how to implement it using the FORTRAN 95 programming language. The advantage of this algorithm, when it is extended to surfaces and complex functions, is that it gives researchers greater confidence in the fit. It also improves the ...

  5. An Introduction to Fortran Programming: An IPI Approach.

    Science.gov (United States)

    Fisher, D. D.; And Others

    This text is designed to give individually paced instruction in Fortran Programming. The text contains fifteen units. Unit titles include: Flowcharts, Input and Output, Loops, and Debugging. Also included is an extensive set of appendices. These were designed to contain a great deal of practical information necessary to the course. These appendices…

  6. FORTRAN programs for transient eddy current calculations using a perturbation-polynomial expansion technique

    International Nuclear Information System (INIS)

    Carpenter, K.H.

    1976-11-01

    A description is given of FORTRAN programs for transient eddy current calculations in thin, non-magnetic conductors using a perturbation-polynomial expansion technique. Basic equations are presented as well as flow charts for the programs implementing them. The implementation is in two steps--a batch program to produce an intermediate data file and interactive programs to produce graphical output. FORTRAN source listings are included for all program elements, and sample inputs and outputs are given for the major programs

  7. On the tradeoffs of programming language choice for numerical modelling in geoscience. A case study comparing modern Fortran, C++/Blitz++ and Python/NumPy.

    Science.gov (United States)

    Jarecka, D.; Arabas, S.; Fijalkowski, M.; Gaynor, A.

    2012-04-01

    The language of choice for numerical modelling in geoscience has long been Fortran. A choice of a particular language and coding paradigm comes with different set of tradeoffs such as that between performance, ease of use (and ease of abuse), code clarity, maintainability and reusability, availability of open source compilers, debugging tools, adequate external libraries and parallelisation mechanisms. The availability of trained personnel and the scale and activeness of the developer community is of importance as well. We present a short comparison study aimed at identification and quantification of these tradeoffs for a particular example of an object oriented implementation of a parallel 2D-advection-equation solver in Python/NumPy, C++/Blitz++ and modern Fortran. The main angles of comparison will be complexity of implementation, performance of various compilers or interpreters and characterisation of the "added value" gained by a particular choice of the language. The choice of the numerical problem is dictated by the aim to make the comparison useful and meaningful to geoscientists. Python is chosen as a language that traditionally is associated with ease of use, elegant syntax but limited performance. C++ is chosen for its traditional association with high performance but even higher complexity and syntax obscurity. Fortran is included in the comparison for its widespread use in geoscience often attributed to its performance. We confront the validity of these traditional views. We point out how the usability of a particular language in geoscience depends on the characteristics of the language itself and the availability of pre-existing software libraries (e.g. NumPy, SciPy, PyNGL, PyNIO, MPI4Py for Python and Blitz++, Boost.Units, Boost.MPI for C++). Having in mind the limited complexity of the considered numerical problem, we present a tentative comparison of performance of the three implementations with different open source compilers including CPython and

  8. TOOLPACK1, Tools for Development and Maintenance of FORTRAN 77 Program

    International Nuclear Information System (INIS)

    Cowell, Wayne R.

    1993-01-01

    1 - Description of program or function: TOOLPACK1 consists of the following categories of software: (1) an integrated collection of tools intended to support the development and maintenance of FORTRAN 77 programs, in particular moderate-sized collections of mathematical software; (2) several user/Toolpack interfaces, one of which is selected for use at any particular installation; (3) three implementations of the tool/system interface, called TIE (Tool Interface to the Environment). The tools are written in FORTRAN 77 and are portable among TIE installations. The source contains symbolic constants as macro names and must be expanded with a suitable macro expander before being compiled and loaded. A portable macro expander is supplied in TOOLPACK1. The tools may be divided into three functional areas: general, documentation, and processing. One tool, the macro processor, can be used in any of these categories. ISTDC: a data comparison tool designed mainly for comparing files of numeric values, and files with embedded text. ISTET: expands tabs. ISTFI: finds all the include files that a file needs. ISTGP: searches multiple files for occurrences of a regular expression. ISTHP: will provide limited help information about tools. ISTMP: the macro processor may be used to pre-process a file. The processor provides macro replacement, inclusion, conditional replacement, and processing capabilities for complex file processing. ISTSP: TIE-conforming version of the SPLIT utility to split up the concatenated files used on the tape. ISTSV: save/restore utility to save and restore sub-trees of the Portable File Store (PFS). ISTTD: text comparison tool. ISTVC: simple text file version controller. ISTAL: a preprocessor that can be used to generate specific information from intermediate files created by other tools. The information that can be generated includes call-graphs, cross reference listings, segment execution frequencies, and symbol information. ISTAL can also strip

  9. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    Science.gov (United States)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler on the Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. GNU utilities of make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using the Open Multi-Processing (OpenMP) library in an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets of a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.
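
    The general pattern described above (profile with gprof, then add OpenMP work-sharing directives to the hottest loops) can be illustrated with a small sketch. It is not SWAT code; the loop body and variable names are invented stand-ins for per-HRU work.

        ! Illustrative pattern only (not actual SWAT code): an OpenMP
        ! work-sharing directive applied to a loop over hydrological response
        ! units (HRUs), as one would after finding it to be a hot spot.
        program omp_hru_sketch
          use omp_lib
          implicit none
          integer, parameter :: nhru = 100000
          real(8) :: flow(nhru), total
          integer :: i
          flow = 1.0d0
          total = 0.0d0
        !$omp parallel do reduction(+:total)
          do i = 1, nhru
             flow(i) = flow(i) * 0.9d0     ! stand-in for per-HRU routing work
             total   = total + flow(i)
          end do
        !$omp end parallel do
          print *, 'threads available:', omp_get_max_threads(), ' total =', total
        end program omp_hru_sketch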

  10. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    Science.gov (United States)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    files is given [2]. A readme.txt file, included in the root directory, explains the procedure to compile and run the programs. We tested our programs on a workstation with two 10-core Intel Xeon E5-2650 v3 CPUs. The parameters used for testing are given in sample input files, provided in the corresponding directory together with the programs. In Table 1 we present wall-clock execution times for runs on 1, 6, and 19 CPU cores for programs compiled using Intel and GNU Fortran compilers. The corresponding columns "Intel speedup" and "GNU speedup" give the ratio of wall-clock execution times of runs on 1 and 19 CPU cores, and denote the actual measured speedup for 19 CPU cores. In all cases and for all numbers of CPU cores, although the GNU Fortran compiler gives excellent results, the Intel Fortran compiler turns out to be slightly faster. Note that during these tests we always ran only a single simulation on a workstation at a time, to avoid any possible interference issues. Therefore, the obtained wall-clock times are more reliable than the ones that could be measured with two or more jobs running simultaneously. We also studied the speedup of the programs as a function of the number of CPU cores used. The performance of the Intel and GNU Fortran compilers is illustrated in Fig. 1, where we plot the speedup and actual wall-clock times as functions of the number of CPU cores for 2d and 3d programs. We see that the speedup increases monotonically with the number of CPU cores in all cases and has large values (between 10 and 14 for 3d programs) for the maximal number of cores. This fully justifies the development of OpenMP programs, which enable much faster and more efficient solving of the GP equation. However, a slow saturation in the speedup with the further increase in the number of CPU cores is observed in all cases, as expected. The speedup tends to increase for programs in higher dimensions, as they become more complex and have to process more data. This is why the

  11. Menhir: An Environment for High Performance Matlab

    Directory of Open Access Journals (Sweden)

    Stéphane Chauveau

    1999-01-01

    Full Text Available In this paper we present Menhir, a compiler for generating sequential or parallel code from the Matlab language. The compiler has been designed in the context of using Matlab as a specification language. One of the major features of Menhir is its retargetability to generate parallel and sequential C or Fortran code. We present the compilation process and the target system description for Menhir. Preliminary performance results are given and compared with MCC, the MathWorks Matlab compiler.

  12. Nonalgebraic symbol manipulators for use in scientific and engineering modeling: introducing the FORSE (FORtran Symbol Expander)

    International Nuclear Information System (INIS)

    Schultz, J.H.; Lettvin, J.D.

    1982-02-01

    A system of nonalgebraic symbol manipulators, called The FORSE (FORtran Symbol Expanders) has been developed to document and prepare input-output for Fortran programs. The use of documentation at the level of the individual equation is defended. The operation of The FORSE is described, along with user instructions and a worked example

  13. Existing Fortran interfaces to Trilinos in preparation for exascale ForTrilinos development

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Katherine J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Young, Mitchell T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Collins, Benjamin S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Prokopenko, Andrey V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Heroux, Michael A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    This report summarizes the current state of Fortran interfaces to the Trilinos library within several key applications of the Exascale Computing Program (ECP), with the aim of informing developers about strategies to develop ForTrilinos, an exascale-ready, Fortran interface software package within Trilinos. The two software projects assessed within are the DOE Office of Science's Accelerated Climate Model for Energy (ACME) atmosphere component, CAM, and the DOE Office of Nuclear Energy's core-simulator portion of VERA, a nuclear reactor simulation code. Trilinos is an object-oriented, C++ based software project, and spans a collection of algorithms and other enabling technologies such as uncertainty quantification and mesh generation. To date, Trilinos has enabled these codes to achieve large-scale simulation results, however the simulation needs of CAM and VERA-CS will approach exascale over the next five years. A Fortran interface to Trilinos that enables efficient use of programming models and more advanced algorithms is necessary. Where appropriate, the needs of the CAM and VERA-CS software to achieve their simulation goals are called out specifically. With this report, a design document and execution plan for ForTrilinos development can proceed.

  14. Implementation of Neutronics Analysis Code using the Features of Object Oriented Programming via Fortran90/95

    Energy Technology Data Exchange (ETDEWEB)

    Han, Tae Young; Cho, Beom Jin [KEPCO Nuclear Fuel, Daejeon (Korea, Republic of)

    2011-05-15

    The object-oriented programming (OOP) concept became firmly established during the 1990s and was successfully incorporated into Fortran 90/95. OOP features such as information hiding, encapsulation, modularity and inheritance lead to code that satisfies the three R's: reusability, reliability and readability. However, apart from modules, the major OOP concepts are rarely used in neutronics analysis codes, even when the code is written in Fortran 90/95. In this work, we show that the OOP concept can be employed to develop the neutronics analysis code ASTRA1D (Advanced Static and Transient Reactor Analyzer for 1-Dimension) in Fortran 90/95, and that this can be a more efficient and reasonable programming approach
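
    As a minimal illustration of the Fortran 90/95 features the abstract refers to (modules, private components, information hiding), the sketch below defines a small encapsulated derived type. It is not ASTRA1D code; the type and procedure names are invented.

        ! Minimal sketch of Fortran 90/95-style encapsulation (not ASTRA1D
        ! code): the derived type's contents are hidden behind the module
        ! interface, so clients can only use the public procedures.
        module node_mod
          implicit none
          private
          public :: node_t, node_init, node_flux
          type node_t
             private
             real :: flux = 0.0
             real :: xs_abs = 0.0
          end type node_t
        contains
          subroutine node_init(n, flux0, xs)
            type(node_t), intent(out) :: n
            real, intent(in) :: flux0, xs
            n%flux = flux0
            n%xs_abs = xs
          end subroutine node_init
          real function node_flux(n)
            type(node_t), intent(in) :: n
            node_flux = n%flux
          end function node_flux
        end module node_mod

    Client code can manipulate a node_t value only through node_init and node_flux; the components themselves are inaccessible outside the module, which is the information-hiding property the abstract emphasizes.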

  15. A FORTRAN program for a least-square fitting

    International Nuclear Information System (INIS)

    Yamazaki, Tetsuo

    1978-01-01

    A practical FORTRAN program for least-squares fitting is presented. Although the method is quite standard, the program calculates not only the best-fit values of the unknowns but also the plausible errors associated with them. As an example, a measured lateral absorbed-dose distribution in water for a narrow 25-MeV electron beam is fitted to a Gaussian distribution. (auth.)

  16. The formal specification of abstract data types and their implementation in Fortran 90: implementation issues concerning the use of pointers

    Science.gov (United States)

    Maley, D.; Kilpatrick, P. L.; Schreiner, E. W.; Scott, N. S.; Diercksen, G. H. F.

    1996-10-01

    In this paper we continue our investigation into the development of computational-science software based on the identification and formal specification of Abstract Data Types (ADTs) and their implementation in Fortran 90. In particular, we consider the consequences of using pointers when implementing a formally specified ADT in Fortran 90. Our aim is to highlight the resulting conflict between the goal of information hiding, which is central to the ADT methodology, and the space efficiency of the implementation. We show that the issue of storage recovery cannot be avoided by the ADT user, and present a range of implementations of a simple ADT to illustrate various approaches towards satisfactory storage management. Finally, we propose a set of guidelines for implementing ADTs using pointers in Fortran 90. These guidelines offer a way gracefully to provide disposal operations in Fortran 90. Such an approach is desirable since Fortran 90 does not provide automatic garbage collection which is offered by many object-oriented languages including Eiffel, Java, Smalltalk, and Simula.
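
    A minimal sketch of the kind of ADT the paper discusses follows: a derived type whose storage sits behind a pointer component, together with an explicit dispose operation that the client must call because Fortran 90/95 has no garbage collector. It is illustrative only, not the authors' specification or implementation.

        ! Minimal ADT sketch (not the authors' code): a stack of reals whose
        ! storage lives behind a pointer component, with an explicit dispose
        ! operation so the client can recover the heap storage.
        module stack_mod
          implicit none
          private
          public :: stack_t, stack_create, stack_push, stack_dispose
          type stack_t
             private
             real, pointer :: data(:) => null()
             integer :: top = 0
          end type stack_t
        contains
          subroutine stack_create(s, capacity)
            type(stack_t), intent(out) :: s
            integer, intent(in) :: capacity
            allocate(s%data(capacity))
            s%top = 0
          end subroutine stack_create
          subroutine stack_push(s, x)
            type(stack_t), intent(inout) :: s
            real, intent(in) :: x
            s%top = s%top + 1
            s%data(s%top) = x
          end subroutine stack_push
          subroutine stack_dispose(s)
            type(stack_t), intent(inout) :: s
            if (associated(s%data)) deallocate(s%data)   ! recover storage
            s%top = 0
          end subroutine stack_dispose
        end module stack_mod

    The dispose call is the point the paper's guidelines address: without it, the pointer target allocated in stack_create would simply leak.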

  17. A Fortran Program for Deep Space Sensor Analysis.

    Science.gov (United States)

    1984-12-14

    used to help maintain currency of the deep space satellite catalog? Research question: Can a Fortran program be designed to evaluate the effectiveness ... Range (AFETR) Range Measurements Laboratory (RML) is located in Malabar, Florida. Like GEODSS, Malabar uses a 48 inch telescope with a ... phased out. This mode will evaluate the effect of the loss of the 3 Baker-Nunn sites to mode 3. Mode 5 through Mode 8: Modes 5 through 8 are identical to

  18. An environment for parallel structuring of Fortran programs

    International Nuclear Information System (INIS)

    Sridharan, K.; McShea, M.; Denton, C.; Eventoff, B.; Browne, J.C.; Newton, P.; Ellis, M.; Grossbard, D.; Wise, T.; Clemmer, D.

    1990-01-01

    The paper describes and illustrates an environment for interactive support of the detection and implementation of macro-level parallelism in Fortran programs. The approach couples algorithms for dependence analysis with both innovative techniques for complexity management and capabilities for the measurement and analysis of the parallel computation structures generated through use of the environment. The resulting environment is complementary to the more common approach of seeking local parallelism by loop unrolling, either by an automatic compiler or manually. (orig.)

  19. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and specifically present a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach makes it possible to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.

  20. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    Science.gov (United States)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.

  1. A brief description and comparison of programming languages FORTRAN, ALGOL, COBOL, PL/1, and LISP 1.5 from a critical standpoint

    Science.gov (United States)

    Mathur, F. P.

    1972-01-01

    Several common higher level program languages are described. FORTRAN, ALGOL, COBOL, PL/1, and LISP 1.5 are summarized and compared. FORTRAN is the most widely used scientific programming language. ALGOL is a more powerful language for scientific programming. COBOL is used for most commercial programming applications. LISP 1.5 is primarily a list-processing language. PL/1 attempts to combine the desirable features of FORTRAN, ALGOL, and COBOL into a single language.

  2. Pseudo-BINPUT, a free format input package for Fortran programmes

    International Nuclear Information System (INIS)

    Gubbins, M.E.

    1977-11-01

    Pseudo-BINPUT is an input package for reading free format data under codeword control in a FORTRAN programme. To a large degree it mimics in function the Winfrith Subroutine Library routine BINPUT. By using calls to the data input package DECIN to mimic the input routine BINPUT, Pseudo-BINPUT combines some of the advantages of both systems. (U.K.)

  3. Installation of a new Fortran compiler and effective programming method on the vector supercomputer

    International Nuclear Information System (INIS)

    Nemoto, Toshiyuki; Suzuki, Koichiro; Watanabe, Kenji; Machida, Masahiko; Osanai, Seiji; Isobe, Nobuo; Harada, Hiroo; Yokokawa, Mitsuo

    1992-07-01

    The Fortran compiler, version 10, has been replaced with the new version 12 (V12) on the Fujitsu computer system at JAERI since May 1992. A benchmark test of the V12 compiler's performance was carried out with 16 representative nuclear codes in advance of the installation of the compiler. The compiler improves performance by a factor of 1.13 on average. The effect of the enhanced functions of the compiler and its compatibility with the nuclear codes are also examined. An assistant tool for vectorization, TOP10EX, has been developed. In this report, the results of the evaluation of the V12 compiler and the usage of the tools for vectorization are presented. (author)

  4. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor, has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs

  5. Mass: Fortran program for calculating mass-absorption coefficients

    International Nuclear Information System (INIS)

    Nielsen, Aa.; Svane Petersen, T.

    1980-01-01

    Determinations of mass-absorption coefficients in the x-ray analysis of trace elements are an important and time-consuming part of the arithmetic calculation. In the course of time different methods have been used. The program MASS calculates the mass-absorption coefficients from a given major element analysis at the x-ray wavelengths normally used in trace element determinations and lists the chemical analysis and the mass-absorption coefficients. The program is coded in FORTRAN IV, and is operational on the IBM 370/165 computer, on the UNIVAC 1110 and on PDP 11/05. (author)
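
    The underlying arithmetic is the standard mixture rule, in which the sample coefficient is the weight-fraction-weighted sum of the elemental coefficients at the analyte wavelength. A small sketch follows; it is not the MASS source, and the numbers are placeholders.

        ! Illustrative only (not the MASS program): the usual mixture rule
        ! (mu/rho)_sample = sum_i w_i * (mu/rho)_i, where w_i are weight
        ! fractions of the major elements.  All values below are made up.
        program mixture_rule
          implicit none
          integer, parameter :: nel = 3
          real :: w(nel)      = [0.467, 0.277, 0.256]   ! assumed weight fractions
          real :: mu_rho(nel) = [35.0, 60.0, 92.0]      ! assumed elemental mu/rho, cm**2/g
          print *, 'sample mass-absorption coefficient =', sum(w * mu_rho), ' cm**2/g'
        end program mixture_rule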

  6. VIDEO-PC, SVGA Routines for FORTRAN on PC

    International Nuclear Information System (INIS)

    1994-01-01

    1 - Description of program or function: These primitives are the lowest level routines needed to perform super VGA graphics on a PC. A sample main program is included that exercises the primitives. Both Lahey and Microsoft FORTRAN's have graphics libraries. However, the libraries do not support 256 color graphics at resolutions greater than 320x200. The primitives bypass these libraries while still conforming to standard usage of BIOS. The supported graphics modes depend upon the PC graphics card and its memory. Super VGA resolutions of 640x480 and 800x600 have been tested on an ATI VGA Wonder card with 512 K memory and on several 80486 PC's (unknown manufacturers) at retail stores. 2 - Method of solution: Both Lahey and Microsoft primitives depend upon sending the correct parameters to the PC BIOS (basic input output system) as discussed in the references. 3 - Restrictions on the complexity of the problem: The primitives were developed for 256 color VGA applications. It is known, for instance, that some CGA and 16 color VGA video modes will not operate correctly using these primitives. Potential users should try the test program on their PC/video card combination to determine applicability

  7. High-Level Management of Communication Schedules in HPF-like Languages

    National Research Council Canada - National Science Library

    Benkner, Siegfried

    1997-01-01

    ..., providing the users with a high-level language interface for programming scalable parallel architectures and delegating to the compiler the task of producing an explicitly parallel message-passing program...

  8. Operations analysis (study 2.1). Program SEPSIM (solar electric propulsion stage simulation). [in FORTRAN: space tug

    Science.gov (United States)

    Lang, T. J.

    1974-01-01

    Program SEPSIM is a FORTRAN program which performs deployment, servicing, and retrieval missions to synchronous equatorial orbit using a space tug with a continuous low thrust upper stage known as a solar electric propulsion stage (SEPS). The SEPS ferries payloads back and forth between an intermediate orbit and synchronous orbit, and performs the necessary servicing maneuvers in synchronous orbit. The tug carries payloads between the orbiter and the intermediate orbit, deploys fully fueled SEPS vehicles, and retrieves exhausted SEPS vehicles when, and if, required. The program is presently contained in subroutine form in the Logistical On-orbit VEhicle Servicing (LOVES) Program, but can also be run independently with the addition of a simple driver program.

  9. Optimization of Grillages Using Genetic Algorithms for Integrating Matlab and Fortran Environments

    Directory of Open Access Journals (Sweden)

    Darius Mačiūnas

    2013-02-01

    Full Text Available The purpose of the paper is to present technology applied for the global optimization of grillage-type pile foundations (hereafter grillages). The goal of optimization is to obtain the optimal layout of pile placement in the grillages. The problem can be categorized as a topology optimization problem. The objective function is the maximum reactive force emerging in a pile. This reactive force is minimized during the optimization procedure, in which the design variables describe the positions of piles beneath the connecting beams. Reactive forces in all piles are computed utilizing an original algorithm implemented in the Fortran programming language. The algorithm is integrated into the MatLab environment, where the optimization procedure is executed utilizing a genetic algorithm. The article also describes technology enabling the integration of the MatLab and Fortran environments. The authors seek to evaluate the quality of a solution to the problem by analyzing experimental results obtained applying the proposed technology.

  10. Optimization of Grillages Using Genetic Algorithms for Integrating Matlab and Fortran Environments

    Directory of Open Access Journals (Sweden)

    Darius Mačiūnas

    2012-12-01

    Full Text Available The purpose of the paper is to present technology applied for the global optimization of grillage-type pile foundations (hereafter grillages). The goal of optimization is to obtain the optimal layout of pile placement in the grillages. The problem can be categorized as a topology optimization problem. The objective function is the maximum reactive force emerging in a pile. This reactive force is minimized during the optimization procedure, in which the design variables describe the positions of piles beneath the connecting beams. Reactive forces in all piles are computed utilizing an original algorithm implemented in the Fortran programming language. The algorithm is integrated into the MatLab environment, where the optimization procedure is executed utilizing a genetic algorithm. The article also describes technology enabling the integration of the MatLab and Fortran environments. The authors seek to evaluate the quality of a solution to the problem by analyzing experimental results obtained applying the proposed technology.

  11. FORTRAN subroutine for computing the optimal estimate of f(x)

    International Nuclear Information System (INIS)

    Gaffney, P.W.

    1980-10-01

    A FORTRAN subroutine called RANGE is presented that is designed to compute the optimal estimate of a function f given values of the function at n distinct points x_1 < x_2 < ... < x_n and given a bound on one of the derivatives of f. We denote this estimate by Ω. It is optimal in the sense that the error |f - Ω| has the smallest possible error bound

  12. Peregrine Software Toolchains | High-Performance Computing | NREL

    Science.gov (United States)

    The Portland Group (PGI) C/C++ and Fortran compilers (partially supported). The PGI Accelerator compilers include NVIDIA GPU support via the directive-based OpenACC 2.5 programming model, as well as full support for NVIDIA CUDA C

  13. Comprehensive Performance Evaluation for Hydrological and Nutrients Simulation Using the Hydrological Simulation Program-Fortran in a Mesoscale Monsoon Watershed, China.

    Science.gov (United States)

    Li, Zhaofu; Luo, Chuan; Jiang, Kaixia; Wan, Rongrong; Li, Hengpeng

    2017-12-19

    The Hydrological Simulation Program-Fortran (HSPF) is a hydrological and water quality computer model that was developed by the United States Environmental Protection Agency. Comprehensive performance evaluations were carried out for hydrological and nutrient simulation using the HSPF model in the Xitiaoxi watershed in China. Streamflow simulation was calibrated from 1 January 2002 to 31 December 2007 and then validated from 1 January 2008 to 31 December 2010 using daily observed data, and nutrient simulation was calibrated and validated using monthly observed data during the period from July 2009 to July 2010. The results of the model performance evaluation showed that the streamflows were well simulated over the study period. The determination coefficient (R²) was 0.87, 0.77 and 0.63, and the Nash-Sutcliffe coefficient of efficiency (Ens) was 0.82, 0.76 and 0.65 for the streamflow simulation in annual, monthly and daily time-steps, respectively. Although limited to monthly observed data, satisfactory performance was still achieved during the quantitative evaluation for nutrients. The R² was 0.73, 0.82 and 0.92, and the Ens was 0.67, 0.74 and 0.86 for nitrate, ammonium and orthophosphate simulation, respectively. Some issues that may affect the application of HSPF, such as input data quality and parameter values, were also discussed. Overall, the HSPF model can be successfully used to describe streamflow and nutrient transport in a mesoscale watershed located in the East Asian monsoon climate area. This study is expected to serve as comprehensive and systematic documentation for understanding the HSPF model, supporting its wide application and helping to avoid possible misuses.
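
    The Nash-Sutcliffe coefficient of efficiency quoted above is Ens = 1 - sum((obs - sim)**2) / sum((obs - mean(obs))**2). A short generic sketch of the computation is shown below; the data are placeholders, not the Xitiaoxi observations.

        ! Generic sketch of the Nash-Sutcliffe efficiency used in such
        ! evaluations; the observed/simulated values are placeholders.
        program nse_sketch
          implicit none
          real(8) :: obs(5) = [12.0d0, 30.0d0, 25.0d0, 8.0d0, 15.0d0]
          real(8) :: sim(5) = [10.0d0, 28.0d0, 27.0d0, 9.0d0, 14.0d0]
          real(8) :: obar
          obar = sum(obs) / size(obs)
          print *, 'Ens =', 1.0d0 - sum((obs - sim)**2) / sum((obs - obar)**2)
        end program nse_sketch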

  14. High-performance computational fluid dynamics: a custom-code approach

    International Nuclear Information System (INIS)

    Fannon, James; Náraigh, Lennon Ó; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain

    2016-01-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing. (paper)

  15. High-performance computational fluid dynamics: a custom-code approach

    Science.gov (United States)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing.

  16. Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap

    Science.gov (United States)

    Muruganandam, P.; Adhikari, S. K.

    2009-10-01

    Here we develop simple numerical algorithms for both stationary and non-stationary solutions of the time-dependent Gross-Pitaevskii (GP) equation describing the properties of Bose-Einstein condensates at ultra low temperatures. In particular, we consider algorithms involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method. In a one-space-variable form of the GP equation we consider the one-dimensional, two-dimensional circularly-symmetric, and the three-dimensional spherically-symmetric harmonic-oscillator traps. In the two-space-variable form we consider the GP equation in two-dimensional anisotropic and three-dimensional axially-symmetric traps. The fully-anisotropic three-dimensional GP equation is also considered. Numerical results for the chemical potential and root-mean-square size of stationary states are reported using imaginary-time propagation programs for all the cases and compared with previously obtained results. Also presented are numerical results of non-stationary oscillation for different trap symmetries using real-time propagation programs. A set of convenient working codes developed in Fortran 77 are also provided for all these cases (twelve programs in all). In the case of two or three space variables, Fortran 90/95 versions provide some simplification over the Fortran 77 programs, and these programs are also included (six programs in all). Program summaryProgram title: (i) imagetime1d, (ii) imagetime2d, (iii) imagetime3d, (iv) imagetimecir, (v) imagetimesph, (vi) imagetimeaxial, (vii) realtime1d, (viii) realtime2d, (ix) realtime3d, (x) realtimecir, (xi) realtimesph, (xii) realtimeaxial Catalogue identifier: AEDU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data

  17. ptchg: A FORTRAN program for point-charge calculations of electric field gradients (EFGs)

    Science.gov (United States)

    Spearing, Dane R.

    1994-05-01

    ptchg, a FORTRAN program, has been developed to calculate electric field gradients (EFG) around an atomic site in crystalline solids using the point-charge direct-lattice summation method. It uses output from the crystal structure generation program Atoms as its input. As an application of ptchg, a point-charge calculation of the EFG quadrupolar parameters around the oxygen site in SiO 2 cristobalite is demonstrated. Although point-charge calculations of electric field gradients generally are limited to ionic compounds, the computed quadrupolar parameters around the oxygen site in SiO 2 cristobalite, a highly covalent material, are in good agreement with the experimentally determined values from nuclear magnetic resonance (NMR) spectroscopy.
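
    The point-charge direct-lattice summation evaluates the EFG tensor at a site as V_ab = sum_k q_k (3 r_a r_b - delta_ab r**2) / r**5 over the surrounding charges. The sketch below is a schematic illustration of that sum, not the ptchg source; the charges and positions are made up.

        ! Schematic point-charge EFG sum at the origin (not the ptchg source):
        ! V_ab = sum_k q_k (3 r_a r_b - delta_ab r**2) / r**5 over nearby charges.
        program efg_sketch
          implicit none
          integer, parameter :: nq = 2
          real(8) :: q(nq) = [ 1.0d0, -1.0d0 ]
          real(8) :: pos(3,nq), v(3,3), r(3), rr
          integer :: k, a, b
          pos(:,1) = [ 2.0d0, 0.0d0, 0.0d0 ]
          pos(:,2) = [ 0.0d0, 0.0d0, 3.0d0 ]
          v = 0.0d0
          do k = 1, nq
             r  = pos(:,k)
             rr = sqrt(sum(r*r))
             do a = 1, 3
                do b = 1, 3
                   v(a,b) = v(a,b) + q(k) * 3.0d0*r(a)*r(b) / rr**5
                end do
                v(a,a) = v(a,a) - q(k) / rr**3      ! the -delta_ab r**2 term
             end do
          end do
          print *, 'Vzz =', v(3,3)
        end program efg_sketch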

  18. Real time interrupt handling using FORTRAN IV plus under RSX-11M

    International Nuclear Information System (INIS)

    Schultz, D.E.

    1981-01-01

    A real-time data acquisition application for a linear accelerator is described. The important programming features of this application are use of connect to interrupt, a shared library, map to I/O page, and a shared data area. How you can provide rapid interrupt handling using these tools from FORTRAN IV PLUS is explained

  19. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms that rely only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on any set of UNIX workstations connected by a network, regardless of the heterogeneity of the hardware, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent-fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, and 86% on average. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and has achieved a parallel efficiency of 79% on average. (author)

  20. Cluster computing for lattice QCD simulations

    International Nuclear Information System (INIS)

    Coddington, P.D.; Williams, A.G.

    2000-01-01

    main application is lattice QCD calculations. We have a number of programs for generating and analysing lattice QCD configurations. These programs are written in a data parallel style using Fortran 90 array syntax. Initially they were run on the CM-5 by using CM Fortran compiler directives for specifying data distribution among the processors of the parallel machine. It was a simple task to convert these codes to use the equivalent directives for High Performance Fortran (HPF), which is a standard, portable data parallel language that can be used on clusters. We have used the Portland Group HPF compiler (PGHPF), which offers good support for cluster computing. We benchmarked our codes on a number of different types of machine, before eventually deciding to purchase a large cluster from Sun Microsystems, which was installed at Adelaide University in June 2000. With a peak performance of 144 GFlops, it is currently the fastest computer in Australia. The machine is a new product from Sun, known as a Sun Technical Compute Farm (TCF). It consists of a cluster of Sun E420R workstations, each of which has four 450MHz UltraSPARC II processors, with a peak speed of 3.6 GFlops per workstation. The NCFLGT cluster consists of 40 E420R workstations, giving a total of 160 processors, 160 GBytes of memory, 640 MBytes of cache memory, and 720 GBytes of disk. The standard Sun TCF product comes with either Fast or Gigabit Ethernet, with an option for using a very high-bandwidth, low-latency SCI network targeted at parallel computing applications. For most parallel lattice QCD codes, Ethernet does not offer a low enough communications latency, while SCI is very expensive and is overkill for our applications. We therefore decided upon a third-party solution for the network, and will soon be installing a high-speed Myrinet 2000 network. Currently we only have very preliminary performance results for our lattice QCD codes, which look quite promising. We will present detailed performance
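
    As an illustration of the HPF directives mentioned above, the fragment below distributes a small complex lattice field over the processors with DISTRIBUTE/ALIGN and updates it with array syntax. It is a generic sketch, not the Adelaide lattice QCD code; the field names and lattice sizes are invented.

        ! Illustrative HPF fragment (not the production QCD code): a complex
        ! lattice field distributed block-wise over the processors and updated
        ! with array syntax.  The directives are comments to a plain Fortran
        ! compiler, so the program also runs serially.
        program hpf_sketch
          implicit none
          integer, parameter :: nx = 16, nt = 32
          complex :: phi(nx, nt), phinew(nx, nt)
        !HPF$ DISTRIBUTE phi(BLOCK, BLOCK)
        !HPF$ ALIGN phinew(i,j) WITH phi(i,j)
          phi = (1.0, 0.0)
          ! nearest-neighbour average in the time direction (interior sites only)
          phinew(:, 2:nt-1) = 0.5 * (phi(:, 1:nt-2) + phi(:, 3:nt))
          print *, sum(abs(phinew))
        end program hpf_sketch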

  1. Study on the Soy Protein-Based Wood Adhesive Modified by Hydroxymethyl Phenol

    Directory of Open Access Journals (Sweden)

    Hong Lei

    2016-07-01

    Full Text Available To explain the reason why using phenol-formaldehyde (PF) resin improves the water resistance of soy-based adhesive, the performance of soy-based adhesive cross-linked with hydroxymethyl phenol (HPF) and the reaction between HPF and a common dipeptide, N-(2-l-alanyl-l-glutamine (AG), used as a model compound, were studied in this paper. The DSC and DMA results indicated a reaction between HPF and the soy-based adhesive. The soy-based adhesive cross-linked with HPF cured at a lower temperature than the adhesive without HPF. The former showed better mechanical performance and heat resistance than the latter. The ESI-MS, FT-IR and 13C-NMR results proved the reaction between HPF and AG. Because of the existence of branched ether groups in the 13C-NMR results of HPF/AG, the reaction between HPF and AG may have occurred mainly between hydroxymethyl groups and amino groups under basic conditions.

  2. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    International Nuclear Information System (INIS)

    Caon, M.; Bibbo, G.; Pattison, J.

    1996-01-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system. In addition, a virtual memory manager such as QEMM386 was required. It has successfully run on a SUN Sparcstation2. In 1995, faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab

  3. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    Energy Technology Data Exchange (ETDEWEB)

    Caon, M. [Flinders Univ. of South Australia, Bedford Park, SA (Australia); University of South Australia, SA (Australia)]; Bibbo, G. [Women's and Children's Hospital, SA (Australia)]; Pattison, J. [University of South Australia, SA (Australia)]

    1996-09-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system. In addition, a virtual memory manager such as QEMM386 was required. It has successfully run on a SUN Sparcstation2. In 1995, faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab.

  4. SLACINPT - a FORTRAN program that generates boundary data for the SLAC gun code

    International Nuclear Information System (INIS)

    Michel, W.L.; Hepburn, J.D.

    1982-03-01

    The FORTRAN program SLACINPT was written to simplify the preparation of boundary data for the SLAC gun code. In SLACINPT, the boundary is described by a sequence of straight line or arc segments. From these, the program generates the individual boundary mesh point data, required as input by the SLAC gun code

  5. QEDMOD: Fortran program for calculating the model Lamb-shift operator

    Science.gov (United States)

    Shabaev, V. M.; Tupitsyn, I. I.; Yerokhin, V. A.

    2018-02-01

    We present the Fortran package QEDMOD for computing the model QED operator hQED that can be used to account for the Lamb shift in accurate atomic-structure calculations. The package routines calculate the matrix elements of hQED with the user-specified one-electron wave functions. The operator can be used to calculate the Lamb shift in many-electron atomic systems with a typical accuracy of a few percent, either by evaluating the matrix element of hQED with the many-electron wave function, or by adding hQED to the Dirac-Coulomb-Breit Hamiltonian.

  6. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)

  7. The FORTRAN-77 version of the Karlsruhe program system KAPROS

    International Nuclear Information System (INIS)

    Moritz, N.

    1985-02-01

    The FORTRAN-77 KAPROS kernel includes some major changes compared with the version described in the KfK report 2254. The changes are documented in this report from the point of view of the system programmer. This report is meant to be a supplement to the KfK report 2254 and assumes that the reader is familiar with that report. The reader should also be familiar with the IBM operating system MVS SP1.3.2 and the usual terms of data processing. (orig.)

  8. A Fortran program for the numerical integration of the one-dimensional Schroedinger equation using exponential and Bessel fitting methods

    International Nuclear Information System (INIS)

    Cash, J.R.; Raptis, A.D.; Simos, T.E.

    1990-01-01

    An efficient algorithm is described for the accurate numerical integration of the one-dimensional Schroedinger equation. This algorithm uses a high-order, variable step Runge-Kutta like method in the region where the potential term dominates, and an exponential or Bessel fitted method in the asymptotic region. This approach can be used to compute scattering phase shifts in an efficient and reliable manner. A Fortran program which implements this algorithm is provided and some test results are given. (orig.)
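
    For comparison with the exponentially and Bessel fitted scheme of the paper, a much simpler fixed-step Numerov integrator for u''(x) = f(x) u(x) is sketched below. This is a generic textbook method, not the published algorithm; the harmonic potential and trial energy are arbitrary choices.

        ! Generic fixed-step Numerov integration of u''(x) = f(x) u(x), with
        ! f = 2(V - E) in atomic units, V = x**2/2 and trial energy E = 0.5.
        ! Not the variable-step, exponentially fitted method of the paper.
        program numerov_sketch
          implicit none
          integer, parameter :: n = 2000
          real(8), parameter :: h = 0.005d0, e = 0.5d0
          real(8) :: u(0:n), x, f0, f1, f2
          integer :: i
          u(0) = 0.0d0
          u(1) = 1.0d-6                   ! arbitrary small starting value
          do i = 1, n-1
             x  = i * h
             f0 = 2.0d0*(0.5d0*(x-h)**2 - e)
             f1 = 2.0d0*(0.5d0*x**2     - e)
             f2 = 2.0d0*(0.5d0*(x+h)**2 - e)
             u(i+1) = ( 2.0d0*(1.0d0 + 5.0d0*h*h*f1/12.0d0)*u(i)   &
                      - (1.0d0 - h*h*f0/12.0d0)*u(i-1) )           &
                      / (1.0d0 - h*h*f2/12.0d0)
          end do
          print *, 'u at x =', n*h, ' is ', u(n)
        end program numerov_sketch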

  9. Multi-processing CTH: Porting legacy FORTRAN code to MP hardware

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.L.; Elrick, M.G.; Hertel, E.S. Jr.

    1996-12-31

    CTH is a family of codes developed at Sandia National Laboratories for use in modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH has historically been run on systems where the data are directly accessible to the cpu, such as workstations and vector supercomputers. Multiple cpus can be used if all data are accessible to all cpus. This is accomplished by placing compiler directives or subroutine calls within the source code. The CTH team has implemented this scheme for Cray shared memory machines under the Unicos operating system. This technique is effective, but difficult to port to other (similar) shared memory architectures because each vendor has a different format of directives or subroutine calls. A different model of high performance computing is one where many (> 1,000) cpus work on a portion of the entire problem and communicate by passing messages that contain boundary data. Most, if not all, codes that run effectively on parallel hardware were written with a parallel computing paradigm in mind. Modifying an existing code written for serial nodes poses a significantly different set of challenges that will be discussed. CTH, a legacy FORTRAN code, has been modified to allow for solutions on distributed memory parallel computers such as the IBM SP2, the Intel Paragon, Cray T3D, or a network of workstations. The message passing version of CTH will be discussed and example calculations will be presented along with performance data. Current timing studies indicate that CTH is 2--3 times faster than equivalent C++ code written specifically for parallel hardware. CTH on the Intel Paragon exhibits linear speed up with problems that are scaled (constant problem size per node) for the number of parallel nodes.
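
    The message-passing model described above, in which each node owns a portion of the problem and exchanges boundary data with its neighbours, typically reduces to a halo (ghost-cell) exchange. The sketch below shows that pattern in Fortran with MPI; it is illustrative only and is not taken from the CTH source.

        ! Schematic halo exchange (not CTH source): each rank owns a slab of a
        ! 1D domain and swaps one ghost cell with its neighbours.
        program halo_sketch
          use mpi
          implicit none
          integer, parameter :: nloc = 100
          real(8) :: u(0:nloc+1)
          integer :: ierr, rank, nprocs, left, right
          call MPI_Init(ierr)
          call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
          call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
          left  = merge(MPI_PROC_NULL, rank-1, rank == 0)
          right = merge(MPI_PROC_NULL, rank+1, rank == nprocs-1)
          u = real(rank, 8)
          ! send last interior cell to the right, receive the left ghost cell
          call MPI_Sendrecv(u(nloc), 1, MPI_DOUBLE_PRECISION, right, 0, &
                            u(0),    1, MPI_DOUBLE_PRECISION, left,  0, &
                            MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
          ! send first interior cell to the left, receive the right ghost cell
          call MPI_Sendrecv(u(1),      1, MPI_DOUBLE_PRECISION, left,  1, &
                            u(nloc+1), 1, MPI_DOUBLE_PRECISION, right, 1, &
                            MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
          print *, 'rank', rank, 'ghost cells:', u(0), u(nloc+1)
          call MPI_Finalize(ierr)
        end program halo_sketch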

  10. A FORTRAN realization of the block adjustment of CCD frames

    Science.gov (United States)

    Yu, Yong; Tang, Zhenghong; Li, Jinling; Zhao, Ming

    A FORTRAN realization of the block adjustment (BA) of overlapping CCD frames is developed. The flowchart is introduced, including (a) data collection, (b) preprocessing, and (c) BA and object positioning. The subroutines and their functions are also described. The program package is tested with simulated data, with and without added white noise. It is also preliminarily applied to the reduction of optical positions of four extragalactic radio sources. The results show that, because of the increase in sky coverage and in the number of reference stars, the precision of the deduced positions is improved compared with single-plate adjustment.

  11. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    Science.gov (United States)

    Naito, O.

    A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.
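
    For readers unfamiliar with the input format the tool converts, a minimal NAMELIST example is sketched below: a group declared in the code and read from a file containing an &run_params ... / record. The group and variable names are invented for illustration.

        ! Minimal NAMELIST illustration (names invented).  The file 'run.nml'
        ! is expected to contain a group such as:
        !   &run_params  nsteps = 500, dt = 0.005, restart = .true.  /
        program namelist_sketch
          implicit none
          integer :: nsteps = 100
          real    :: dt = 0.01, tol = 1.0e-6
          logical :: restart = .false.
          namelist /run_params/ nsteps, dt, tol, restart
          open(unit=10, file='run.nml', status='old', action='read')
          read(10, nml=run_params)
          close(10)
          write(*, nml=run_params)      ! echo the values that were read
        end program namelist_sketch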

  12. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    Directory of Open Access Journals (Sweden)

    O. Naito

    2017-01-01

    Full Text Available A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.

  13. Types: A data abstraction package in FORTRAN

    International Nuclear Information System (INIS)

    Youssef, S.

    1990-01-01

    TYPES is a collection of Fortran programs which allow the creation and manipulation of abstract "data objects" without the need for a preprocessor. Each data object is assigned a "type" as it is created, which implies participation in a set of characteristic operations. Available types include scalars, logicals, ordered sets, stacks, queues, sequences, trees, arrays, character strings, block text, histograms, virtual and allocatable memories. A data object may contain integers, reals, or other data objects in any combination. In addition to the type-specific operations, a set of universal utilities allows for copying, input/output to disk, naming, editing, displaying, user input, interactive creation, tests for equality of contents or structure, machine-to-machine translation, and source code creation for any data object. TYPES is available on VAX/VMS, SUN 3, SPARC, DEC/Ultrix, Silicon Graphics 4D and Cray/Unicos machines. The capabilities of the package are discussed together with characteristic applications and experience in writing the GVerify package

  14. Far-field Lorenz-Mie scattering in an absorbing host medium: Theoretical formalism and FORTRAN program

    Science.gov (United States)

    Mishchenko, Michael I.; Yang, Ping

    2018-01-01

    In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.

  15. Far-Field Lorenz-Mie Scattering in an Absorbing Host Medium: Theoretical Formalism and FORTRAN Program

    Science.gov (United States)

    Mishchenko, Michael I.; Yang, Ping

    2018-01-01

    In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.

  16. Testing New Programming Paradigms with NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with the increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are: HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." In light of testing these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java-threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage

  17. Evaluating a grading change at UCSD school of medicine: pass/fail grading is associated with decreased performance on preclinical exams but unchanged performance on USMLE step 1 scores.

    Science.gov (United States)

    McDuff, Susan G R; McDuff, DeForest; Farace, Jennifer A; Kelly, Carolyn J; Savoia, Maria C; Mandel, Jess

    2014-06-30

    To assess the impact of a change in preclerkship grading system from Honors/Pass/Fail (H/P/F) to Pass/Fail (P/F) on University of California, San Diego (UCSD) medical students' academic performance. Academic performance of students in the classes of 2011 and 2012 (constant-grading classes) were collected and compared with performance of students in the class of 2013 (grading-change class) because the grading policy at UCSD SOM was changed for the class of 2013, from H/P/F during the first year (MS1) to P/F during the second year (MS2). For all students, data consisted of test scores from required preclinical courses from MS1 and MS2 years, and USMLE Step 1 scores. Linear regression analysis controlled for other factors that could be predictive of student performance (i.e., MCAT scores, undergraduate GPA, age, gender, etc.) in order to isolate the effect of the changed grading policy on academic performance. The change in grading policy in the MS2 year only, without any corresponding changes to the medical curriculum, presents a unique natural experiment with which to cleanly evaluate the effect of P/F grading on performance outcomes. After controlling for other factors, the grading policy change to P/F grading in the MS2 year had a negative impact on second-year grades relative to first-year grades (the constant-grading classes performed 1.65% points lower during their MS2 year compared to the MS1 year versus 3.25% points lower for the grading-change class, p < 0.0001), but had no observable impact on USMLE Step 1 scores. A change in grading from H/P/F grading to P/F grading was associated with decreased performance on preclinical examinations but no decrease in performance on the USMLE Step 1 examination. These results are discussed in the broader context of the multitude of factors that should be considered in assessing the merits of various grading systems, and ultimately the authors recommend the continuation of pass-fail grading at UCSD School of Medicine.

  18. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  19. Autogenic Feedback Training (Body Fortran) with Biofeedback and the Computer for Self-Improvement and Change.

    Science.gov (United States)

    Cassel, Russell N.; Sumintardja, Elmira Nasrudin

    1983-01-01

    Describes autogenic feedback training, which provides the basis whereby an individual is able to improve well-being through the use of a technique described as "body fortran," implying that one programs oneself as one programs a computer. Necessary requisites are described, including relaxation training and the management of stress. (JAC)

  20. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  1. Programs in Fortran language for reporting the results of the analyses by ICP emission spectroscopy

    International Nuclear Information System (INIS)

    Roca, M.

    1985-01-01

    Three programs, written in FORTRAN IV language, for reporting the results of the analyses by ICP emission spectroscopy from data stored in files on floppy disks have been developed. They are intended, respectively, for the analyses of: 1) waters, 2) granites and slates, and 3) different kinds of geological materials. (Author) 8 refs

  2. The FORTRAN NALAP code adapted to a microcomputer compiler

    International Nuclear Information System (INIS)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso

    2010-01-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly removing unnecessary routines, eliminating old statements, introducing new ones and including an extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and protected overpower transients. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 1980s. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the use of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop (CBCL) to be used as a heat/electric converter. (author)

  3. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly removing unnecessary routines, eliminating old statements, introducing new ones and including an extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and protected overpower transients. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 1980s. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the use of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop (CBCL) to be used as a heat/electric converter. (author)

  4. specsim: A Fortran-77 program for conditional spectral simulation in 3D

    Science.gov (United States)

    Yao, Tingting

    1998-12-01

    A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.

  5. A FORTRAN version implementation of block adjustment of CCD frames and its preliminary application

    Science.gov (United States)

    Yu, Y.; Tang, Z.-H.; Li, J.-L.; Zhao, M.

    2005-09-01

    A FORTRAN version implementation of the block adjustment (BA) of overlapping CCD frames is developed and its flowchart is shown. The program is preliminarily applied to obtain the optical positions of four extragalactic radio sources. The results show that because of the increase in the number and sky coverage of reference stars the precision of optical positions with BA is improved compared with the single CCD frame adjustment.

  6. MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0

    Science.gov (United States)

    Lawson, C. L.

    1994-01-01

    MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416 page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic , Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running Sun

  7. The inverse of winnowing: a FORTRAN subroutine and discussion of unwinnowing discrete data

    Science.gov (United States)

    Bracken, Robert E.

    2004-01-01

    This report describes an unwinnowing algorithm that utilizes a discrete Fourier transform, and a resulting Fortran subroutine that winnows or unwinnows a 1-dimensional stream of discrete data; the source code is included. The unwinnowing algorithm effectively increases (by integral factors) the number of available data points while maintaining the original frequency spectrum of a data stream. This has utility when an increased data density is required together with an availability of higher order derivatives that honor the original data.

  8. hepawk - A language for scanning high energy physics events

    International Nuclear Information System (INIS)

    Ohl, T.

    1992-01-01

    We present the programming language hepawk, designed for convenient scanning of data structures arising in the simulation of high energy physics events. The interpreter for this language has been implemented in FORTRAN-77, therefore hepawk runs on any machine with a FORTRAN-77 compiler. (orig.)

  9. Survival, growth and reproduction of cryopreserved larvae from a marine invertebrate, the Pacific oyster (Crassostrea gigas).

    Directory of Open Access Journals (Sweden)

    Marc Suquet

    This study is the first demonstration of successful post-thawing development to the reproduction stage of diploid cryopreserved larvae in an aquatic invertebrate. Survival, growth and reproductive performances were studied in juvenile and adult Pacific oysters grown from cryopreserved embryos. Cryopreservation was performed at three early stages: trochophore (13±2 hours post fertilization: hpf), early D-larvae (24±2 hpf) and late D-larvae (43±2 hpf). From the beginning (88 days) to the end of the ongrowing phase (195 days), no mortality was recorded and mean body weights did not differ between the thawed oysters and the control. At the end of the growing-out phase (982 days), survival of the oysters cryopreserved at 13±2 hpf and at 43±2 hpf was significantly higher (P<0.001) than that of the control (non-cryopreserved larvae). Only the batches cryopreserved at 24±2 hpf showed lower survival than the control. Reproductive integrity of the mature oysters, formerly cryopreserved at 13±2 hpf and 24±2 hpf, was estimated by the sperm movement and the larval development of their offspring in 13 crosses of gamete pools (five males and five females in each pool). In all but two crosses out of 13 tested (P<0.001), development rates of the offspring were not significantly different between frozen and unfrozen parents. In all, the growth and reproductive performances of oysters formerly cryopreserved at larval stages are close to those of controls. Furthermore, these performances did not differ between the three initial larval stages of cryopreservation. The utility of larvae cryopreservation is discussed and compared with the cryopreservation of gametes as a technique for selection programs and shellfish cryobanking.

  10. ASSIST - a package of Fortran routines for handling input under specified syntax rules and for management of data structures

    International Nuclear Information System (INIS)

    Sinclair, J.E.

    1991-02-01

    The ASSIST package (A Structured Storage and Input Syntax Tool) provides for Fortran programs a means for handling data structures more general than those provided by the Fortran language, and for obtaining input to the program from a file or terminal according to specified syntax rules. The syntax-controlled input can be interactive, with automatic generation of prompts, and dialogue to correct any input errors. The range of syntax rules possible is sufficient to handle lists of numbers and character strings, keywords, commands with optional clauses, and many kinds of variable-format constructions, such as algebraic expressions. ASSIST was developed for use in two large programs for the analysis of safety of radioactive waste disposal facilities, but it should prove useful for a wide variety of applications. (author)

  11. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  12. The fortran programme for the calculation of the absorption and double scattering corrections in cross-section measurements with fast neutrons using the monte Carlo method (1963); Programme fortran pour le calcul des corrections d'absorption et de double diffusion dans les mesures de sections efficaces pour les neutrons rapides par la methode de monte-carlo (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, B [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    A calculation of the double scattering and absorption corrections in fast neutron scattering experiments using the Monte Carlo method is given. The application to a cylindrical target is presented in the FORTRAN symbolic language. (author)

  13. A FORTRAN program for an IBM PC compatible computer for calculating kinematical electron diffraction patterns

    International Nuclear Information System (INIS)

    Skjerpe, P.

    1989-01-01

    This report describes a computer program which is useful in transmission electron microscopy. The program is written in FORTRAN and calculates kinematical electron diffraction patterns in any zone axis from a given crystal structure. Quite large unit cells, containing up to 2250 atoms, can be handled by the program. The program runs on both the Hercules graphics card and the standard IBM CGA card.

  14. Programs in Fortran language for reporting the results of the analyses by ICP emission spectroscopy; Programas en lenguaje Fortran para la informacion de los resultados de los analisis efectuados mediante Espectroscopia Optica de emision con fuente de plasma

    Energy Technology Data Exchange (ETDEWEB)

    Roca, M

    1985-07-01

    Three programs, written in FORTRAN IV language, for reporting the results of the analyses by ICP emission spectroscopy from data stored in files on floppy disks have been developed. They are intended, respectively, for the analyses of: 1) waters, 2) granites and slates, and 3) different kinds of geological materials. (Author) 8 refs.

  15. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas; Louie, Katherine; Bethel, E. Wes; Northen, Trent R.; Bowen, Benjamin P.

    2013-10-02

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements; these optimizations are critical enablers of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.

  16. Computer applications in physics with FORTRAN, BASIC and C

    CERN Document Server

    Chandra, Suresh

    2014-01-01

    Because of the encouraging response to the first two editions of the book, and taking into account valuable suggestions from teachers as well as students, the text for Interpolation, Differentiation, Integration, Roots of an Equation, Solution of Simultaneous Equations, Eigenvalues and Eigenvectors of a Matrix, Solution of Differential Equations, Solution of Partial Differential Equations, Monte Carlo Method and Simulation, and Computation of some Functions is improved throughout and presented in a more systematic manner using simple language. These techniques have vast applications in Science, Engineering and Technology. The C language is becoming popular in universities, colleges and engineering institutions. Besides the C language, programs are written in the FORTRAN and BASIC languages. Consequently, this book has a rather wide scope for its use. Each of the topics is developed in a systematic manner, making this book useful for graduate, postgraduate and engineering students. KEY FEATURES: Each topic is self exp...

  17. Computer simulation for prediction of performance and thermodynamic parameters of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Talawar, M.B.; Asthana, S.N.

    2004-01-01

    A new code, viz. Linear Output Thermodynamic User-friendly Software for Energetic Systems (LOTUSES), developed during this work predicts theoretical performance parameters such as density, detonation factor, velocity of detonation and detonation pressure, and thermodynamic properties such as heat of detonation, heat of explosion and volume of gaseous explosion products. The same code also assists in the prediction of possible explosive decomposition products after explosion and the power index. The developed code has been validated by calculating the parameters of standard explosives such as TNT, PETN, RDX, and HMX. Theoretically predicted parameters are accurate to within about ±5% deviation. To the best of our knowledge, no such code is reported in the literature which can predict such a wide range of characteristics of known/unknown explosives with minimum input parameters. The code can be used to obtain thermochemical and performance parameters of high energy materials (HEMs) with reasonable accuracy. The code has been developed in Visual Basic, taking advantage of the enhanced Windows environment, and thereby has advantages over conventional codes written in Fortran. The theoretically predicted HEM performance can be directly printed as well as stored in text (.txt), HTML (.htm), Microsoft Word (.doc) or Adobe Acrobat (.pdf) format on the hard disk. The output can also be copied to the clipboard and imported/pasted into other software, as in the case of other codes.

  18. FORTRAN program for calculating liquid-phase and gas-phase thermal diffusion column coefficients

    International Nuclear Information System (INIS)

    Rutherford, W.M.

    1980-01-01

    A computer program (COLCO) was developed for calculating thermal diffusion column coefficients from theory. The program, which is written in FORTRAN IV, can be used for both liquid-phase and gas-phase thermal diffusion columns. Column coefficients for the gas phase can be based on gas properties calculated from kinetic theory using tables of omega integrals or on tables of compiled physical properties as functions of temperature. Column coefficients for the liquid phase can be based on compiled physical property tables. Program listings, test data, sample output, and a users' manual are supplied as appendices.

  19. BADGER v1.0: A Fortran equation of state library

    Science.gov (United States)

    Heltemes, T. A.; Moses, G. A.

    2012-12-01

    The BADGER equation of state library was developed to enable inertial confinement fusion plasma codes to more accurately model plasmas in the high-density, low-temperature regime. The code had the capability to calculate 1- and 2-T plasmas using the Thomas-Fermi model and an individual electron accounting model. Ion equation of state data can be calculated using an ideal gas model or via a quotidian equation of state with scaled binding energies. Electron equation of state data can be calculated via the ideal gas model or with an adaptation of the screened hydrogenic model with ℓ-splitting. The ionization and equation of state calculations can be done in local thermodynamic equilibrium or in a non-LTE mode using a variant of the Busquet equivalent temperature method. The code was written as a stand-alone Fortran library for ease of implementation by external codes. EOS results for aluminum are presented that show good agreement with the SESAME library and ionization calculations show good agreement with the FLYCHK code. Program summaryProgram title: BADGERLIB v1.0 Catalogue identifier: AEND_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEND_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 41 480 No. of bytes in distributed program, including test data, etc.: 2 904 451 Distribution format: tar.gz Programming language: Fortran 90. Computer: 32- or 64-bit PC, or Mac. Operating system: Windows, Linux, MacOS X. RAM: 249.496 kB plus 195.630 kB per isotope record in memory Classification: 19.1, 19.7. Nature of problem: Equation of State (EOS) calculations are necessary for the accurate simulation of high energy density plasmas. Historically, most EOS codes used in these simulations have relied on an ideal gas model. This model is inadequate for low

  20. Dynamic Memory De-allocation in Fortran 95/2003 Derived Type Calculus

    Directory of Open Access Journals (Sweden)

    Damian W.I. Rouson

    2005-01-01

    Abstract data types developed for computational science and engineering are frequently modeled after physical objects whose state variables must satisfy governing differential equations. Generalizing the associated algebraic and differential operators to operate on the abstract data types facilitates high-level program constructs that mimic standard mathematical notation. For non-trivial expressions, multiple object instantiations must occur to hold intermediate results during the expression's evaluation. When the dimension of each object's state space is not specified at compile-time, the programmer becomes responsible for dynamically allocating and de-allocating memory for each instantiation. With the advent of allocatable components in Fortran 2003 derived types, the potential exists for these intermediate results to occupy a substantial fraction of a program's footprint in memory. This issue becomes particularly acute at the highest levels of abstraction where coarse-grained data structures predominate. This paper proposes a set of rules for de-allocating memory that has been dynamically allocated for intermediate results in derived type calculus, while distinguishing that memory from more persistent objects. The new rules are applied to the design of a polymorphic time integrator for integrating evolution equations governing dynamical systems. Associated issues of efficiency and design robustness are discussed.
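
    The pattern the paper addresses can be sketched as follows (hypothetical names, not the authors' code): a derived type with an allocatable component, a defined operator that must allocate storage for its intermediate result, and a flag marking such temporaries so that they can be freed once the expression has been evaluated.

      module field_mod
        implicit none
        type :: field
           real, allocatable :: u(:)        ! state variable, size fixed at run time
           logical           :: temporary = .false.
        end type field
        interface operator(+)
           module procedure add_fields
        end interface
      contains
        function add_fields(a, b) result(c)
          type(field), intent(in) :: a, b
          type(field)             :: c
          allocate(c%u(size(a%u)))          ! storage for the intermediate result
          c%u = a%u + b%u
          c%temporary = .true.              ! mark it for later de-allocation
        end function add_fields

        subroutine free_if_temporary(a)
          type(field), intent(inout) :: a
          if (a%temporary .and. allocated(a%u)) deallocate(a%u)
        end subroutine free_if_temporary
      end module field_mod

    In an expression such as total = a + b + c, the inner sum creates one such temporary; the rules proposed in the paper govern when bookkeeping like free_if_temporary may reclaim it without touching persistent objects.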

  1. NLEdit: A generic graphical user interface for Fortran programs

    Science.gov (United States)

    Curlett, Brian P.

    1994-01-01

    NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
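
    For readers unfamiliar with the input format NLEdit targets, the following minimal, self-contained example (hypothetical names, not part of NLEdit) shows a Fortran namelist group and the kind of file it reads.

      program read_namelist
        implicit none
        real    :: mach = 0.3, alpha = 0.0    ! defaults used when absent from the file
        integer :: niter = 100
        namelist /flow_inputs/ mach, alpha, niter

        ! A matching inputs.nml file might contain:
        !   &flow_inputs
        !     mach  = 0.8
        !     alpha = 1.5
        !   /
        open(unit=10, file='inputs.nml', status='old')
        read(10, nml=flow_inputs)
        close(10)
        print *, 'mach =', mach, ' alpha =', alpha, ' niter =', niter
      end program read_namelist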

  2. Brain stem auditory potentials evoked by clicks in the presence of high-pass filtered noise in dogs.

    Science.gov (United States)

    Poncelet, L; Deltenre, P; Coppens, A; Michaux, C; Coussart, E

    2006-04-01

    This study evaluates the effects of a high-frequency hearing loss simulated by the high-pass-noise masking method, on the click-evoked brain stem-evoked potentials (BAEP) characteristics in dogs. BAEP were obtained in response to rarefaction and condensation click stimuli from 60 dB normal hearing level (NHL, corresponding to 89 dB sound pressure level) to wave V threshold, using steps of 5 dB in eleven 58 to 80-day-old Beagle puppies. Responses were added, providing an equivalent to alternate polarity clicks, and subtracted, providing the rarefaction-condensation potential (RCDP). The procedure was repeated while constant level, high-pass filtered (HPF) noise was superposed to the click. Cut-off frequencies of the successively used filters were 8, 4, 2 and 1 kHz. For each condition, wave V and RCDP thresholds, and slope of the wave V latency-intensity curve (LIC) were collected. The intensity range at which RCDP could not be recorded (pre-RCDP range) was calculated. Compared with the no noise condition, the pre-RCDP range significantly diminished and the wave V threshold significantly increased when the superposed HPF noise reached the 4 kHz area. Wave V LIC slope became significantly steeper with the 2 kHz HPF noise. In this non-invasive model of high-frequency hearing loss, impaired hearing of frequencies from 8 kHz and above escaped detection through click BAEP study in dogs. Frequencies above 13 kHz were however not specifically addressed in this study.

  3. The comparison and selection of programming languages for high energy physics applications

    International Nuclear Information System (INIS)

    White, B.

    1991-06-01

    This paper discusses the issues surrounding the comparison and selection of a programming language to be used in high energy physics software applications. The evaluation method used was specifically devised to address the issues of particular importance to high energy physics (HEP) applications, not just the technical features of the languages considered. The method assumes a knowledge of the requirements of current HEP applications, the data-processing environments expected to support these applications and relevant non-technical issues. The languages evaluated were Ada, C, FORTRAN 77, FORTRAN 90 (formerly 8X), Pascal and PL/I. Particular emphasis is placed upon the past, present and anticipated future role of FORTRAN in HEP software applications. Upon examination of the technical and practical issues, conclusions are reached and some recommendations are made regarding the role of FORTRAN and other programming languages in the current and future development of HEP software. 54 refs

  4. BGSUB and BGFIX: FORTRAN programs to correct Ge(Li) gamma-ray spectra for photopeaks from radionuclides in background

    International Nuclear Information System (INIS)

    Cutshall, N.H.; Larsen, I.L.

    1980-03-01

    Two FORTRAN programs which provide correction and error analysis for background photopeak contributions to low-level gamma-ray spectra are discussed. A peak-by-peak background subtraction approach is used instead of channel-by-channel correction. The accuracy of corrected results near background levels is substantially improved over uncorrected values
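
    The peak-by-peak correction described above can be sketched as follows (an illustrative routine with hypothetical names, not the BGSUB/BGFIX source): the background photopeak area is scaled by the ratio of counting times, subtracted from the sample photopeak area, and the uncertainties are combined in quadrature.

      subroutine correct_peak(area_s, err_s, tlive_s, area_b, err_b, tlive_b, &
                              area_net, err_net)
        implicit none
        real, intent(in)  :: area_s, err_s     ! sample photopeak area and 1-sigma error
        real, intent(in)  :: tlive_s           ! sample live (counting) time
        real, intent(in)  :: area_b, err_b     ! background photopeak area and error
        real, intent(in)  :: tlive_b           ! background live time
        real, intent(out) :: area_net, err_net
        real :: scale

        scale    = tlive_s / tlive_b                 ! put background on the sample time base
        area_net = area_s - scale*area_b
        err_net  = sqrt(err_s**2 + (scale*err_b)**2) ! propagate errors in quadrature
      end subroutine correct_peak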

  5. Implementation of the Next Generation Attenuation (NGA) ground-motion prediction equations in Fortran and R

    Science.gov (United States)

    Kaklamanos, James; Boore, David M.; Thompson, Eric M.; Campbell, Kenneth W.

    2010-01-01

    This report presents two methods for implementing the earthquake ground-motion prediction equations released in 2008 as part of the Next Generation Attenuation of Ground Motions (NGA-West, or NGA) project coordinated by the Pacific Earthquake Engineering Research Center (PEER). These models were developed for predicting ground-motion parameters for shallow crustal earthquakes in active tectonic regions (such as California). Of the five ground-motion prediction equations (GMPEs) developed during the NGA project, four models are implemented: the GMPEs of Abrahamson and Silva (2008), Boore and Atkinson (2008), Campbell and Bozorgnia (2008), and Chiou and Youngs (2008a); these models are abbreviated as AS08, BA08, CB08, and CY08, respectively. Since site response is widely recognized as an important influence of ground motions, engineering applications typically require that such effects be modeled. The model of Idriss (2008) is not implemented in our programs because it does not explicitly include site response, whereas the other four models include site response and use the same variable to describe the site condition (VS30). We do not intend to discourage the use of the Idriss (2008) model, but we have chosen to implement the other four NGA models in our programs for those users who require ground-motion estimates for various site conditions. We have implemented the NGA models by using two separate programming languages: Fortran and R (R Development Core Team, 2010). Fortran, a compiled programming language, has been used in the scientific community for decades. R is an object-oriented language and environment for statistical computing that is gaining popularity in the statistical and scientific community. Derived from the S language and environment developed at Bell Laboratories, R is an open-source language that is freely available at http://www.r-project.org/ (last accessed 11 January 2011). In R, the functions for computing the NGA equations can be loaded as an

  6. The comparison and selection of programming languages for high energy physics applications

    International Nuclear Information System (INIS)

    White, B.; Stanford Linear Accelerator Center, CA

    1989-01-01

    In this paper a comparison is presented of programming languages in the context of high energy physics software applications. The evaluation method used was specifically devised to address the issues of particular importance to HEP applications, not just the technical features of the languages considered. The candidate languages evaluated were Ada, C, FORTRAN 77, FORTRAN 8x, Pascal and PL/I. Some conclusions are drawn and recommendations made regarding the role of FORTRAN and other programming languages in the current and future development of HEP software. (orig.)

  7. A Fortran program (RELAX3D) to solve the 3 dimensional Poisson (Laplace) equation

    International Nuclear Information System (INIS)

    Houtman, H.; Kost, C.J.

    1983-09-01

    RELAX3D is an efficient, user-friendly, interactive FORTRAN program which solves the Poisson (Laplace) equation ∇²Φ = ρ for a general 3-dimensional geometry consisting of Dirichlet and Neumann boundaries approximated to lie on a regular 3-dimensional mesh. The finite difference equations at these nodes are solved using a successive point-iterative over-relaxation method. A menu of commands, supplemented by a HELP facility, controls the dynamic loading of the subroutine describing the problem case, the iterations to converge to a solution, and the contour plotting of any desired slices, etc.
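
    The successive point-iterative over-relaxation referred to above can be sketched, for the Laplace case (right-hand side zero) on a regular mesh with fixed boundary values, as follows (a minimal illustration, not the RELAX3D source):

      subroutine sor3d(phi, nx, ny, nz, omega, niter)
        implicit none
        integer, intent(in)    :: nx, ny, nz, niter
        real,    intent(in)    :: omega                 ! relaxation factor, 1 < omega < 2
        real,    intent(inout) :: phi(nx, ny, nz)       ! boundary values preset by caller
        integer :: i, j, k, it
        real    :: resid

        do it = 1, niter
           do k = 2, nz-1
              do j = 2, ny-1
                 do i = 2, nx-1
                    ! residual of the 7-point finite-difference Laplacian
                    resid = phi(i+1,j,k) + phi(i-1,j,k) + phi(i,j+1,k) + phi(i,j-1,k) &
                          + phi(i,j,k+1) + phi(i,j,k-1) - 6.0*phi(i,j,k)
                    phi(i,j,k) = phi(i,j,k) + omega*resid/6.0
                 end do
              end do
           end do
        end do
      end subroutine sor3d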

  8. FORTRAN computer programs to process Savannah River Laboratory hydrogeochemical and stream-sediment reconnaissance data

    International Nuclear Information System (INIS)

    Zinkl, R.J.; Shettel, D.L. Jr.; D'Andrea, R.F. Jr.

    1980-03-01

    FORTRAN computer programs have been written to read, edit, and reformat the hydrogeochemical and stream-sediment reconnaissance data produced by Savannah River Laboratory for the National Uranium Resource Evaluation program. The data are presorted by Savannah River Laboratory into stream sediment, ground water, and stream water for each 1° x 2° quadrangle. Extraneous information is eliminated, and missing analyses are assigned a specific value (-99999.0). Negative analyses are below the detection limit; the absolute value of a negative analysis is assumed to be the detection limit

  9. GKS-EZ programming manual for FORTRAN-77

    Energy Technology Data Exchange (ETDEWEB)

    Beach, R.C.

    1992-01-01

    A standard has now been adopted for subroutine packages that drive graphic devices. It is known as the Graphical Kernel System (GKS), and many commercial implementations of it are available. Unfortunately, it is a difficult system to learn, and certain functions that are important for scientific use are not provided. Although GKS can be used to achieve portability of graphic applications between graphic devices, computers, and operating systems, it can also be misused in this respect. In addition, it introduces the very real problem of portability between the various implementations of GKS. This document describes a set of FORTRAN-77 subroutines that may be used to control a wide variety of graphic devices and overcome most of these problems. Some of these subroutines are from GKS itself, while others are higher-level subroutines that call GKS subroutines. These subroutines are collectively known as GKS-EZ. The purpose is to supply someone who is not a specialist in computer graphics with a flexible, robust, and easy-to-learn graphics system. Users of GKS-EZ should not have much need for a full GKS manual; this document will supply all of the information needed to use GKS-EZ except for a few items. These missing items include the numeric identification of the supported graphic devices and the procedure for linking the GKS subroutines into an executable module.

  10. Muscle layer histopathology and manometry pattern of primary esophageal motility disorders including achalasia.

    Science.gov (United States)

    Nakajima, N; Sato, H; Takahashi, K; Hasegawa, G; Mizuno, K; Hashimoto, S; Sato, Y; Terai, S

    2017-03-01

    Histopathology of muscularis externa in primary esophageal motility disorders has been characterized previously. We aimed to correlate the results of high-resolution manometry with those of histopathology. During peroral endoscopic myotomy, peroral esophageal muscle biopsy was performed in patients with primary esophageal motility disorders. Immunohistochemical staining for c-kit was performed to assess the interstitial cells of Cajal (ICCs). Hematoxylin Eosin and Azan-Mallory staining were used to detect muscle atrophy, inflammation, and fibrosis, respectively. Slides from 30 patients with the following motility disorders were analyzed: achalasia (type I: 14, type II: 5, type III: 3), one diffuse esophageal spasm (DES), two outflow obstruction (OO), four jackhammer esophagus (JE), and one nutcracker esophagus (NE). ICCs were preserved in high numbers in type III achalasia (n=9.4±1.2 cells/high power field [HPF]), compared to types I (n=3.7±0.3 cells/HPF) and II (n=3.5±1.0 cells/HPF). Moreover, severe fibrosis was only observed in type I achalasia and not in other types of achalasia, OO, or DES. Four of five patients with JE and NE had severe inflammation with eosinophilic infiltration of the esophageal muscle layer (73.8±50.3 eosinophils/HPF) with no epithelial eosinophils. One patient with JE showed a visceral myopathy pattern. Compared to types I and II, type III achalasia showed preserved ICCs, with variable data regarding DES and OO. In disorders considered as primary esophageal motility disorders, a disease category exists, which shows eosinophilic infiltration in the esophageal muscle layer with no eosinophils in the epithelium. © 2016 John Wiley & Sons Ltd.

  11. High-speed vector-processing system of the MELCOM-COSMO 900II

    Energy Technology Data Exchange (ETDEWEB)

    Masuda, K; Mori, H; Fujikake, J; Sasaki, Y

    1983-01-01

    Progress in scientific and technical calculations has led to a growing demand for high-speed vector calculations. Mitsubishi Electric has developed an integrated array processor and an automatic-vectorizing Fortran compiler as an option for the MELCOM-COSMO 900II computer system. This facilitates the performance of vector calculations and matrix calculations, achieving significant gains in cost-effectiveness. The article outlines the high-speed vector system, includes discussion of compiler structuring, and cites examples of effective system application. 1 reference.

  12. LIONS: a new set of Fortran90 codes for the SPIRAL project at GANIL

    International Nuclear Information System (INIS)

    Bertrand, P.

    1995-01-01

    In this paper a set of new computer programs developed at GANIL is presented. These codes are used to study different parts of the SPIRAL project, in particular the dynamics in the CIME cyclotron and the new extraction system of the ECR ion sources. Three important modules are described: CHA3D for the evaluation of 3D electric fields with or without space charge effects, LIONS for the motion of ions and EXTRACT for the ECRIS extraction. These modules are written in Fortran90 in a ''data parallel scheme''. They work either on UNIX workstations or parallel and vectorial computers. (orig.)

  13. High performance computing applied to simulation of the flow in pipes; Computacao de alto desempenho aplicada a simulacao de escoamento em dutos

    Energy Technology Data Exchange (ETDEWEB)

    Cozin, Cristiane; Lueders, Ricardo; Morales, Rigoberto E.M. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Engenharia Mecanica

    2008-07-01

    In recent years, the computer cluster has emerged as a real alternative for the solution of problems which require high performance computing. Consequently, the development of new applications has been driven. Among them, flow simulation represents a real computational burden, especially for large systems. This work presents a study of the use of parallel computing for numerical fluid flow simulation in pipelines. A mathematical flow model is numerically solved. In general, this procedure leads to a tridiagonal system of equations suitable to be solved by a parallel algorithm. In this work, this is accomplished by a parallel odd-even reduction method found in the literature, which is implemented in the Fortran programming language. A computational platform composed of twelve processors was used. Many measurements of CPU times for different tridiagonal system sizes and numbers of processors were obtained, highlighting the communication time between processors as an important issue to be considered when evaluating the performance of parallel applications. (author)
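
    For reference, the serial solver for such tridiagonal systems is the classical Thomas algorithm, sketched below (illustrative code; the paper itself parallelizes the solve with odd-even reduction, which is not shown here):

      subroutine thomas(n, a, b, c, d, x)
        implicit none
        integer, intent(in)  :: n
        real,    intent(in)  :: a(n), b(n), c(n), d(n)   ! sub-, main-, super-diagonal, rhs
        real,    intent(out) :: x(n)
        real :: cp(n), dp(n), m
        integer :: i

        ! forward elimination
        cp(1) = c(1) / b(1)
        dp(1) = d(1) / b(1)
        do i = 2, n
           m = b(i) - a(i)*cp(i-1)
           cp(i) = c(i) / m
           dp(i) = (d(i) - a(i)*dp(i-1)) / m
        end do

        ! back substitution
        x(n) = dp(n)
        do i = n-1, 1, -1
           x(i) = dp(i) - cp(i)*x(i+1)
        end do
      end subroutine thomas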

  14. New neuro-fuzzy system-based holey polymer fibers drawing process

    Science.gov (United States)

    Mohammed Salim, Omar Nameer

    2017-10-01

    Furnace temperature (T), draw tension (TE), and draw ratio (Dr) are the main parameters that could directly affect holey polymer fiber (HPF) production during the drawing stage. Therefore, a suitable mechanism to control (T), (TE), and (Dr) is required to enhance the HPF production process. The conventional approaches, such as observation and tuning techniques, experience many difficulties in realizing accurate values of (T), (TE), and (Dr), in addition to being expensive and time consuming. Therefore, an artificial intelligence model using the adaptive neuro-fuzzy inference system (ANFIS) method is proposed as an effective solution to achieve accurate values of the main parameters that affect HPF drawing. Three ANFIS models are developed and tested to determine which one has the best performance for emulating the operation of the HPF drawing tower. The ANFIS model with a gbell MF provides better performance than the Gaussian MF ANFIS model and the triangular MF ANFIS model in terms of lower mean absolute error and mean square error. Furthermore, the proposed gbell MF model achieved the highest Q-Q response, which indicates the excellent performance of this model.

  15. Perbandingan Bubble Sort dengan Insertion Sort pada Bahasa Pemrograman C dan Fortran

    Directory of Open Access Journals (Sweden)

    Reina Reina

    2013-12-01

    Sorting is a basic algorithm studied by computer science students. Sorting algorithms are the basis of other algorithms such as searching and pattern-matching algorithms. Bubble sort is a popular basic sorting algorithm due to the ease with which it can be implemented. Besides bubble sort, there is insertion sort. It is less popular than bubble sort because its algorithm is more difficult. This paper discusses the processing time of insertion sort and bubble sort with two kinds of data: the first is randomized data, and the second is data in descending order. The comparison of processing time was carried out in two programming languages, C and FORTRAN. The result shows that bubble sort needs more time than insertion sort does.
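
    The two algorithms compared in the paper can be written in a few lines of Fortran (illustrative versions, not the authors' implementations):

      subroutine bubble_sort(a, n)
        implicit none
        integer, intent(in)    :: n
        real,    intent(inout) :: a(n)
        integer :: i, j
        real    :: tmp
        do i = 1, n-1
           do j = 1, n-i
              if (a(j) > a(j+1)) then      ! swap adjacent elements that are out of order
                 tmp = a(j); a(j) = a(j+1); a(j+1) = tmp
              end if
           end do
        end do
      end subroutine bubble_sort

      subroutine insertion_sort(a, n)
        implicit none
        integer, intent(in)    :: n
        real,    intent(inout) :: a(n)
        integer :: i, j
        real    :: key
        do i = 2, n
           key = a(i)
           j = i - 1
           do while (j >= 1)
              if (a(j) <= key) exit        ! insertion point found
              a(j+1) = a(j)                ! shift larger elements to the right
              j = j - 1
           end do
           a(j+1) = key
        end do
      end subroutine insertion_sort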

  16. SPECFUN1, Portable Special FORTRAN Routines with Test Drivers

    International Nuclear Information System (INIS)

    Cody, W.J.

    1993-01-01

    1 - Description of program or function: SPECFUN is a collection of transportable FORTRAN subroutines and test drivers to evaluate certain special functions. The individual subroutines are - Name/Description: ALGAMA: Log gamma function, DAW: Dawson's integral, EI: Exponential integrals, ERF: Error function, ERFC: Complementary error function, GAMMA: Gamma function, I0: Bessel function I-sub-0, I1: Bessel function I-sub-1, J0Y0: Bessel functions J-sub-0 and Y-sub-0, J1Y1: Bessel functions J-sub-1 and Y-sub-1, K0: Bessel function K-sub-0, K1: Bessel function K-sub-1, PSI: Logarithmic derivative of the gamma function, REN: Random number generator, RIBESL: Bessel function I-sub-(N,ALPHA), RJBESL: Bessel function J-sub-(N,ALPHA), RKBESL: Bessel function K-sub-(N,ALPHA), RYBESL: Bessel function Y-sub-(N,ALPHA), MACHAR: Machine-dependent constants. 2 - Method of solution: SPECFUN generally uses rational mini-max approximations for functions of one variable and recurrence relations for functions of two or more variables. 3 - Restrictions on the complexity of the problem: Accuracy is targeted at between 18 and 20 significant decimal digits

  17. SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran

    Science.gov (United States)

    Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.

    2008-03-01

    We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more. Program summaryTitle of program:SMMP Catalogue identifier:ADOJ_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language used:FORTRAN, Python No. of lines in distributed program, including test data, etc.:52 105 No. of bytes in distributed program, including test data, etc.:599 150 Distribution format:tar.gz Computer:Platform independent Operating system:OS independent RAM:2 Mbytes Classification:3 Does the new version supersede the previous version?:Yes Nature of problem:Molecular mechanics computations and Monte Carlo simulation of proteins. Solution method:Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized ensembles. Reasons for new version:API changes and increased functionality. Summary of revisions:Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings. Restrictions:The consumed CPU time increases with the size of protein molecule. Running time:Depends on the size of the simulated molecule.

  18. FORTRAN routines for calculating water thermodynamic properties for use in transient thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Green, C.

    1979-12-01

    A set of FORTRAN subroutines is described for calculating water thermodynamic properties. These were written for use in a transient thermal-hydraulics program, where speed of execution is paramount. The choice of which subroutines to optimise depends on the primary variables in the thermal-hydraulics code. In this particular case the subroutine which has been optimised is the one which calculates pressure and specific enthalpy given the specific volume and the specific internal energy. Another two subroutines are described which complete a self-consistent set. These calculate the specific volume and the temperature given the pressure and the specific enthalpy, and the specific enthalpy and the specific volume given the pressure and the temperature (or the quality). The accuracy is high near the saturation lines, typically less than 1% relative error, and decreases as the fluid becomes more subcooled in the liquid region or more superheated in the steam region. This behaviour is inherent in the method which uses quantities defined on the saturation lines and assumes that certain derivatives are constant for excursions away from these saturation lines. The accuracy and speed of the subroutines are discussed in detail in this report. (author)

  19. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  20. High performance GPU processing for inversion using uniform grid searches

    Science.gov (United States)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, based on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid-search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common computers based on a CPU. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides a much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on
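
    A heavily simplified sketch of the grid-search idea, for a model with two unknowns and a placeholder forward model (all names and the toy model are hypothetical, not the TOPINV implementation), is given below: every gridpoint is tested against all observation inequalities |f_i(m) - d_i| <= k*sigma_i, and the points satisfying all of them form the cluster from which the statistical moments are computed.

      subroutine grid_scan(nobs, d, sigma, k, x1lo, x1hi, x2lo, x2hi, ngrid, nkept)
        implicit none
        integer, intent(in)  :: nobs, ngrid
        real,    intent(in)  :: d(nobs), sigma(nobs), k
        real,    intent(in)  :: x1lo, x1hi, x2lo, x2hi
        integer, intent(out) :: nkept
        integer :: i1, i2, i
        real    :: x1, x2, f
        logical :: ok

        nkept = 0
        do i1 = 0, ngrid-1
           x1 = x1lo + (x1hi-x1lo)*real(i1)/real(ngrid-1)
           do i2 = 0, ngrid-1
              x2 = x2lo + (x2hi-x2lo)*real(i2)/real(ngrid-1)
              ok = .true.
              do i = 1, nobs
                 f = forward_model(i, x1, x2)           ! user-supplied forward computation
                 if (abs(f - d(i)) > k*sigma(i)) then
                    ok = .false.
                    exit
                 end if
              end do
              if (ok) nkept = nkept + 1                 ! gridpoint belongs to the cluster
           end do
        end do
      contains
        real function forward_model(i, x1, x2)
          integer, intent(in) :: i
          real,    intent(in) :: x1, x2
          forward_model = x1 + real(i)*x2               ! placeholder linear model
        end function forward_model
      end subroutine grid_scan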

  1. MONTEC, an interactive fortran program to simulate radiation dose and dose-rate responses of populations

    International Nuclear Information System (INIS)

    Perry, K.A.; Szekely, J.G.

    1983-09-01

    The computer program MONTEC was written to simulate the distribution of responses in a population whose members are exposed to multiple radiation doses at variable dose rates. These doses and dose rates are randomly selected from lognormal distributions. The individual radiation responses are calculated from three equations, which include dose and dose-rate terms. Other response-dose/rate relationships or distributions can be incorporated by the user as the need arises. The purpose of this documentation is to provide a complete operating manual for the program. This version is written in FORTRAN-10 for the DEC system PDP-10
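
    The sampling step described above can be illustrated with a short routine that draws a lognormal deviate via the Box-Muller transform (a hypothetical sketch using the Fortran 90 intrinsic generator; MONTEC itself is written in FORTRAN-10 and uses its own generator and response equations):

      function sample_lognormal(mu_log, sigma_log) result(x)
        implicit none
        real, intent(in) :: mu_log, sigma_log   ! mean and std. dev. of ln(x)
        real :: x, u1, u2, z
        real, parameter :: pi = 3.14159265

        call random_number(u1)                          ! u1 in [0,1)
        call random_number(u2)
        z = sqrt(-2.0*log(1.0 - u1)) * cos(2.0*pi*u2)   ! standard normal deviate (Box-Muller)
        x = exp(mu_log + sigma_log*z)                   ! lognormal deviate
      end function sample_lognormal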

  2. Proceedings of workshop on 'future in HEP computing'

    International Nuclear Information System (INIS)

    Karita, Yukio; Amako, Katsuya; Watase, Yoshiyuki

    1993-12-01

    The workshop was held on March 11 and 12, 1993, at the National Laboratory for High Energy Physics (KEK). A large movement is under way from conventional systems centered on large general-purpose computers toward down-sizing and distributed processing, but its destination is not yet clear. As concrete themes of the 'future in HEP computing', the problems of down-sizing and approaches to them, future perspectives for networks, and the adoption of software engineering and object orientation were taken up. At the workshop, lectures were given on requirements in HEP computing, possible solutions from Hitachi and Fujitsu, and network computing with workstations, regarding down-sizing and HEP computing; approaches at INS and KEK, regarding future computing systems in HEP laboratories; user requirements for future networks, network services available in 1995-2005, and multi-media communication and network protocols, regarding future networks; and the object-oriented approach to software development, OOP for real-time data acquisition and accelerator control, ProdiG activities and the future of FORTRAN, F90 and HPF, regarding OOP and physics, and trends in software development methodology. (K.I.)

  3. HAUFES : a FORTRAN code for the calculation of compound nuclear cross-sections by Hauser-Feshbach theory

    International Nuclear Information System (INIS)

    Viyogi, Y.P.; Ganguly, N.K.

    1975-01-01

    The FORTRAN code described in the report has been developed for the BESM-6 computer with a view to calculating the cross-sections of reactions proceeding via the formation of a compound nucleus for all open two-body reaction channels, using Hauser-Feshbach theory with Moldauer's correction for the fluctuation of level widths. The code can also be used to analyse data from 'crystal blocking' experiments to obtain nuclear level densities. The report describes the input-output specifications along with a short account of the algorithm of the program. (author)

  4. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    Science.gov (United States)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x ∈ (-1, +1) (on the cut) and |x| > 1, and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of comments from Prof. James Bremer of the UC Davis Mathematics Department, who discovered that there were errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
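
    The regular functions on the cut satisfy the standard forward recurrence, sketched below (a minimal illustration, not the published code, and omitting the normalization and the irregular Q_l^m part handled by the full program):

      function plm_on_cut(l, m, x) result(p)
        implicit none
        integer, intent(in) :: l, m                    ! requires l >= m >= 0
        real(kind(1.0d0)), intent(in) :: x             ! -1 < x < 1
        real(kind(1.0d0)) :: p
        real(kind(1.0d0)) :: pmm, pmmp1, pll, somx2
        integer :: i, ll

        ! start from P_m^m(x) = (-1)^m (2m-1)!! (1-x^2)^(m/2)
        pmm = 1.0d0
        somx2 = sqrt((1.0d0 - x)*(1.0d0 + x))
        do i = 1, m
           pmm = -pmm * real(2*i - 1, kind(1.0d0)) * somx2
        end do
        if (l == m) then
           p = pmm
           return
        end if

        ! P_{m+1}^m(x) = x (2m+1) P_m^m(x)
        pmmp1 = x * real(2*m + 1, kind(1.0d0)) * pmm
        if (l == m + 1) then
           p = pmmp1
           return
        end if

        ! (l-m) P_l^m = x (2l-1) P_{l-1}^m - (l+m-1) P_{l-2}^m
        do ll = m + 2, l
           pll = (x*real(2*ll - 1, kind(1.0d0))*pmmp1 - real(ll + m - 1, kind(1.0d0))*pmm) &
                 / real(ll - m, kind(1.0d0))
           pmm = pmmp1
           pmmp1 = pll
        end do
        p = pmmp1
      end function plm_on_cut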

  5. The Julia programming language: the future of scientific computing

    Science.gov (United States)

    Gibson, John

    2017-11-01

    Julia is an innovative new open-source programming language for high-level, high-performance numerical computing. Julia combines the general-purpose breadth and extensibility of Python, the ease-of-use and numeric focus of Matlab, the speed of C and Fortran, and the metaprogramming power of Lisp. Julia uses type inference and just-in-time compilation to compile high-level user code to machine code on the fly. A rich set of numeric types and extensive numerical libraries are built-in. As a result, Julia is competitive with Matlab for interactive graphical exploration and with C and Fortran for high-performance computing. This talk interactively demonstrates Julia's numerical features and benchmarks Julia against C, C++, Fortran, Matlab, and Python on a spectral time-stepping algorithm for a 1d nonlinear partial differential equation. The Julia code is nearly as compact as Matlab and nearly as fast as Fortran. This material is based upon work supported by the National Science Foundation under Grant No. 1554149.

  6. LIONS: a new set of Fortran 90 codes for the SPIRAL project at GANIL

    International Nuclear Information System (INIS)

    Bertrand, P.

    1994-01-01

    A set of new computer programs developed at GANIL is presented; these codes are used to study different parts of the SPIRAL project (a new radioactive ion beam facility), and particularly the dynamics in the CIME cyclotron, its injection inflector, and the new extraction system of the ECR ion sources. Three important modules are described: CHA3D for the evaluation of 3D electric fields with or without space charge effects, LIONS for the motion of ions and EXTRACT for the ECRIS extraction. These modules are written in Fortran 90 in a 'data parallel scheme'. They run either on UNIX workstations or on parallel and vector computers. (author). 5 figs., 5 refs

  7. Simultaneous stimulation of glycolysis and gluconeogenesis by feeding in the anterior intestine of the omnivorous GIFT tilapia, Oreochromis niloticus

    Directory of Open Access Journals (Sweden)

    Yong-Jun Chen

    2017-06-01

    The present study was performed to investigate the roles of anterior intestine in the postprandial glucose homeostasis of the omnivorous Genetically Improved Farmed Tilapia (GIFT). Sub-adult fish (about 173 g) were sampled at 0, 1, 3, 8 and 24 h post feeding (HPF) after 36 h of food deprivation, and the time course of changes in intestinal glucose transport, glycolysis, glycogenesis and gluconeogenesis at the transcription and enzyme activity level, as well as plasma glucose contents, were analyzed. Compared with 0 HPF (fasting for 36 h), the mRNA levels of both ATP-dependent sodium/glucose cotransporter 1 and facilitated glucose transporter 2 increased during 1-3 HPF, decreased at 8 HPF and then leveled off. These results indicated that intestinal uptake of glucose and its transport across the intestine to blood mainly occurred during 1-3 HPF, which subsequently resulted in the increase of plasma glucose level at the same time. Intestinal glycolysis was stimulated during 1-3 HPF, while glucose storage as glycogen was induced during 3-8 HPF. Unexpectedly, intestinal gluconeogenesis (IGNG) was also strongly induced during 1-3 HPF at the state of nutrient assimilation. The mRNA abundance and enzyme activities of glutamic-pyruvic and glutamic-oxaloacetic transaminases increased during 1-3 HPF, suggesting that the precursors of IGNG might originate from some amino acids. Taken together, it was concluded that the anterior intestine played an important role in the regulation of postprandial glucose homeostasis in omnivorous tilapia, as it represented significant glycolytic potential and glucose storage. It was interesting that postprandial IGNG was stimulated by feeding temporarily, and its biological significance remains to be elucidated in fish.

  8. Simultaneous stimulation of glycolysis and gluconeogenesis by feeding in the anterior intestine of the omnivorous GIFT tilapia, Oreochromis niloticus.

    Science.gov (United States)

    Chen, Yong-Jun; Zhang, Ti-Yin; Chen, Hai-Yan; Lin, Shi-Mei; Luo, Li; Wang, De-Shou

    2017-06-15

    The present study was performed to investigate the roles of anterior intestine in the postprandial glucose homeostasis of the omnivorous Genetically Improved Farmed Tilapia (GIFT). Sub-adult fish (about 173 g) were sampled at 0, 1, 3, 8 and 24 h post feeding (HPF) after 36 h of food deprivation, and the time course of changes in intestinal glucose transport, glycolysis, glycogenesis and gluconeogenesis at the transcription and enzyme activity level, as well as plasma glucose contents, were analyzed. Compared with 0 HPF (fasting for 36 h), the mRNA levels of both ATP-dependent sodium/glucose cotransporter 1 and facilitated glucose transporter 2 increased during 1-3 HPF, decreased at 8 HPF and then leveled off. These results indicated that intestinal uptake of glucose and its transport across the intestine to blood mainly occurred during 1-3 HPF, which subsequently resulted in the increase of plasma glucose level at the same time. Intestinal glycolysis was stimulated during 1-3 HPF, while glucose storage as glycogen was induced during 3-8 HPF. Unexpectedly, intestinal gluconeogenesis (IGNG) was also strongly induced during 1-3 HPF at the state of nutrient assimilation. The mRNA abundance and enzyme activities of glutamic-pyruvic and glutamic-oxaloacetic transaminases increased during 1-3 HPF, suggesting that the precursors of IGNG might originate from some amino acids. Taken together, it was concluded that the anterior intestine played an important role in the regulation of postprandial glucose homeostasis in omnivorous tilapia, as it represented significant glycolytic potential and glucose storage. It was interesting that postprandial IGNG was stimulated by feeding temporarily, and its biological significance remains to be elucidated in fish. © 2017. Published by The Company of Biologists Ltd.

  9. Hormetic effect induced by depleted uranium in zebrafish embryos

    International Nuclear Information System (INIS)

    Ng, C.Y.P.; Cheng, S.H.; Yu, K.N.

    2016-01-01

    Highlights: • Studied hormetic effect induced by uranium (U) in embryos of zebrafish (Danio rerio). • Hormesis observed at 24 hpf for exposures to 10 μg/l of depleted U (DU). • Hormesis not observed before 30 hpf for exposures to 100 μg/l of DU. • Hormetic effect induced in zebrafish embryos in a dose- and time-dependent manner. - Abstract: The present work studied the hormetic effect induced by uranium (U) in embryos of zebrafish (Danio rerio) using apoptosis as the biological endpoint. Hormetic effect is characterized by biphasic dose-response relationships showing a low-dose stimulation and a high-dose inhibition. Embryos were dechorionated at 4 h post fertilization (hpf), and were then exposed to 10 or 100 μg/l depleted uranium (DU) in uranyl acetate solutions from 5 to 6 hpf. For exposures to 10 μg/l DU, the amounts of apoptotic signals in the embryos were significantly increased at 20 hpf but were significantly decreased at 24 hpf, which demonstrated the presence of U-induced hormesis. For exposures to 100 μg/l DU, the amounts of apoptotic signals in the embryos were significantly increased at 20, 24 and 30 hpf. Hormetic effect was not shown but its occurrence between 30 and 48 hpf could not be ruled out. In conclusion, hormetic effect could be induced in zebrafish embryos in a concentration- and time-dependent manner.

  10. Hormetic effect induced by depleted uranium in zebrafish embryos

    Energy Technology Data Exchange (ETDEWEB)

    Ng, C.Y.P. [Department of Physics and Materials Science, City University of Hong Kong (Hong Kong); Cheng, S.H., E-mail: bhcheng@cityu.edu.hk [Department of Biomedical Sciences, City University of Hong Kong (Hong Kong); State Key Laboratory in Marine Pollution, City University of Hong Kong (Hong Kong); Yu, K.N., E-mail: peter.yu@cityu.edu.hk [Department of Physics and Materials Science, City University of Hong Kong (Hong Kong); State Key Laboratory in Marine Pollution, City University of Hong Kong (Hong Kong)

    2016-06-15

    Highlights: • Studied hormetic effect induced by uranium (U) in embryos of zebrafish (Danio rerio). • Hormesis observed at 24 hpf for exposures to 10 μg/l of depleted U (DU). • Hormesis not observed before 30 hpf for exposures to 100 μg/l of DU. • Hormetic effect induced in zebrafish embryos in a dose- and time-dependent manner. - Abstract: The present work studied the hormetic effect induced by uranium (U) in embryos of zebrafish (Danio rerio) using apoptosis as the biological endpoint. Hormetic effect is characterized by biphasic dose-response relationships showing a low-dose stimulation and a high-dose inhibition. Embryos were dechorionated at 4 h post fertilization (hpf), and were then exposed to 10 or 100 μg/l depleted uranium (DU) in uranyl acetate solutions from 5 to 6 hpf. For exposures to 10 μg/l DU, the amounts of apoptotic signals in the embryos were significantly increased at 20 hpf but were significantly decreased at 24 hpf, which demonstrated the presence of U-induced hormesis. For exposures to 100 μg/l DU, the amounts of apoptotic signals in the embryos were significantly increased at 20, 24 and 30 hpf. Hormetic effect was not shown but its occurrence between 30 and 48 hpf could not be ruled out. In conclusion, hormetic effect could be induced in zebrafish embryos in a concentration- and time-dependent manner.

  11. CWF and TABLE - Two Fortran programmes for the calculation of Coulomb penetration and shift factors

    International Nuclear Information System (INIS)

    Norton, D.S.; James, M.F.

    1965-12-01

    CWF and TABLE are Fortran programmes, written for the IBM 7090 and English-Electric Leo Marconi KDF9 computers, that calculate the penetration and shift factors for a charged particle in a Coulomb field. The numerical methods used are those of Lutz and Karvelis. The two programmes are very similar. Input to TABLE is in the form of the centre-of-mass co-ordinates. CWF is intended for use in calculating cross-sections for neutron-induced reactions which result in charged particle emission, and the input is in the form of the neutron energy in the laboratory frame of reference, together with other necessary reaction data. (author)

  12. Beam dynamics simulations using a parallel version of PARMILA

    International Nuclear Information System (INIS)

    Ryne, R.D.

    1996-01-01

    The computer code PARMILA has been the primary tool for the design of proton and ion linacs in the United States for nearly three decades. Previously it was sufficient to perform simulations with of order 10000 particles, but recently the need to perform high resolution halo studies for next-generation, high intensity linacs has made it necessary to perform simulations with of order 100 million particles. With the advent of massively parallel computers such simulations are now within reach. Parallel computers already make it possible, for example, to perform beam dynamics calculations with tens of millions of particles, requiring over 10 GByte of core memory, in just a few hours. Also, parallel computers are becoming easier to use thanks to the availability of mature, Fortran-like languages such as Connection Machine Fortran and High Performance Fortran. We will describe our experience developing a parallel version of PARMILA and the performance of the new code
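
    To give a flavour of the data-parallel style referred to above, the fragment below shows how a particle coordinate array might be block-distributed and pushed with an HPF directive. It is an illustrative sketch only, not code from PARMILA; the array names, sizes and the simple drift update are assumptions.

      ! Illustrative only (not PARMILA source): block-distributing particle
      ! coordinates with an HPF directive and applying a simple drift update.
      program push_demo
        implicit none
        integer, parameter :: np = 1000000
        real, dimension(np) :: x, xp          ! position and divergence (assumed names)
!HPF$   DISTRIBUTE (BLOCK) :: x, xp           ! spread the particles over processors
        real, parameter :: ds = 0.01          ! drift length (arbitrary)
        integer :: i
        x  = 0.0
        xp = 1.0e-3
        ! Each processor updates only the particles it owns; a pure drift
        ! needs no interprocessor communication.
        forall (i = 1:np) x(i) = x(i) + xp(i)*ds
        print *, 'first particle x =', x(1)
      end program push_demo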

  13. Beam dynamics simulations using a parallel version of PARMILA

    International Nuclear Information System (INIS)

    Ryne, Robert

    1996-01-01

    The computer code PARMILA has been the primary tool for the design of proton and ion linacs in the United States for nearly three decades. Previously it was sufficient to perform simulations with of order 10000 particles, but recently the need to perform high resolution halo studies for next-generation, high intensity linacs has made it necessary to perform simulations with of order 100 million particles. With the advent of massively parallel computers such simulations are now within reach. Parallel computers already make it possible, for example, to perform beam dynamics calculations with tens of millions of particles, requiring over 10 GByte of core memory, in just a few hours. Also, parallel computers are becoming easier to use thanks to the availability of mature, Fortran-like languages such as Connection Machine Fortran and High Performance Fortran. We will describe our experience developing a parallel version of PARMILA and the performance of the new code. (author)

  14. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    Science.gov (United States)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented that applies the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of the integral invariants of the shallow-water equations. We then detail the algorithms embodied in the EXSHALL code, particularly those related to the efficiency and stability of the T-Z scheme and to the quadratic constraint restoration method, which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code, with emphasis on the algorithms implemented, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
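
    Of the three filters mentioned, the Robert (Asselin) time filter is the simplest to state. The sketch below, which is not code from EXSHALL, shows how it is typically applied to one prognostic field after a leapfrog step; the filter coefficient nu is an assumed input.

      ! Illustrative only (not EXSHALL source): the Robert-Asselin time filter
      ! used to damp the computational mode of a leapfrog scheme.
      subroutine robert_filter(n, nu, h_old, h_new, h_now)
        implicit none
        integer, intent(in)    :: n
        real,    intent(in)    :: nu                   ! filter coefficient, e.g. ~0.01-0.1
        real,    intent(in)    :: h_old(n), h_new(n)   ! time levels n-1 and n+1
        real,    intent(inout) :: h_now(n)             ! time level n, filtered in place
        h_now = h_now + nu*(h_new - 2.0*h_now + h_old)
      end subroutine robert_filter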

  15. RELABEL2007, Labels FORTRAN Statements in ENDF Format Processing Programs

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: RELABEL labels an ENDF/B pre-processing program so that statement labels are in increasing order in increments of 10 within each routine, and cards are identified in columns 73-80 by three alphanumeric characters in columns 73-75 and sequence numbers in columns 76-80 in increments of 10. IAEA1314/10: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: Relabel VERS. 2007-1 (JAN. 2007): no change since the March 2004 version. 2 - Method of solution: 3 - Restrictions on the complexity of the problem: RELABEL is designed to maintain ENDF/B processing programs which use a restricted set of FORTRAN statements. As such, this program is not completely general.

  16. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    Science.gov (United States)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained with the explicit shared memory model. A similar degradation is seen on the CM-5, where applications using the native message-passing library CMMD also run about 4 to 5 times slower than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. The comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems, such as the TMC CM-5, Intel Paragon, Cray C90, IBM SP1, etc., is presented.

  17. Applications Performance Under MPL and MPI on NAS IBM SP2

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    On July 5, 1994, an IBM Scalable POWER parallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of a RISC 6000/590 workstation module with a 66.5 MHz clock that can perform four floating point operations per clock, for a peak performance of 266 Mflop/s. By the end of 1994, the 64 nodes of the IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS 6000/590 will help application scientists with the porting, optimizing, and tuning of codes from other machines such as the CRAY C90 and the Paragon to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating point units, and data cache optimization on the RS 6000/590 are illustrated, with examples giving the performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM runs at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.
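
    To make the kind of hand tuning mentioned above concrete, the hedged Fortran sketch below shows a plain DAXPY loop next to a 4-way unrolled variant of the sort used to keep both floating point units of the RS 6000/590 busy. It is illustrative only, not the tuned routine from the benchmark, and for brevity the unrolled version assumes n is a multiple of 4.

      ! Illustrative only: plain and manually unrolled DAXPY loops.
      subroutine daxpy_plain(n, a, x, y)
        implicit none
        integer, intent(in)             :: n
        double precision, intent(in)    :: a, x(n)
        double precision, intent(inout) :: y(n)
        integer :: i
        do i = 1, n
           y(i) = y(i) + a*x(i)
        end do
      end subroutine daxpy_plain

      subroutine daxpy_unrolled4(n, a, x, y)
        implicit none
        integer, intent(in)             :: n
        double precision, intent(in)    :: a, x(n)
        double precision, intent(inout) :: y(n)
        integer :: i
        do i = 1, n, 4                      ! four independent updates per iteration
           y(i)   = y(i)   + a*x(i)
           y(i+1) = y(i+1) + a*x(i+1)
           y(i+2) = y(i+2) + a*x(i+2)
           y(i+3) = y(i+3) + a*x(i+3)
        end do
      end subroutine daxpy_unrolled4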

  18. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    International Nuclear Information System (INIS)

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQ^T. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and improve or compute the eigenvector x in O(n^2) work, where n is the order of the matrix A.
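
    The sketch below is not the SICEDR algorithm itself (which works from the Schur factors Q and T), but it illustrates the same idea of an iterative-improvement step for an approximate real eigenpair: a Newton-type correction obtained from a bordered linear system, solved here with LAPACK's DGESV. The routine name and the single-step structure are assumptions for illustration.

      ! Sketch only: one Newton-type correction of an approximate eigenpair.
      ! Bordered system:  [A - lambda*I   -x] [dx     ]   [-(A x - lambda x)]
      !                   [    x^T         0] [dlambda] = [        0        ]
      subroutine refine_eigpair(n, a, lambda, x)
        implicit none
        integer, intent(in)             :: n
        double precision, intent(in)    :: a(n,n)
        double precision, intent(inout) :: lambda, x(n)
        double precision :: big(n+1,n+1), rhs(n+1)
        integer :: ipiv(n+1), info, i
        big = 0.0d0
        big(1:n,1:n) = a
        do i = 1, n
           big(i,i)   = big(i,i) - lambda
           big(i,n+1) = -x(i)
           big(n+1,i) =  x(i)
        end do
        rhs(1:n) = -(matmul(a, x) - lambda*x)
        rhs(n+1) = 0.0d0
        call dgesv(n+1, 1, big, n+1, ipiv, rhs, n+1, info)
        if (info /= 0) return            ! leave the pair unchanged on failure
        x      = x + rhs(1:n)
        lambda = lambda + rhs(n+1)
      end subroutine refine_eigpair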

  19. SAMPO, A Fortran IV Program for Computer Analysis of Gamma Spectra from Ge(Li) Detectors, and for Other Spectra with Peaks

    Energy Technology Data Exchange (ETDEWEB)

    Routti, Jorma T.

    1969-10-20

    SAMPO is a Fortran IV program written to perform the data-reduction analysis described by J. T. Routti and S. G. Prussin in Photopeak Method for the Computer Analysis of Gamma-Ray Spectra from Semiconductor Detectors, Nuclear Instruments and Methods 72, 125-142 (1969). The code has also been used to analyze other spectra with peaks and continua. Program SAMPO can be used for an automatic off-line or an interactive on-line analysis. It includes algorithms for line-shape, energy, and efficiency calibrations, and peak-search and peak-fitting routines. Different options are available to make the code applicable to accurate nuclear spectroscopic work as well as to routine data reduction. The mathematical methods and their coding are briefly described. Instructions for using the program and for preparing input data are given and the optimal strategies for running the code are discussed. Instructions are given for using the LRL program library version of SAMPO and for obtaining source decks.

  20. EFFDOS - a FORTRAN-77-code for the calculation of the effective dose equivalent

    International Nuclear Information System (INIS)

    Baer, M.; Honcu, S.; Huebschmann, W.

    1984-01-01

    The FORTRAN-77 code EFFDOS calculates the effective dose equivalent according to ICRP 26 due to the long-term emission of radionuclides into the atmosphere for the following exposure pathways: inhalation, ingestion, γ-ground irradiation (γ-irradiation by radionuclides deposited on the ground) and β- or γ-submersion (irradiation by the passing radioactive cloud). For calculating the effective dose equivalent at a single spot it is necessary to put in the diffusion factor and, if need be, the washout factor; otherwise EFFDOS calculates the input data for the computer codes ISOLA III and WOLGA-1, which then compute the atmospheric diffusion, ground deposition and local dose equivalent distribution for the requested exposure pathway. Atmospheric diffusion, deposition and radionuclide transfer are calculated according to the 'Allgemeine Berechnungsgrundlage ...' recommended by the German Federal Ministry of the Interior. A sample calculation is added. (orig.)

  1. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, the Intel Math Kernel Library (MKL), the GOTO numerical library, and the AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when the original, unmodified optimization options enclosed in the software are used. Nevertheless, if extensive compiler tuning options are used, the speedup can be further increased to about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is a further 2.6% higher than that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resulting performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.

  2. RODDRP - A FORTRAN program for use in control rod calibration by the rod drop method

    International Nuclear Information System (INIS)

    Wilson, W.E.

    1972-01-01

    The different methods of measuring reactivity which are applicable to control rod calibration are discussed. They include: 1) the positive period method, 2) the rod drop method, 3) the source-jerk method, 4) the rod oscillation method, and 5) the pulsed neutron method. The instrument setup used at WSU for rod drop measurements is presented. To speed up the analysis of the power fall-off trace, a FORTRAN IV program called RODDRP was written to simultaneously solve the in-hour equation and the relative neutron flux. The procedure for calculating the worth of the rod that produced the power trace is given. The reactivity for each time-relative flux point is obtained. Conclusions about the status of the equipment are made.

  3. Gravity gradient preprocessing at the GOCE HPF

    Science.gov (United States)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations is the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for applications in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
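
    The local-median/MAD screening described above can be sketched compactly. The Fortran fragment below is not the GOCE HPF processing software; the module and routine names, the window half-width and the rejection factor are all assumptions made for illustration.

      ! Sketch only: outlier screening of a gradient series by removing a
      ! local median (a crude high-pass filter for the 1/f error) and flagging
      ! residuals larger than factor * (MAD scaled to a standard deviation).
      module gradient_screening
        implicit none
      contains
        pure function median(v) result(m)
          real, intent(in) :: v(:)
          real :: m, w(size(v)), tmp
          integer :: i, j, n
          n = size(v)
          w = v
          do i = 1, n - 1                  ! simple selection sort; windows are small
             do j = i + 1, n
                if (w(j) < w(i)) then
                   tmp = w(i); w(i) = w(j); w(j) = tmp
                end if
             end do
          end do
          if (mod(n, 2) == 1) then
             m = w((n + 1)/2)
          else
             m = 0.5*(w(n/2) + w(n/2 + 1))
          end if
        end function median

        subroutine flag_outliers(g, half, factor, bad)
          real,    intent(in)  :: g(:)         ! gravity gradient time series
          integer, intent(in)  :: half         ! window half-width in samples
          real,    intent(in)  :: factor       ! e.g. 3.0
          logical, intent(out) :: bad(:)       ! same size as g
          real    :: resid(size(g)), mad
          integer :: i, lo, hi
          do i = 1, size(g)
             lo = max(1, i - half)
             hi = min(size(g), i + half)
             resid(i) = g(i) - median(g(lo:hi))     ! subtract the local median
          end do
          mad = median(abs(resid - median(resid)))
          bad = abs(resid) > factor*1.4826*mad      ! 1.4826 scales MAD to a sigma
        end subroutine flag_outliers
      end module gradient_screening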

  4. Distribution of intrahepatic T, NK and CD3(+)CD56(+)NKT cells alters after liver transplantation: Shift from innate to adaptive immunity?

    Science.gov (United States)

    Werner, Jens M; Lang, Corinna; Scherer, Marcus N; Farkas, Stefan A; Geissler, Edward K; Schlitt, Hans J; Hornung, Matthias

    2011-07-01

    The liver is an immunological organ containing a large number of T, NK and NKT cells, but little is known about intrahepatic immunity after LTx. Here, we investigated whether the distribution of T, NK and CD3(+)CD56(+)NKT cells is altered in transplanted livers under different circumstances. Core biopsies of transplanted livers were stained with antibodies against CD3 and CD56. Several cell populations including T (CD3(+)CD56(-)), NK (CD3(-)CD56(+)) and NKT cells (CD3(+)CD56(+)) were studied by fluorescence microscopy. Cell numbers were analyzed in relation to the time interval after LTx, immunosuppressive therapy and stage of acute graft rejection (measured with the rejection activity index: RAI) compared to tumor-free liver tissue from patients after liver resection due to metastatic disease as control. Recruitment of CD3(+)CD56(+)NKT cells revealed a significant decrease during high RAI scores in comparison to low and middle RAI scores (RAI 7-9: 0.03±0.01/HPF vs. RAI 4-6: 0.1±0.005/HPF). CD3(+)CD56(+)NKT cells were also lower during immunosuppressive therapy with tacrolimus (0.03±0.01/HPF) than with cyclosporine (0.1±0.003/HPF), cyclosporine/MMF (0.1±0.003/HPF) or sirolimus (0.1±0.01/HPF) treatment. Intrahepatic T cell numbers increased significantly 50 days after LTx compared to control liver tissue (4.5±0.2/HPF vs. 1.9±0.1/HPF). In contrast, NK cells (0.3±0.004/HPF) were significantly fewer in all biopsies after LTx compared to the control (0.7±0.04/HPF). These data indicate significant alterations in the hepatic recruitment of T, NK and CD3(+)CD56(+)NKT cells after LTx. The increase in T cells and the decrease in NK and CD3(+)CD56(+)NKT cells suggest a shift from innate to adaptive hepatic immunity in the liver graft. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. PUBG; purex solvent extraction process model. [IBM3033; CDC CYBER175; FORTRAN IV]

    Energy Technology Data Exchange (ETDEWEB)

    Geldard, J.F.; Beyerlein, A.L.

    PUBG is a chemical model of the Purex solvent extraction system, by which plutonium and uranium are recovered from spent nuclear fuel rods. The system comprises a number of mixer-settler banks. This discrete stage structure is the basis of the algorithms used in PUBG. The stages are connected to provide for countercurrent flow of the aqueous and organic phases. PUBG uses the common convention that has the aqueous phase enter at the lowest numbered stage and exit at the highest one; the organic phase flows oppositely. The volumes of the mixers are smaller than those of the settlers. The mixers generate a fine dispersion of one phase in the other. The high interfacial area is intended to provide for rapid mass transfer of the plutonium and uranium from one phase to the other. The separation of this dispersion back into the two phases occurs in the settlers. The species considered by PUBG are Hydrogen (1+), Plutonium (4+), Uranyl Oxide (2+), Plutonium (3+), Nitrate Anion, and reductant in the aqueous phase and Hydrogen (1+), Uranyl Oxide (2+), Plutonium (4+), and TBP (tri-n-butylphosphate) in the organic phase. The reductant used in the Purex process is either Uranium (4+) or HAN (hydroxylamine nitrate). IBM3033; CDC CYBER175; FORTRAN IV; OS/MVS or OS/MVT (IBM3033), NOS 1.3 (CDC CYBER175). The IBM3033 version requires 150K bytes of memory for execution; 62,000 (octal) words are required by the CDC CYBER175 version.

  6. H5Part A Portable High Performance Parallel Data Interface for Particle Simulations

    CERN Document Server

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    The largest parallel particle simulations, in six-dimensional phase space, generate vast amounts of data. It is also desirable to share data and data analysis tools such as ParViT (Particle Visualization Toolkit) among other groups who are working on particle-based accelerator simulations. We define a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5) as well as an API that simplifies the reading/writing of the data to the HDF5 file format. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on a variety of supercomputing systems and works equally well on laptop computers. The API is available for C, C++, and Fortran codes. The file format will enable disparate research groups with very different simulation implementations to share data transparently and share data analysis tools. For instance, the common file format will enable groups that depend on completely different simulation implementations to share c...

  7. METHUSELAH II - A Fortran program and nuclear data library for the physics assessment of liquid-moderated reactors

    International Nuclear Information System (INIS)

    Brinkworth, M.J.; Griffiths, J.A.

    1966-03-01

    METHUSELAH II is a Fortran program with a nuclear data library, used to calculate cell reactivity and burn-up in liquid-moderated reactors. It has been developed from METHUSELAH I by revising the nuclear data library, and by introducing into the program improvements relating to nuclear data, improvements in efficiency and accuracy, and additional facilities which include a neutron balance edit, specialised outputs, fuel cycling, and fuel costing. These developments are described and information is given on the coding and usage of versions of METHUSELAH II for the IBM 7030 (STRETCH), IBM 7090, and KDF9 computers. (author)

  8. DATA-ENTRY-3: some observations and pragmatics of a structured design. [In FORTRAN for PDP-11/10]

    Energy Technology Data Exchange (ETDEWEB)

    Sparks, D.

    1977-08-01

    The FORTRAN program DATA-ENTRY-3 was developed from the COBOL program DATA-ENTRY-1, which solves a large class of elementary data-capture, data-formatting, and data-editing problems of managerial accounting. Most of the work involved finding methods to make DATA-ENTRY-3, which is written for a small-machine environment (PDP-11/10 under the RT-11 operating system), logically equivalent to DATA-ENTRY-1, which is written for a large-machine environment (CDC 6600 under a time-sharing operating system). This report explains how structured programming helped, and briefly describes the function of each subroutine.

  9. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur... Consideration of all these factors is a precondition for a truly integrated practice and, as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost-effective to construct, energy...

  10. EDDY - a FORTRAN program to extract significant features from eddy-current test data - the basis of the CANSCAN system

    International Nuclear Information System (INIS)

    Jarvis, R.G.; Cranston, R.J.

    1982-09-01

    The FORTRAN program EDDY is designed to analyse data from eddy-current scans of steam generator tubes. It is written in modular form, for future development, and it uses signal-recognition techniques that the authors developed in the profilometry of irradiated fuel elements. During a scan, significant signals are detected and extracted for immediate attention or more detailed analysis later. A version of the program was used in the CANSCAN system 'for automated eddy-current in-service inspection of nuclear steam generator tubing'.

  11. MODLIB, library of Fortran modules for nuclear reaction codes

    International Nuclear Information System (INIS)

    Talou, Patrick

    2006-01-01

    1 - Description of program or function: ModLib is a library of Fortran (90-compatible) modules to be used in existing and future nuclear reaction codes. The development of the library is an international effort being undertaken under the auspices of the long-term Subgroup A of the OECD/NEA Working Party on Evaluation and Cooperation. The aim is to constitute a library of well-tested and well-documented pieces of code that can be used with confidence in all our coding efforts. This effort will undoubtedly help avoid the duplication of work, and most certainly facilitate the very important inter-comparisons between existing codes. 2 - Methods: - Width fluctuations [Talou, Chadwick]: calculates width fluctuation correction factors (output) for a set of transmission coefficients (input). Three methods are available: HRTW, Moldauer, and Verbaarschot (also called the GOE approach). So far, no distinction is made according to the type of the coefficients' channels (particle emission, gamma-ray emission, fission). - Gamma strength [Herman]: calculates gamma-ray transmission coefficients using a Giant Resonance formalism. - Level density [Koning]: computes the Gilbert-Cameron-Ignatyuk formalism for the continuum nuclear level density. - CHECKR, FIZCON, INTER, PSYCHE, STANEF [Dunford]: these modules are used in the MODLIB project but are not included in this package. They are available from the NEA Data Bank Computer Program Service under Package Ids: CHECKR (USCD1208), FIZCON (USCD1209), INTER (USCD1212), PSYCHE (USCD1216), STANEF (USCD1218)

  12. Electron - A fortran programme for the coupled channel calculation of nuclear electromagnetic (e,e') form factors and cross sections in the self-consistent random-phase approximation

    International Nuclear Information System (INIS)

    Cavinato, M.; Marangoni, M.; Saruis, A.M.

    1984-01-01

    A description is given of the Electron programme for the IBM 370/168 computer, written in the Fortran IV language. The programme calculates (e,e') cross-sections and longitudinal/transverse form factors for closed shell nuclei in the framework of a self-consistent RPA theory.

  13. Coincidence: Fortran code for calculation of (e, e'x) differential cross-sections, nuclear structure functions and polarization asymmetry in self-consistent random phase approximation with Skyrme interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cavinato, M.; Marangoni, M.; Saruis, A.M.

    1990-10-01

    This report describes the COINCIDENCE code written for the IBM 3090/300E computer in the Fortran 77 language. The output data of this code are the (e, e'x) threefold differential cross-sections, the nuclear structure functions, the polarization asymmetry and the angular correlation coefficients. In the real photon limit, the output data are the angular distributions for plane-polarized incident photons. The code reads from tape the transition matrix elements previously calculated by the continuum self-consistent RPA (random phase approximation) theory with Skyrme interactions. This code has been used to perform a numerical analysis of coincidence (e, e'x) reactions with polarized electrons on the 16O nucleus.

  14. Use of Wingz spreadsheet as an interface to total-system performance assessment

    International Nuclear Information System (INIS)

    Chambers, W.F.; Treadway, A.H.

    1992-01-01

    A commercial spreadsheet has been used as an interface to a set of simple models to simulate possible nominal flow and failure scenarios at the potential high-level nuclear waste repository at Yucca Mountain, Nevada. Individual models coded in FORTRAN are linked to the spreadsheet. Complementary cumulative probability distribution functions resulting from the models are plotted through scripts associated with the spreadsheet. All codes are maintained under a source code control system for quality assurance. The spreadsheet and the simple models can be run on workstations, PCs, and Macintoshes. The software system is designed so that the FORTRAN codes can be run on several machines if a network environment is available

  15. Program NICOLET to integrate energy loss in superconducting coils. [In FORTRAN for CDC-6600]

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, H.F.

    1978-08-01

    A voltage pickup coil, inductively coupled to the magnetic field of the superconducting coil under test, is connected so that its output may be compared with the terminal voltage of the coil under test. The integrated voltage difference is indicative of the resistive volt-seconds. When multiplied by the main coil current, the volt-seconds yield the loss. In other words, a hysteresis loop is obtained if the integrated voltage difference φ = ∫ΔV dt is plotted as a function of the coil current, i. First, time functions of the two signals φ(t) and i(t) are recorded on a dual-trace digital oscilloscope, and these signals are then recorded on magnetic tape. On a CDC-6600, the recorded information is decoded and plotted, and the hysteresis loops are integrated by the set of FORTRAN programs NICOLET described in this report. 4 figures.
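
    The two integrations described above are simple to sketch. The fragment below is not the NICOLET programs themselves; it assumes digitized records dv(k) (the voltage difference) and cur(k) (the main coil current) sampled at a constant interval dt, and uses the trapezoidal rule to form φ(t) and the hysteresis-loop area (the energy loss).

      ! Sketch only: phi(t) = integral of dV dt, loss = integral of i dphi.
      subroutine hysteresis_loss(nsamp, dt, dv, cur, phi, loss)
        implicit none
        integer, intent(in)  :: nsamp
        real,    intent(in)  :: dt, dv(nsamp), cur(nsamp)
        real,    intent(out) :: phi(nsamp), loss
        integer :: k
        phi(1) = 0.0
        do k = 2, nsamp                      ! integrated volt-seconds
           phi(k) = phi(k-1) + 0.5*(dv(k) + dv(k-1))*dt
        end do
        loss = 0.0
        do k = 2, nsamp                      ! area of the hysteresis loop
           loss = loss + 0.5*(cur(k) + cur(k-1))*(phi(k) - phi(k-1))
        end do
      end subroutine hysteresis_loss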

  16. Assessment of Toxicological Perturbations and Variants of Pancreatic Islet Development in the Zebrafish Model

    Directory of Open Access Journals (Sweden)

    Karilyn E. Sant

    2016-09-01

    The pancreatic islets, largely comprised of insulin-producing beta cells, play a critical role in endocrine signaling and glucose homeostasis. Because they have low levels of antioxidant defenses and a high perfusion rate, the endocrine islets may be a highly susceptible target tissue of chemical exposures. However, this endpoint, as well as the integrity of the surrounding exocrine pancreas, is often overlooked in studies of developmental toxicology. Disruption of development by toxicants can alter cell fate and migration, resulting in structural alterations that are difficult to detect in mammalian embryo systems, but that are easily observed in the zebrafish embryo model (Danio rerio). Using endogenously expressed fluorescent protein markers for developing zebrafish beta cells and exocrine pancreas tissue, we documented differences in islet area and incidence rates of islet morphological variants in zebrafish embryos between 48 and 96 h post fertilization (hpf), raised under control conditions commonly used in embryotoxicity assays. We identified critical windows for chemical exposures during which increased incidences of endocrine pancreas abnormalities were observed following exposure to cyclopamine (2–12 hpf), Mono-2-ethylhexyl phthalate (MEHP) (3–48 hpf), and Perfluorooctanesulfonic acid (PFOS) (3–48 hpf). Both islet area and length of the exocrine pancreas were sensitive to oxidative stress from exposure to the oxidant tert-butyl hydroperoxide during a highly proliferative critical window (72 hpf). Finally, pancreatic dysmorphogenesis following developmental exposures is discussed with respect to human disease.

  17. PATTERNS OF SEVERE AND COMPLICATED MALARIA IN CHILDREN

    African Journals Online (AJOL)

    GB

    nervous system causes (1-4). This condition is prevalent worldwide (1-5) largely because of the ubiquitous ... to examine at least 100 high power fields (HPF). The findings of 1-10 parasites per 100 HPF were recorded as +; 11-100 ... Rutter N, Smales ORC. Role of routine investigations in children presenting with their first.

  18. Java/JNI/C/Fortran makefile project for a Java plug-in and related Android app in Eclipse ADT bundle: A side-by-side comparison

    NARCIS (Netherlands)

    De Beer, R.; Van Ormondt, D.

    2015-01-01

    We have developed a Java/Fortran based application, called MonteCarlo, that enables users to carry out Monte Carlo studies in the field of in vivo MRS. The application is intended to be used as a tool for the jMRUI platform, the in vivo MRS software system of the TRANSACT European Union

  19. Cpl6: The New Extensible, High-Performance Parallel Coupler for the Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brain; Bettge,Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used, state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system, such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  20. State Estimation for Landing Maneuver on High Performance Aircraft

    Science.gov (United States)

    Suresh, P. S.; Sura, Niranjan K.; Shankar, K.

    2018-01-01

    State estimation methods are popular means for validating aerodynamic databases on aircraft flight maneuver performance characteristics. In this work, the state estimation method during the landing maneuver is explored, for the first of its kind, using an upper diagonal adaptive extended Kalman filter (UD-AEKF) with fuzzy-based adaptive tuning of the process noise matrix. The mathematical model for the symmetrical landing maneuver consists of non-linear flight mechanics equations representing the aircraft longitudinal dynamics. The UD-AEKF algorithm is implemented in the MATLAB environment, and the states with bias are taken as the initial conditions just prior to the flare. The measurement data are obtained from a non-linear 6 DOF pilot-in-the-loop simulation using FORTRAN. These simulated measurement data are additively mixed with process and measurement noise and used as input for the UD-AEKF. Then, the governing states that dictate the landing loads at the instant of touchdown are compared. The method is verified using flight data, wherein the vertical acceleration at the aircraft center of gravity (CG) is compared. Two possible outcomes of relying purely on the measured aircraft data are highlighted. It is observed that, with the implementation of the adaptive fuzzy logic based extended Kalman filter tuned for aircraft landing dynamics, the methodology improves the data quality of the states that are sourced from noisy measurements.
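
    For orientation only, the Fortran sketch below shows one predict/update cycle of a standard linear Kalman filter with a scalar measurement; it is not the UD-factorized adaptive EKF of the paper, but it makes explicit where the process noise matrix Q (the quantity being adaptively tuned above) enters. All model matrices and the routine name are assumptions.

      ! Sketch only: one linear Kalman filter step with a scalar measurement.
      subroutine kf_step(nx, f, q, h, r, z, x, p)
        implicit none
        integer, intent(in)             :: nx
        double precision, intent(in)    :: f(nx,nx), q(nx,nx)   ! dynamics, process noise
        double precision, intent(in)    :: h(nx), r, z          ! measurement map, variance, datum
        double precision, intent(inout) :: x(nx), p(nx,nx)      ! state and covariance
        double precision :: y, s, k(nx), hp(nx)
        integer :: i
        ! Predict
        x = matmul(f, x)
        p = matmul(f, matmul(p, transpose(f))) + q
        ! Update with the scalar measurement z = h.x + noise
        y = z - dot_product(h, x)                 ! innovation
        s = dot_product(h, matmul(p, h)) + r      ! innovation variance
        k = matmul(p, h)/s                        ! Kalman gain
        x = x + k*y
        hp = matmul(h, p)                         ! row vector h^T P
        do i = 1, nx                              ! P <- (I - k h^T) P
           p(i,:) = p(i,:) - k(i)*hp
        end do
      end subroutine kf_step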

  1. Non-induction of radioadaptive response in zebrafish embryos by neutrons

    International Nuclear Information System (INIS)

    Ng, Candy Y.P.; Kong, Eva Y.; Kobayashi, Alisa; Suya, Noriyoshi; Uchihori, Yukio; Cheng, Shuk Han; Konishi, Teruaki; Yu, Kwan Ngok

    2016-01-01

    In vivo neutron-induced radioadaptive response (RAR) was studied using zebrafish (Danio rerio) embryos. The Neutron exposure Accelerator System for Biological Effect Experiments (NASBEE) facility at the National Institute of Radiological Sciences (NIRS), Japan, was employed to provide 2-MeV neutrons. Neutron doses of 0.6, 1, 25, 50 and 100 mGy were chosen as priming doses. An X-ray dose of 2 Gy was chosen as the challenging dose. Zebrafish embryos were dechorionated at 4 h post fertilization (hpf), irradiated with a chosen neutron dose at 5 hpf and the X-ray dose at 10 hpf. The responses of embryos were assessed at 25 hpf through the number of apoptotic signals. None of the neutron doses studied could induce RAR. Non-induction of RAR in embryos having received 0.6- and 1-mGy neutron doses was attributed to neutron-induced hormesis, which maintained the number of damaged cells at below the threshold for RAR induction. On the other hand, non-induction of RAR in embryos having received 25-, 50- and 100-mGy neutron doses was explained by gamma-ray hormesis, which mitigated neutron-induced damages through triggering high-fidelity DNA repair and removal of aberrant cells through apoptosis. Separate experimental results were obtained to verify that high-energy photons could disable RAR. Specifically, 5- or 10-mGy X-rays disabled the RAR induced by a priming dose of 0.88 mGy of alpha particles delivered to 5-hpf zebrafish embryos against a challenging dose of 2 Gy of X-rays delivered to the embryos at 10 hpf

  2. Development of a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI)

    International Nuclear Information System (INIS)

    Huang Bormin; Mielikainen, Jarno; Oh, Hyunjong; Allen Huang, Hung-Lung

    2011-01-01

    speedup for 1 GPU and 1455x speedup for all 4 GPUs, both with respect to the original CPU-based single-threaded Fortran code with the -O2 compiler optimization. The significant 1455x speedup using a computer with four GPUs means that the proposed GPU-based high-performance forward model is able to compute one day's amount of 1,296,000 IASI spectra within nearly 10 min, whereas the original single-CPU-based version would impractically take more than 10 days. This model runs at over 80% of the theoretical memory bandwidth with asynchronous data transfer. A novel CPU-GPU pipeline implementation of the IASI radiative transfer model is proposed. The GPU-based high-performance IASI radiative transfer model is suitable for the assimilation of the IASI radiance observations into the operational numerical weather forecast model.

  3. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMV craft and the differences between them, and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: focuses on technology at the aero-marine interface; covers the full range of high performance marine vessel concepts; explains the historical development of various HPMVs; and discusses ferries, racing and pleasure craft, as well as utility and military missions. High Performance Marine Vessels is an ideal book for student...

  4. Measuring the volume of the hippocampus in healthy Chinese adults of the Han nationality on the high-resolution MRI

    International Nuclear Information System (INIS)

    Zhang Yong; Chen Nan; Wang Xing; Li Kuncheng; Zhuo Yan; Chen Lin

    2010-01-01

    Objective: To measure the volume of the hippocampal formation (HPF) in healthy Chinese Han adults and provide a database for research on a variety of diseases associated with alterations of hippocampal structure. Methods: This was a clinical multi-center study. One thousand healthy Chinese volunteers (age range 18 to 70) recruited from 15 hospitals were divided into 5 groups, i.e., Group A (age 18 to 30), B (age 31 to 40), C (age 41 to 50), D (age 51 to 60), and E (age 61 to 70). Each group contained 100 males and 100 females. All of the volunteers were scanned by MR using a T1-weighted three-dimensional magnetization prepared rapid acquisition gradient echo sequence. The margin of the HPF was outlined manually on each side. Using multiple linear regression, the relationships between hippocampal volume and sex, age, weight and height were analyzed. The independent two-sample t test was used to study the differences between males and females and between left and right. The differences in hippocampal volume among age groups were analyzed by ANOVA. Results: The volumes of the left and right hippocampus were (4752±659) and (5032±660) mm³, respectively. The volume of the HPF was significantly correlated with gender and age, but not with height and weight (left and right r=0.283, 0.311; F=30.127, 37.050). In women the left and right volumes were (4647±624) and (4904±630) mm³. The right hippocampal volume was larger than the left (t=7.030, 6.696). Across the five age groups the volumes of the right hippocampus were (5340±647), (5276±582), (5264±620), (5133±661) and (4894±699) mm³, respectively, and the differences among age groups were statistically significant (left and right F=5.737, 7.607). There was no significant difference in hippocampal volume among age groups in women (P>0.05). Conclusions: With high-resolution MRI, the volume of the HPF can be accurately measured, providing basic data for research on the hippocampus.

  5. LIAR -- A computer program for the modeling and simulation of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straight-forward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm

  6. A FORTRAN program for numerical solution of the Altarelli-Parisi equations by the Laguerre method

    International Nuclear Information System (INIS)

    Kumano, S.; Londergan, J.T.

    1992-01-01

    We review the Laguerre method for solving the Altarelli-Parisi equations. The Laguerre method allows one to expand quark/parton distributions and splitting functions in orthonormal polynomials. The desired quark distributions are themselves expanded in terms of evolution operators, and we derive the integrodifferential equations satisfied by the evolution operators. We give the relevant equations for both flavor nonsinglet and singlet distributions, and for both spin-independent and spin-dependent distributions. We discuss the stability and accuracy of the results using this method. For intermediate values of Bjorken x (0.03 < x < 0.7), one can obtain accurate results with a modest number of Laguerre polynomials (N≅20); we also discuss the requirements for convergence in the regions of large or small x. A FORTRAN program is provided which implements the Laguerre method; test results are given for both the spin-independent and spin-dependent cases. (orig.)
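
    The basic building block of any Laguerre-expansion method is the evaluation of the polynomials themselves. The short Fortran sketch below, which is not taken from the published program, evaluates L_0(x)..L_nmax(x) with the standard three-term recurrence (n+1) L_{n+1}(x) = (2n+1-x) L_n(x) - n L_{n-1}(x); the routine name is hypothetical.

      ! Sketch only: Laguerre polynomials at a point x via the recurrence.
      subroutine laguerre(nmax, x, l)
        implicit none
        integer, intent(in)           :: nmax
        double precision, intent(in)  :: x
        double precision, intent(out) :: l(0:nmax)
        integer :: n
        l(0) = 1.0d0
        if (nmax >= 1) l(1) = 1.0d0 - x
        do n = 1, nmax - 1
           l(n+1) = ((2.0d0*n + 1.0d0 - x)*l(n) - dble(n)*l(n-1))/dble(n+1)
        end do
      end subroutine laguerre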

  7. PetClaw: Parallelization and Performance Optimization of a Python-Based Nonlinear Wave Propagation Solver Using PETSc

    KAUST Repository

    Alghamdi, Amal Mohammed

    2012-04-01

    Clawpack, a conservation laws package implemented in Fortran, and its Python-based version, PyClaw, are existing tools providing nonlinear wave propagation solvers that use state of the art finite volume methods. Simulations using those tools can have extensive computational requirements to provide accurate results. Therefore, a number of tools, such as BearClaw and MPIClaw, have been developed based on Clawpack to achieve significant speedup by exploiting parallel architectures. However, none of them has been shown to scale on a large number of cores. Furthermore, these tools, implemented in Fortran, achieve parallelization by inserting parallelization logic and MPI standard routines throughout the serial code in a non modular manner. Our contribution in this thesis research is three-fold. First, we demonstrate an advantageous use case of Python in implementing easy-to-use modular extensible scalable scientific software tools by developing an implementation of a parallelization framework, PetClaw, for PyClaw using the well-known Portable Extensible Toolkit for Scientific Computation, PETSc, through its Python wrapper petsc4py. Second, we demonstrate the possibility of getting acceptable Python code performance when compared to Fortran performance after introducing a number of serial optimizations to the Python code including integrating Clawpack Fortran kernels into PyClaw for low-level computationally intensive parts of the code. As a result of those optimizations, the Python overhead in PetClaw for a shallow water application is only 12 percent when compared to the corresponding Fortran Clawpack application. Third, we provide a demonstration of PetClaw scalability on up to the entirety of Shaheen; a 16-rack Blue Gene/P IBM supercomputer that comprises 65,536 cores and located at King Abdullah University of Science and Technology (KAUST). The PetClaw solver achieved above 0.98 weak scaling efficiency for an Euler application on the whole machine excluding the

  8. Local adaptive tone mapping for video enhancement

    Science.gov (United States)

    Lachine, Vladimir; Dai, Min

    2015-03-01

    As new technologies like High Dynamic Range cameras, AMOLED and high resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of the Tone Mapping procedure is limited to pixel value-to-value mapping, and the performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass-filtered (LPF) and high-pass-filtered (HPF) bands. A Tone Mapping (TM) function is applied to the LPF band to improve the global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. The colorfulness of the original image may be preserved or enhanced by correcting the chroma components by means of a saturation function. Localized content adaptation is further improved by dividing an image into a set of non-overlapping regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. A corresponding hardware circuit may be integrated in a camera, video or display pipeline with a minimal hardware budget.
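
    As a rough sketch of the spatial-frequency split described above (an illustrative reimplementation, not the authors' hardware pipeline), the Python fragment below separates intensity into LPF and HPF bands with a Gaussian filter, applies a global tone curve to the LPF band, cores the HPF band to limit noise boosting, and recombines them; the filter width, gamma curve, and coring threshold are arbitrary example parameters.

        # Illustrative LPF/HPF tone-mapping split (not the paper's exact method).
        # intensity: 2-D float array in [0, 1]; all parameter values are examples.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def tone_map(intensity, sigma=8.0, gamma=0.6, coring=0.02):
            lpf = gaussian_filter(intensity, sigma=sigma)   # low-pass (base) band
            hpf = intensity - lpf                           # high-pass (detail) band

            # Global tone curve on the base band to lift brightness/global contrast.
            base = np.power(np.clip(lpf, 0.0, 1.0), gamma)

            # Simple coring: suppress tiny detail amplitudes to avoid boosting noise,
            # keep larger ones to preserve local contrast.
            detail = np.where(np.abs(hpf) < coring, 0.0, hpf)

            return np.clip(base + detail, 0.0, 1.0)

        if __name__ == "__main__":
            img = np.random.rand(256, 256) * 0.3            # dark synthetic test image
            print(img.mean(), tone_map(img).mean())         # output should be brighter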

  9. Highly Palatable Food during Adolescence Improves Anxiety-Like Behaviors and Hypothalamic-Pituitary-Adrenal Axis Dysfunction in Rats that Experienced Neonatal Maternal Separation

    Directory of Open Access Journals (Sweden)

    Jong-Ho Lee

    2014-06-01

    Full Text Available Background: This study was conducted to examine the effects of ad libitum consumption of highly palatable food (HPF) during adolescence on the adverse behavioral outcome of neonatal maternal separation. Methods: Male Sprague-Dawley pups were separated from the dam for 3 hours daily during the first 2 weeks after birth (maternal separation, MS) or left undisturbed (nonhandled, NH). Half of the MS pups received free access to chocolate cookies in addition to ad libitum chow from postnatal day 28 (MS+HPF). Pups were subjected to behavioral tests during young adulthood. The plasma corticosterone response to stress challenge was analyzed by radioimmunoassay. Results: Daily caloric intake and body weight gain did not differ among the experimental groups. Ambulatory activities were decreased, while defecation activity and rostral grooming were increased, in MS controls (fed with chow only) compared with NH rats. MS controls spent less time in open arms, and more time in closed arms, during the elevated plus maze test than NH rats. Immobility duration during the forced swim test was increased in MS controls compared with NH rats. Cookie access normalized the behavioral scores of ambulatory and defecation activities and grooming, but not the scores during the elevated plus maze and swim tests, in MS rats. The stress-induced corticosterone increase was blunted in MS rats fed with chow only, and cookie access normalized it. Conclusion: Prolonged access to HPF during adolescence and youth partly improves anxiety-related, but not depressive, symptoms in rats that experienced neonatal maternal separation, possibly in relation with improved function of the hypothalamic-pituitary-adrenal (HPA) axis.

  10. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Science.gov (United States)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an

  11. Soft X-ray synchrotron radiation spectroscopy study of molecule-based nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Lee, E. S.; Kim, D. H.; Kang, J. S.; Kim, P. [The Catholic University of Korea, Bucheon (Korea, Republic of); Kim, K. H. [Korea University, Jochiwon (Korea, Republic of); Baik, J. Y.; Shin, H. J. [POSTECH, Pohang (Korea, Republic of)

    2014-11-15

    The electronic structures of molecule-based nanoparticles, such as biomineralized Helicobacter pylori ferritin (Hpf), Heme, and RbCo[Fe(CN){sub 6}]H{sub 2}O (RbCoFe) Prussian blue analogue, have been investigated by employing photoemission spectroscopy and soft X-ray absorption spectroscopy. Fe ions are found to be nearly trivalent in Hpf and Heme nanoparticles, which provides evidence that the amount of magnetite (Fe{sub 3}O{sub 4}) should be negligible in the Hpf core and that the biomineralization of Fe oxides in the high-Fe-bound-state Hpf core arises from a hematite-like formation. On the other hand, Fe ions are nearly divalent and Co ions are Co{sup 2+}-Co{sup 3+} mixed-valent in RbCoFe. Therefore this finding suggests that the mechanism of the photo-induced transition in RbCoFe Prussian blue analogue is not a simple spin-state transition of Fe{sup 2+}-Co{sup 3+} → Fe{sup 3+}-Co{sup 2+}. It is likely that Co{sup 2+} ions have the high-spin configuration while Fe{sup 2+} ions have the low-spin configuration.

  12. An Evaluation of Java for Numerical Computing

    Directory of Open Access Journals (Sweden)

    Brian Blount

    1999-01-01

    Full Text Available This paper describes the design and implementation of high performance numerical software in Java. Our primary goals are to characterize the performance of object‐oriented numerical software written in Java and to investigate whether Java is a suitable language for such endeavors. We have implemented JLAPACK, a subset of the LAPACK library in Java. LAPACK is a high‐performance Fortran 77 library used to solve common linear algebra problems. JLAPACK is an object‐oriented library, using encapsulation, inheritance, and exception handling. It performs within a factor of four of the optimized Fortran version for certain platforms and test cases. When used with the native BLAS library, JLAPACK performs comparably with the Fortran version using the native BLAS library. We conclude that high‐performance numerical software could be written in Java if a handful of concerns about language features and compilation strategies are adequately addressed.

  13. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  14. A comparison of data management systems used in high energy physics

    International Nuclear Information System (INIS)

    Hansl-Kozanecka, T.

    1992-04-01

    Data-management systems for defining data and manipulating them with FORTRAN programs have become increasingly important. We compare three systems that were developed within the high-energy physics community: BOS, JAZELLE and ZEBRA. (orig.)

  15. Performance and Feasibility Analysis of a Wind Turbine Power System for Use on Mars

    Science.gov (United States)

    Lichter, Matthew D.; Viterna, Larry

    1999-01-01

    A wind turbine power system for future missions to the Martian surface was studied for performance and feasibility. A C++ program was developed from existing FORTRAN code to analyze the power capabilities of wind turbines under different environments and design philosophies. Power output, efficiency, torque, thrust, and other performance criteria could be computed given design geometries, atmospheric conditions, and airfoil behavior. After reviewing performance of such a wind turbine, a conceptual system design was modeled to evaluate feasibility. More analysis code was developed to study and optimize the overall structural design. Findings of this preliminary study show that turbine power output on Mars could be as high as several hundred kilowatts. The optimized conceptual design examined here would have a power output of 104 kW, total mass of 1910 kg, and specific power of 54.6 W/kg.

  16. A fortran program for elastic scattering of deuterons with an optical model containing tensorial potentials; Programme fortran pour la diffusion elastique de deutons avec un modele optique contenant des termes tensoriels

    Energy Technology Data Exchange (ETDEWEB)

    Raynal, J. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]

    1963-07-01

    The optical model has been applied with success to the elastic scattering of particles of spin 0 and 1/2 and to a lesser degree to that of deuterons. For particles of spin 1/2, an LS coupling term is ordinarily used; this term is necessary to obtain a polarization; for deuterons, this coupling has already been introduced, but the possible forms of potentials are more numerous (in this case, scalar products of a second rank spin tensor with a tensor of the same rank in space or momentum can occur). These terms, which may be necessary, are primarily important for the tensor polarization. This problem is of particular interest at Saclay since a beam of polarized deuterons has become available. The FORTRAN program SPM 037 permits the study of the effect of tensorial potentials constructed from the distance of the deuteron from the target and its angular momentum with respect to it. This report should make possible the use and even the modification of the program. It consists of: a description of the problem and of the notation employed, a presentation of the methods adopted, an indication of the necessary data and how they should be introduced, and finally tables of symbols which are in equivalence or common statements: these tables must be considered when making any modification. (author) [French] Le modele optique a ete applique avec succes a la diffusion elastique des particules de spin nul et 1/2 et dans une moindre mesure a celle des deutons. Pour les particules de spin 1/2, on utilise habituellement un couplage LS, necessaire pour calculer la polarisation; pour les deutons, ce couplage a deja ete introduit, mais les formes de potentiel possibles sont plus nombreuses (intervention de produits scalaires d'un tenseur d'ordre 2 de spin avec un tenseur du meme ordre d'espace ou d'impulsion) et celles qui peuvent etre eventuellement necessaires ont une importance capitale pour la polarisation tensorielle. Ce probleme revet a Saclay un interet particulier depuis la mise

  18. Increased depression-like behaviors with dysfunctions in the stress axis and the reward center by free access to highly palatable food.

    Science.gov (United States)

    Park, E; Kim, J Y; Lee, J-H; Jahng, J W

    2014-03-14

    This study was conducted to examine the behavioral consequences of unlimited consumption of highly palatable food (HPF) and investigate its underlying neural mechanisms. Male Sprague-Dawley rats had free access to chocolate cookies rich in fat (HPF) in addition to ad libitum chow, while the control group received chow only. Rats were subjected to behavioral tests during the 2nd week of food condition; i.e., the ambulatory activity test on the 8th, the elevated plus maze test (EPM) on the 10th and the forced swim test (FST) on the 14th day of food condition. After 8 days of food condition, another group of rats was placed in a restraint box and tail blood was collected at the 0-, 20-, 60-, and 120-min time points during the 2-h restraint period and used for the plasma corticosterone assay. At the end of the restraint session, rats were sacrificed and the tissue sections of the nucleus accumbens (NAc) were processed for c-Fos immunohistochemistry. Ambulatory activities and the scores of EPM were not significantly affected by unlimited cookie consumption. However, immobility duration during FST was increased, and swimming decreased, in the rats that received free cookie access compared with control rats. The stress-induced corticosterone increase was exaggerated in cookie-fed rats, while the stress-induced c-Fos expression in the NAc was blunted, compared to control rats. Results suggest that free access to HPF may lead to the development of depression-like behaviors in rats, likely in relation with dysfunctions in the hypothalamic-pituitary-adrenal axis and the reward center. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Beam dynamics calculations and particle tracking using massively parallel processors

    International Nuclear Information System (INIS)

    Ryne, R.D.; Habib, S.

    1995-01-01

    During the past decade massively parallel processors (MPPs) have slowly gained acceptance within the scientific community. At present these machines typically contain a few hundred to one thousand off-the-shelf microprocessors and a total memory of up to 32 GBytes. The potential performance of these machines is illustrated by the fact that a month-long job on a high-end workstation might require only a few hours on an MPP. The acceptance of MPPs has been slow for a variety of reasons. For example, some algorithms are not easily parallelizable. Also, in the past these machines were difficult to program. But in recent years the development of Fortran-like languages such as CM Fortran and High Performance Fortran has made MPPs much easier to use. In the following we will describe how MPPs can be used for beam dynamics calculations and long-term particle tracking.

  20. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid (MPI + OpenMP)). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  1. Microscopy of Stained Urethral Smear in Male Urethritis; Which Cutoff Should be Used?

    Science.gov (United States)

    Moi, Harald; Hartgill, Usha; Skullerud, Kristin Helene; Reponen, Elina J; Syvertsen, Line; Moghaddam, Amir

    2017-03-01

    The microscopic diagnosis of male urethritis was recently questioned by Rietmeijer and Mettenbrink, who lowered the diagnostic criterion to ≥2 polymorphonuclear leucocytes (PMNL) per high power field (HPF), a threshold adopted by the Centers for Disease Control and Prevention in their 2015 STD Treatment Guidelines. The European Non-Gonococcal Urethritis Guideline advocates a limit of ≥5 PMNL/HPF. Our aim was to determine if syndromic treatment of urethritis should be considered with a cutoff value of ≥2 PMNL/HPF in the urethral smear. The design was a cross-sectional study investigating the presence and degree of urethritis relative to specific infections in men attending an STI clinic as drop-in patients. The material included 2 cohorts: a retrospective study of 13,295 men and a prospective controlled study including 356 men. We observed a mean chlamydia prevalence of 2.3% in the 0-9 stratum, and a 12-fold higher prevalence (27.3%) in the strata above 9. Of the chlamydia cases, 89.8% were diagnosed in strata above 9. For Mycoplasma genitalium, the prevalence was 1.4% in the 0-9 stratum and 11.2% in the stratum ≥10, and 83.6% were diagnosed in strata above 9. For gonorrhea, a significant increase in the prevalence occurred between the 0-30 strata and the >30 strata, from 0.2% to 20.7%. The results of the prospective study were similar. Our data do not support lowering the cutoff to ≥2 PMNL/HPF. However, a standardization of urethral smear microscopy seems to be impossible. The cutoff value should discriminate between low and high prevalence of chlamydia, mycoplasma, and gonorrhea to include as many as possible with a specific infection in syndromic treatment, without overtreating those with few PMNL/HPF and a high possibility of having nonspecific or no urethritis.

  2. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  3. High performance work practices, innovation and performance

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from......, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices of innovation- and performance...

  4. Applications Performance on NAS Intel Paragon XP/S - 15#

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)

    1994-01-01

    The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February 1993. The i860 XP microprocessor with an integrated floating point unit and operating in dual-instruction mode gives a peak performance of 75 million floating point operations per second (MFLOPS) for 64 bit floating point arithmetic. It is used in the Paragon XP/S-15 which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we will report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single node one-dimensional FFT, a distributed two-dimensional FFT and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compare it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: a. Impact of the operating system: Intel currently uses as a default the operating system OSF/1 AD from the Open Software Foundation. The paging of the Open Software Foundation (OSF) server at 22 MB to make more memory available for the application degrades the performance. We found that when the limit of 26 MB per node out of the 32 MB available is reached, the application is paged out of main memory using virtual memory. When the application starts paging, the performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. b. Impact of data cache on the i860/XP: We measured the performance of the BLAS both assembly coded and Fortran
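
    As a rough illustration of the kind of kernel measurement described above (not the original Paragon benchmark code), the Python sketch below times a BLAS 3-style dense matrix-matrix product and reports the achieved MFLOPS; the matrix size and repetition count are arbitrary example values.

        # Hedged sketch: time a dense matrix-matrix product (a BLAS 3 kernel) and report
        # the achieved MFLOPS, in the spirit of the kernel measurements described above.
        import time
        import numpy as np

        def matmul_mflops(n=512, repeats=5):
            a = np.random.rand(n, n)
            b = np.random.rand(n, n)
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                np.dot(a, b)                      # dispatches to the underlying BLAS
                best = min(best, time.perf_counter() - t0)
            flops = 2.0 * n ** 3                  # multiplies and adds in an n^3 product
            return flops / best / 1.0e6

        if __name__ == "__main__":
            print(f"~{matmul_mflops():.0f} MFLOPS for a 512x512 double-precision matmul")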

  5. Chemical-Free Technique to Study the Ultrastructure of Primary Cilium.

    Science.gov (United States)

    Mohieldin, Ashraf M; AbouAlaiwi, Wissam A; Gao, Min; Nauli, Surya M

    2015-11-02

    A primary cilium is a hair-like structure with a width of approximately 200 nm. Over the past few decades, the main challenge in the study of the ultrastructure of cilia has been the high sensitivity of cilia to chemical fixation, which is required for many imaging techniques. In this report, we demonstrate a combined high-pressure freezing (HPF) and freeze-fracture transmission electron microscopy (FFTEM) technique to examine the ultrastructure of a cilium. Our objective is to develop an optimal high-resolution imaging approach that preserves cilia structures in their best natural form without alteration of cilia morphology by chemical fixation interference. Our results showed that a cilium has a swelling-like structure (termed bulb), which was previously considered a fixation artifact. The intramembrane particles observed via HPF/FFTEM indicated the presence of integral membrane proteins and soluble matrix proteins along the ciliary bulb, which is part of an integral structure within the ciliary membrane. We propose that HPF/FFTEM is an important and more suitable chemical-free method to study the ultrastructure of primary cilia.

  6. Effects of high pressure freezing (HPF) on denaturation of natural actomyosin extracted from prawn (Metapenaeus ensis).

    Science.gov (United States)

    Cheng, Lina; Sun, Da-Wen; Zhu, Zhiwei; Zhang, Zhihang

    2017-08-15

    Effects of protein denaturation caused by high pressure freezing, involving Pressure-Factors (pressure, time) and Freezing-Factors (temperature, phase transition, recrystallization, ice crystal types), are complicated. In the current study, the conformational and functional changes of natural actomyosin (NAM) were examined under pressure-assisted freezing (PAF; 100, 150, 300, 400, and 500 MPa, -20°C/25 min), pressure-shift freezing (PSF; 200 MPa, -20°C/25 min), and immersion freezing (0.1 MPa, -20°C/5 min) after pressure was released to 0.1 MPa, as compared to the normal immersion freezing process (IF; 0.1 MPa, -20°C/30 min). Results indicated that PSF (200 MPa, -20°C/30 min) could reduce the denaturation of frozen NAM and that a pressure of 300 MPa was the critical point to induce such denaturation. During the periods B→D in PSF or B→C→D in PAF, the generation and growth of ice crystals played an important role in changing the secondary and tertiary structure of the treated NAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. RavenDB high performance

    CERN Document Server

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial. This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  8. MG132, a proteasome inhibitor, induces human pulmonary fibroblast cell death via increasing ROS levels and GSH depletion.

    Science.gov (United States)

    Park, Woo Hyun; Kim, Suhn Hee

    2012-04-01

    MG132 as a proteasome inhibitor can induce apoptotic cell death in lung cancer cells. However, little is known about the toxicological cellular effects of MG132 on normal primary lung cells. Here, we investigated the effects of N-acetyl cysteine (NAC) and vitamin C (well known antioxidants) or L-buthionine sulfoximine (BSO; an inhibitor of GSH synthesis) on MG132-treated human pulmonary fibroblast (HPF) cells in relation to cell death, reactive oxygen species (ROS) and glutathione (GSH). MG132 induced growth inhibition and death in HPF cells, accompanied by the loss of mitochondrial membrane potential (MMP; ∆ψm). MG132 increased ROS levels and GSH-depleted cell numbers in HPF cells. Both antioxidants, NAC and vitamin C, prevented growth inhibition, death and MMP (∆ψm) loss in MG132-treated HPF cells and also attenuated ROS levels in these cells. BSO showed a strong increase in ROS levels in MG132-treated HPF cells and slightly enhanced the growth inhibition, cell death, MMP (∆ψm) loss and GSH depletion. In addition, NAC decreased anonymous ubiquitinated protein levels in MG132-treated HPF cells. Furthermore, superoxide dismutase (SOD) 2, catalase (CTX) and GSH peroxidase (GPX) siRNAs enhanced HPF cell death by MG132, which was not correlated with ROS and GSH level changes. In conclusion, MG132 induced the growth inhibition and death of HPF cells, which were accompanied by increasing ROS levels and GSH depletion. Both NAC and vitamin C attenuated HPF cell death by MG132, whereas BSO slightly enhanced the death.

  9. HTCAP: a FORTRAN IV program for calculating coated-particle operating temperatures in HFIR target irradiation experiments

    International Nuclear Information System (INIS)

    Kania, M.J.

    1976-05-01

    A description is presented of HTCAP, a computer code that calculates in-reactor operating temperatures of loose coated ThO2 particles in the HFIR target series of irradiation tests. Three computational models are employed to determine the following: (1) fission heat generation rates, (2) capsule heat transfer analysis, and (3) maximum particle surface temperature within the design of an HT capsule. Maximum particle operating temperatures are calculated at daily intervals during each irradiation cycle. The application of HTCAP to sleeve CP-62 of HT-15 is discussed, and the results are compared with those obtained in an earlier thermal analysis of the same capsule. Agreement is generally within ±5 percent, while the computational time is decreased by more than an order of magnitude. A complete FORTRAN listing and a summary of the required input data are presented in appendices. Included is a listing of the input data and a tabular output from the thermal analysis of sleeve CP-62 of HT-15.

  10. Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations

    Science.gov (United States)

    2017-05-08

    User-defined functors (like Thrust or Intel TBB); backends for Nvidia GPU, Intel Xeon, Xeon Phi, IBM Power8, and others. The "View" data structure provides optimal... Architecture of my Kokkos framework: designed for minimally-invasive operation alongside a large Fortran code; everything is controlled from Fortran through a... Controls Kokkos initialization/finalization: void initialize(…); void finalize(…); TVProperties* gettvproperties();

  11. Users manual for an expert system (HSPEXP) for calibration of the Hydrological Simulation Program--Fortran

    Science.gov (United States)

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

    Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted and the adjustments led to improved calibration.

  12. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware–in the form of Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  13. IgG4-related disease-experience of 100 consecutive cases from a specialist centre.

    Science.gov (United States)

    Bateman, Adrian C; Culver, Emma L

    2017-04-01

    To describe the features of 100 consecutive cases referred to a single UK institution in which a diagnosis of IgG4-related disease (IgG4-RD) was under consideration. The histological features were reviewed by a single histopathologist, and cases were categorized according to the 2012 Boston criteria: Category 1-histologically highly suggestive of IgG4-RD; Category 2-probable histopathological features of IgG4-RD; and Category 3-insufficient histopathological evidence of IgG4-RD. A 'global assessment' was performed with the available clinical information: Assessment group 1-'definite/very likely IgG4-RD'; Assessment group 2-'possible IgG4-RD'; Assessment group 3-'not IgG4-RD'; and Assessment group 4-insufficient information. The mean IgG4+ plasma cell count and IgG4+/IgG+ ratio were highest in Category 1 [134/high-power field (HPF); 57%] and Assessment group 1 (113/HPF; 52%), and lowest in Category 3 (11/HPF; 18%) and Assessment group 3 (43/HPF; 31%) (Category comparison of IgG4+ count and ratio, both P IgG4+ count, P IgG4-RD diagnosis was rare in Category 1 (7%) but common in Category 2 (60%) and Category 3 (47%). Stromal reactions to neoplasia and chronic oral ulceration were simulants of IgG4-RD. The Boston criteria are linked to the likelihood of IgG4-RD. Other conditions may show some histological features of IgG4-RD. The likelihood of IgG4-RD is much greater when the histological features reach the threshold for Category 1 than when they reach the thresholds for Categories 2 and 3. Despite the utility of the Boston criteria, this study highlights the crucial importance of careful clinicopathological correlation when a diagnosis of IgG4-RD is under consideration. © 2016 John Wiley & Sons Ltd.

  14. Structure of Vibrio cholerae ribosome hibernation promoting factor

    International Nuclear Information System (INIS)

    De Bari, Heather; Berry, Edward A.

    2013-01-01

    The X-ray crystal structure of ribosome hibernation promoting factor (HPF) from Vibrio cholerae is presented at 2.0 Å resolution. The crystal was phased by two-wavelength MAD using cocrystallized cobalt. The asymmetric unit contained two molecules of HPF linked by four Co atoms. The metal-binding sites observed in the crystal are probably not related to biological function. The structure of HPF has a typical β–α–β–β–β–α fold consistent with previous structures of YfiA and HPF from Escherichia coli. Comparison of the new structure with that of HPF from E. coli bound to the Thermus thermophilus ribosome [Polikanov et al. (2012), Science, 336, 915–918] shows that no significant structural changes are induced in HPF by binding

  15. High-Performance Networking

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into what is known today as "standard computer network communication". It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. If necessary for a good understanding, some sidesteps will be included to explain important protocols, as well as some necessary details of the relevant Wide Area Network (WAN) standards, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  16. Adaptive control of energy storage systems for power smoothing applications

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Guerrero, Josep M.

    2017-01-01

    Energy storage systems (ESSs) are desired and widely applied for power smoothing, especially in systems with renewable generation and pulsed loads. A high-pass filter (HPF) is commonly applied in those applications: the HPF extracts the high-frequency fluctuating power and uses that as the power reference for the ESS. The cut-off frequency, as the critical parameter, decides the power/energy compensated by the ESS. Practically, the state-of-charge (SoC) of the ESS has to be limited for safety and life-cycle considerations. In this paper an adaptive cut-off frequency design is proposed...
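
    As a minimal illustration of the HPF-based power-reference idea described above (not the adaptive, SoC-aware design proposed in the paper), the Python sketch below applies a discrete first-order high-pass filter to a sampled power signal and uses the filtered component as the ESS power reference; the cut-off frequency, sampling time, and test signal are example values.

        # Hedged sketch: first-order discrete HPF extracting the fluctuating power
        # component as the ESS reference. Illustrative only; the paper's contribution
        # is an adaptive cut-off frequency, which is not shown here.
        import numpy as np

        def hpf_power_reference(p, fc=0.05, ts=0.1):
            """p: sampled power [W]; fc: cut-off frequency [Hz]; ts: sample time [s]."""
            tau = 1.0 / (2.0 * np.pi * fc)
            a = tau / (tau + ts)                 # discretization of s*tau / (1 + s*tau)
            p_ess = np.zeros_like(p)
            for k in range(1, len(p)):
                p_ess[k] = a * (p_ess[k - 1] + p[k] - p[k - 1])
            return p_ess                         # ESS absorbs/injects this fluctuating part

        if __name__ == "__main__":
            t = np.arange(0.0, 600.0, 0.1)
            p = 1e5 + 2e4 * np.sin(2 * np.pi * t / 5.0)    # steady base + 5 s fluctuation
            p_ess = hpf_power_reference(p, fc=0.05, ts=0.1)
            p_grid = p - p_ess                             # smoothed power seen by the grid
            print(p.std(), p_grid.std())                   # grid-side variation is reduced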

  17. High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research carried out in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to point out the advantages and disadvantages when a particular concrete type is used. Two concrete recipes are presented: one for the concrete used in rigid road pavements and another for self-compacting concrete.

  18. HPTA: High-Performance Text Analytics

    OpenAIRE

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...

  19. Pressurized planar electrochromatography, high-performance thin-layer chromatography and high-performance liquid chromatography--comparison of performance.

    Science.gov (United States)

    Płocharz, Paweł; Klimek-Turek, Anna; Dzido, Tadeusz H

    2010-07-16

    Kinetic performance, measured by plate height, of High-Performance Thin-Layer Chromatography (HPTLC), High-Performance Liquid Chromatography (HPLC) and Pressurized Planar Electrochromatography (PPEC) was compared for systems with the adsorbent of the HPTLC RP18W plate from Merck as the stationary phase and a mobile phase composed of acetonitrile and buffer solution. The HPLC column was packed with the adsorbent, which was scraped from the chromatographic plate mentioned. An additional HPLC column was also packed with a C18-type, silica-based adsorbent of 5 µm particle diameter (LiChrosorb RP-18 from Merck). The dependence of plate height on the flow velocity of the mobile phase for the HPLC and PPEC separating systems, and on the migration distance of the mobile phase for the TLC system, was presented using a test solute (prednisolone succinate). The highest performance among the systems investigated was obtained for the PPEC system. The separation efficiency of the systems investigated in the paper was additionally confirmed by the separation of a test mixture composed of six hormones. 2010 Elsevier B.V. All rights reserved.

  20. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. TSPP - A Collection of FORTRAN Programs for Processing and Manipulating Time Series

    Science.gov (United States)

    Boore, David M.

    2008-01-01

    This report lists a number of FORTRAN programs that I have developed over the years for processing and manipulating strong-motion accelerograms. The collection is titled TSPP, which stands for Time Series Processing Programs. I have excluded 'strong-motion accelerograms' from the title, however, as the boundary between 'strong' and 'weak' motion has become blurred with the advent of broadband sensors and high-dynamic range dataloggers, and many of the programs can be used with any evenly spaced time series, not just acceleration time series. This version of the report is relatively brief, consisting primarily of an annotated list of the programs, with two examples of processing, and a few comments on usage. I do not include a parameter-by-parameter guide to the programs. Future versions might include more examples of processing, illustrating the various parameter choices in the programs. Although these programs have been used by the U.S. Geological Survey, no warranty, expressed or implied, is made by the USGS as to the accuracy or functioning of the programs and related program material, nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. The programs are distributed on an 'as is' basis, with no warranty of support from me. These programs were written for my use and are being publically distributed in the hope that others might find them as useful as I have. I would, however, appreciate being informed about bugs, and I always welcome suggestions for improvements to the codes. Please note that I have made little effort to optimize the coding of the programs or to include a user-friendly interface (many of the programs in this collection have been included in the software usdp (Utility Software for Data Processing), being developed by Akkar et al. (personal communication, 2008); usdp includes a graphical user interface). Speed of execution has been sacrificed in favor of a code that
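
    The operations TSPP covers on evenly spaced time series are typical of strong-motion processing: detrending, filtering, and integration. The Python sketch below shows that style of pipeline in a hedged way; it is not a port of TSPP, and the corner frequencies, filter order, and sampling interval are arbitrary example values.

        # Hedged sketch of a typical accelerogram-processing chain (detrend, band-pass,
        # integrate to velocity); illustrative only, not a reimplementation of TSPP.
        import numpy as np
        from scipy.signal import detrend, butter, filtfilt

        def process_accelerogram(acc, dt=0.005, f_lo=0.1, f_hi=25.0, order=4):
            """acc: evenly spaced acceleration samples; dt: sample interval in seconds."""
            acc = detrend(acc, type="linear")                 # remove a baseline trend
            nyq = 0.5 / dt
            b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="bandpass")
            acc_f = filtfilt(b, a, acc)                       # zero-phase band-pass filter
            vel = np.cumsum(acc_f) * dt                       # crude integration to velocity
            return acc_f, vel

        if __name__ == "__main__":
            dt = 0.005
            t = np.arange(0.0, 40.0, dt)
            # Synthetic "record": a 1 Hz pulse plus a slow drift and broadband noise.
            acc = (np.exp(-0.5 * (t - 10.0) ** 2) * np.sin(2 * np.pi * 1.0 * t)
                   + 0.01 * t + 0.05 * np.random.randn(t.size))
            acc_f, vel = process_accelerogram(acc, dt=dt)
            print(acc_f.std(), vel[-1])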

  3. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  4. High performance fuel technology development

    Energy Technology Data Exchange (ETDEWEB)

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    ○ Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy - Development of fabrication technology of annular fuel pellet ○ Development of High Performance Cladding Materials - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials. - Development of the manufacturing technology for the dual-cooled fuel cladding tubes. ○ Irradiated Fuel Performance Evaluation Technology Development - Development of performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology ○ Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis on the property of a reactor core with dual-cooled fuel - Feasibility evaluation on the dual-cooled fuel core ○ Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concept for dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  5. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Consideration of all these factors is a precondition for a truly integrated practice and, as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  6. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  7. High performance conductometry

    International Nuclear Information System (INIS)

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  8. Developmental Toxicity of Dextromethorphan in Zebrafish Embryos/Larvae

    Science.gov (United States)

    Xu, Zheng; Williams, Frederick E.; Liu, Ming-Cheh

    2012-01-01

    Dextromethorphan is widely used in over-the-counter cough and cold medications. Its efficacy and safety for infants and young children remains to be clarified. The present study was designed to use the zebrafish as a model to investigate the potential toxicity of dextromethorphan during the embryonic and larval development. Three sets of zebrafish embryos/larvae were exposed to dextromethorphan at 24 hours post fertilization (hpf), 48 hpf, and 72 hpf, respectively, during the embryonic/larval development. Compared with the 48 and 72 hpf exposure sets, the embryos/larvae in the 24 hpf exposure set showed much higher mortality rates which increased in a dose-dependent manner. Bradycardia and reduced blood flow were observed for the embryos/larvae treated with increasing concentrations of dextromethorphan. Morphological effects of dextromethorphan exposure, including yolk sac and cardiac edema, craniofacial malformation, lordosis, non-inflated swim bladder, and missing gill, were also more frequent and severe among zebrafish embryos/larvae exposed to dextromethorphan at 24 hpf. Whether the more frequent and severe developmental toxicity of dextromethorphan observed among the embryos/larvae in the 24 hpf exposure set, as compared with the 48 and 72 hpf exposure sets, is due to the developmental expression of the Phase I and Phase II enzymes involved in the metabolism of dextromethorphan remains to be clarified. A reverse transcription-polymerase chain reaction (RT-PCR) analysis, nevertheless, revealed developmental stage-dependent expression of mRNAs encoding SULT3 ST1 and SULT3 ST3, two enzymes previously shown to be capable of sulfating dextrorphan, an active metabolite of dextromethorphan. PMID:20737414

  9. INL High Performance Building Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  10. High Performance Networks for High Impact Science

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  11. High performance fuel technology development : Development of high performance cladding materials

    International Nuclear Information System (INIS)

    Park, Jeongyong; Jeong, Y. H.; Park, S. Y.

    2012-04-01

    The superior in-pile performance of the HANA claddings has been verified by the successful irradiation test in the Halden research reactor up to a high burn-up of 67 GWD/MTU. The in-pile corrosion and creep resistances of HANA claddings were improved by 40% and 50%, respectively, over Zircaloy-4. HANA claddings have also been irradiated in a commercial reactor for up to 2 reactor cycles, showing corrosion resistance 40% better than that of ZIRLO in the same fuel assembly. Long-term out-of-pile performance tests for the candidates of the next generation cladding materials have produced highly reliable test results. The final candidate alloys were selected and they showed corrosion resistance 50% better than the foreign advanced claddings, which is beyond the original target. The LOCA-related properties were also improved by 20% over the foreign advanced claddings. In order to establish the optimal manufacturing process for the inner and outer claddings of the dual-cooled fuel, 18 different kinds of specimens were fabricated with various cold working and annealing conditions. Based on the performance tests and various out-of-pile test results obtained from the specimens, the optimal manufacturing process was established for the inner and outer cladding tubes of the dual-cooled fuel.

  12. Significance and outcome of nuclear anaplasia and mitotic index in prostatic adenocarcinomas.

    Science.gov (United States)

    Kır, Gozde; Sarbay, Billur Cosan; Gumus, Eyup

    2016-10-01

    The Gleason grading system measures architectural differentiation and disregards nuclear atypia and the cell proliferation index. Several studies have reported that nuclear grade and mitotic index (MI) are prognostically useful. This study included 232 radical prostatectomy specimens. Nuclear anaplasia (NA) was determined on the basis of nucleomegaly (at least 20 µm), vesicular chromatin, eosinophilic macronucleoli, nuclear lobulation, and an irregular, thickened nuclear membrane. The proportion of the area of NA was recorded in each tumor in 10% increments. The MI was defined as the number of mitotic figures in 10 consecutive high-power fields (HPF). In univariate analysis, significant differences included associations between biochemical prostate-specific antigen recurrence (BCR) and Gleason score, extraprostatic extension, positive surgical margin, the presence of high pathologic stage, NA ≥10% of tumor area, MI ≥3/10 HPF, and preoperative prostate-specific antigen. In a stepwise Cox regression model, a positive surgical margin, NA ≥10% of tumor area, and an MI of ≥3/10 HPF were independent predictors of BCR after radical prostatectomy. NA ≥10% of tumor area appeared to have a stronger association with outcome than MI ≥3/10 HPF, as it remained associated with BCR when Gleason score was in the model. The results of our study showed that, in addition to the conventional Gleason grading system, NA and MI are useful prognostic parameters when evaluating long-term prognosis in prostatic adenocarcinoma. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. The UK core performance code package

    International Nuclear Information System (INIS)

    Hutt, P.K.; Gaines, N.; McEllin, M.; White, R.J.; Halsall, M.J.

    1991-01-01

    Over the last few years work has been co-ordinated by Nuclear Electric, originally part of the Central Electricity Generating Board, with contributions from the United Kingdom Atomic Energy Authority and British Nuclear Fuels Limited, to produce a generic, easy-to-use and integrated package of core performance codes able to perform a comprehensive range of calculations for fuel cycle design, safety analysis and on-line operational support for Light Water Reactor and Advanced Gas Cooled Reactor plant. The package consists of modern rationalized generic codes for lattice physics (WIMS), whole reactor calculations (PANTHER), thermal hydraulics (VIPRE) and fuel performance (ENIGMA). These codes, written in FORTRAN77, are highly portable and new developments have followed modern quality assurance standards. These codes can all be run ''stand-alone'' but they are also being integrated within a new UNIX-based interactive system called the Reactor Physics Workbench (RPW). The RPW provides an interactive user interface and a sophisticated data management system. It offers quality assurance features to the user and has facilities for defining complex calculational sequences. The Paper reviews the current capabilities of these components, their integration within the package and outlines future developments underway. Finally, the Paper describes the development of an on-line version of this package which is now being commissioned on UK AGR stations. (author)

  14. Carbon nanomaterials for high-performance supercapacitors

    OpenAIRE

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially, carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area, excellent electrical and mechanical properties. This article summarizes the recent progresses on the development of high-performance supercapacitors bas...

  15. Clojure high performance programming

    CERN Document Server

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  16. Delivering high performance BWR fuel reliably

    International Nuclear Information System (INIS)

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  17. High performance bio-integrated devices

    Science.gov (United States)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications, particularly, have attracted much attention with the rise of smartphones because the coupling of such devices and smartphones enables the continuous health-monitoring in patients' daily life. Especially, it is expected that the high performance biomedical electronics integrated with the human body can open new opportunities in the ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in the personalized health monitoring and/or human-machine interfaces.

  18. AMBIENT CONDITIONS EFFECTS ON PERFORMANCE OF GAS TURBINE COGENERATION POWER PLANTS

    OpenAIRE

    Necmi Ozdemir*

    2016-01-01

    In this study, the performances of a simple and an air-preheated cogeneration cycle under varying ambient conditions are compared with each other. A computer program written by the author in FORTRAN is used to calculate the enthalpy and entropy values of the streams. Exergy analysis is performed and compared for the simple and the air-preheated cogeneration cycles under different ambient conditions. The two cogeneration cycles are evaluated in terms of heat powers and electric, electrical to h...

  19. High Performance Macromolecular Material

    National Research Council Canada - National Science Library

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  20. Delivering high performance BWR fuel reliably

    Energy Technology Data Exchange (ETDEWEB)

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  1. Contribution of G protein-coupled estrogen receptor 1 (GPER) to 17β-estradiol-induced developmental toxicity in zebrafish.

    Science.gov (United States)

    Diamante, Graciel; Menjivar-Cervantes, Norma; Leung, Man Sin; Volz, David C; Schlenk, Daniel

    2017-05-01

    Exposure to 17β-estradiol (E2) influences the regulation of multiple signaling pathways, and E2-mediated disruption of signaling events during early development can lead to malformations such as cardiac defects. In this study, we investigated the potential role of the G-protein estrogen receptor 1 (GPER) in E2-induced developmental toxicity. Zebrafish embryos were exposed to E2 from 2h post fertilization (hpf) to 76 hpf with subsequent transcriptional measurements of heart and neural crest derivatives expressed 2 (hand2), leucine rich repeat containing 10 (lrrc10), and gper at 12, 28 and 76 hpf. Alteration in the expression of lrrc10, hand2 and gper was observed at 12 hpf and 76 hpf, but not at 28 hpf. Expression of these genes was also altered after exposure to G1 (a GPER agonist) at 76 hpf. Expression of lrrc10, hand2 and gper all coincided with the formation of cardiac edema at 76 hpf as well as other developmental abnormalities. While co-exposure of G1 with G36 (a GPER antagonist) rescued G1-induced abnormalities and altered gene expression, co-exposure of E2 with G36, or ICI 182,780 (an estrogen receptor antagonist) did not rescue E2-induced cardiac deformities or gene expression. In addition, no effects on the concentrations of downstream ER and GPER signaling molecules (cAMP or calcium) were observed in embryo homogenates after E2 treatment. These data suggest that the impacts of E2 on embryonic development at this stage are complex and may involve multiple receptor and/or signaling pathways. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Developmental mechanisms of arsenite toxicity in zebrafish (Danio rerio) embryos

    International Nuclear Information System (INIS)

    Li Dan; Lu Cailing; Wang Ju; Hu Wei; Cao Zongfu; Sun Daguang; Xia Hongfei; Ma Xu

    2009-01-01

    Arsenic usually accumulates in soil, water and airborne particles, from which it is taken up by various organisms. Exposure to arsenic through food and drinking water is a major public health problem affecting some countries. At present there are limited laboratory data on the effects of arsenic exposure on early embryonic development and the mechanisms behind its toxicity. In this study, we used zebrafish as a model system to investigate the effects of arsenite on early development. Zebrafish embryos were exposed to a range of sodium arsenite concentrations (0-10.0 mM) between 4 and 120 h post-fertilization (hpf). Survival and early development of the embryos were not obviously influenced by arsenite concentrations below 0.5 mM. However, embryos exposed to higher concentrations (0.5-10.0 mM) displayed reduced survival and abnormal development including delayed hatching, retarded growth and changed morphology. Alterations in neural development included weak tactile responses to light (2.0-5.0 mM, 30 hpf), malformation of the spinal cord and disordered motor axon projections (2.0 mM, 48 hpf). Abnormal cardiac function was observed as bradycardia (0.5-2.0 mM, 60 hpf) and altered ventricular shape (2.0 mM, 48 hpf). Furthermore, altered cell proliferation (2.0 mM, 24 hpf) and apoptosis status (2.0 mM, 24 and 48 hpf), as well as abnormal genomic DNA methylation patterning (2.0 mM, 24 and 48 hpf) were detected in the arsenite-treated embryos. All of these indicate a possible relationship between arsenic exposure and developmental failure in early embryogenesis. Our studies suggest that the negative effects of arsenic on vertebrate embryogenesis are substantial

  3. VASCOMP 2. The V/STOL aircraft sizing and performance computer program. Volume 6: User's manual, revision 3

    Science.gov (United States)

    Schoen, A. H.; Rosenstein, H.; Stanzione, K.; Wisniewski, J. S.

    1980-01-01

    This report describes the use of the V/STOL Aircraft Sizing and Performance Computer Program (VASCOMP II). The program is useful in performing aircraft parametric studies in a quick and cost efficient manner. Problem formulation and data development were performed by the Boeing Vertol Company and reflects the present preliminary design technology. The computer program, written in FORTRAN IV, has a broad range of input parameters, to enable investigation of a wide variety of aircraft. User oriented features of the program include minimized input requirements, diagnostic capabilities, and various options for program flexibility.

  4. Carpet Aids Learning in High Performance Schools

    Science.gov (United States)

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  5. High-performance-vehicle technology. [fighter aircraft propulsion

    Science.gov (United States)

    Povinelli, L. A.

    1979-01-01

    Propulsion needs of high performance military aircraft are discussed. Inlet performance, nozzle performance and cooling, and afterburner performance are covered. It is concluded that nonaxisymmetric nozzles provide cleaner external lines and enhanced maneuverability, but the internal flows are more complex. Swirl afterburners show promise for enhanced performance in the high altitude, low Mach number region.

  6. A FORTRAN-compatible program package for the control of CAMAC-systems by a PDP-11 (CA11-A/DEC, Type 1533A/BORER)

    International Nuclear Information System (INIS)

    Lengauer, C.

    1975-01-01

    The described software serves to control CAMAC systems from a PDP-11 computer equipped either with one DEC CA11-A Branch Driver or with up to ten BORER Type 1533A Single-Crate Controllers, under the operating system DOS V08. The software consists of three parts: 1) a subroutine library for programming in FORTRAN, 2) a macro library for programming in Assembler (for time-critical problems), 3) a loadable CAMAC driver for controlling the system by input of single CAMAC commands at the terminal. Programs that use the first two parts can be written independently of the CAMAC controller used at runtime. (orig.) [de

  7. Academic performance in high school as factor associated to academic performance in college

    Directory of Open Access Journals (Sweden)

    Mileidy Salcedo Barragán

    2008-12-01

    Full Text Available This study intends to find the relationship between academic performance in High School and College, focusing on Natural Sciences and Mathematics. It is a descriptive correlational study, and the variables were academic performance in High School, performance indicators and educational history. The correlations between variables were established with Spearman’s correlation coefficient. Results suggest that there is a positive relationship between academic performance in High School and Educational History, and a very weak relationship between performance in Science and Mathematics in High School and performance in College.

  8. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  9. A FORTRAN program for the use of digital terrain elevation models of the Instituto Nacional de Estadistica, Geografia e Informatica of Mexico (INEGI); Programa en FORTRAN para el manejo de modelos digitales de elevacion del terreno del Instituto Nacional de Estadistica, Geografia e Informatica de Mexico (INEGI)

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Estrada, Gerardo [Gerencia de Proyectos Geotermoelectricos de la Comision Federal de Electricidad, Morelia (Mexico)

    1996-09-01

    A FORTRAN program is presented for the use of raster-format digital terrain elevation models of the Instituto Nacional de Estadistica, Geografia e Informatica of Mexico (INEGI). The program allows the selection of a data window that can be delimited, optionally, by giving the extreme coordinates either in degrees, minutes and seconds or in UTM (Universal Transverse Mercator) coordinates. The selected digital terrain data produce an output file in SURFER binary grid format with decimal-degree coordinates. Optionally, an x, y, z output file in ASCII code allows gridding with commercial software to produce a map in planar rectangular coordinates. During window selection a simple filtering process is applied to reduce numerical errors in the original file, and, if desired, an undersampling can be performed to prepare less detailed maps of large coverage. The program has been extensively tested in the Gerencia de Proyectos Geotermoelectricos of the Comision Federal de Electricidad (CFE) in Mexico, where it is used to prepare base maps, automatically traced topographic profiles and boundary conditions for thermal modelling. Other direct uses are the calculation of terrain and isostatic corrections for gravity studies, estimation of topographic heights from known horizontal coordinates, modelling of climatic effects, automatic calculation of material volumes and many more.
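
    As an illustration of the core operation the abstract describes (cutting a rectangular window out of a regular elevation grid and writing it as x, y, z triples), a minimal Fortran sketch might look as follows. The synthetic terrain, window limits, file name and variable names are purely illustrative; the sketch does not reproduce the INEGI raster format or the SURFER grid format handled by the actual program.

        program dem_window
          ! Cut a rectangular window out of a regular elevation grid and
          ! write it as x, y, z records (illustrative data, not INEGI format).
          implicit none
          integer, parameter :: nx = 200, ny = 150
          real, parameter :: x0 = 0.0, y0 = 0.0, dx = 0.01, dy = 0.01
          real :: z(nx, ny), xw1, xw2, yw1, yw2, x, y
          integer :: i, j
          ! synthetic terrain standing in for the digital elevation model
          do j = 1, ny
             do i = 1, nx
                z(i, j) = 100.0 + 10.0*sin(0.05*i)*cos(0.04*j)
             end do
          end do
          ! window limits (would come from user-supplied coordinates)
          xw1 = 0.50; xw2 = 0.80
          yw1 = 0.20; yw2 = 0.60
          open (10, file='window.xyz', status='replace')
          do j = 1, ny
             y = y0 + (j-1)*dy
             if (y < yw1 .or. y > yw2) cycle
             do i = 1, nx
                x = x0 + (i-1)*dx
                if (x < xw1 .or. x > xw2) cycle
                write (10, '(3f12.3)') x, y, z(i, j)   ! one x, y, z triple per point
             end do
          end do
          close (10)
        end program dem_window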

  10. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    Energy Technology Data Exchange (ETDEWEB)

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  11. Performance investigation of a cogeneration plant with the efficient and compact heat recovery system

    KAUST Repository

    Myat, Aung

    2011-10-03

    This paper presents a performance investigation of a cogeneration plant equipped with an efficient waste heat recovery system. The proposed cogeneration system produces four types of useful energy, namely (i) electricity, (ii) steam, (iii) cooling and (iv) dehumidification. The proposed plant comprises a Capstone C30 micro-turbine, which generates 24 kW of electricity, a compact and efficient waste heat recovery system, and a set of waste-heat-activated devices, namely (i) a steam generator, (ii) an absorption chiller, (iii) an adsorption chiller and (iv) a multi-bed desiccant dehumidifier. A numerical analysis of the waste heat recovery system and the thermally activated devices, implemented in FORTRAN PowerStation linked to the IMSL library, is performed to investigate the performance of the overall system. A set of micro-turbine experiments, at both part load and full load, is conducted to examine the electricity generation and the exhaust gas temperature. It is observed that the energy utilization factor (EUF) can reach as high as 70%, while the fuel energy saving ratio (FESR) is found to be 28%.
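
    For reference, the two figures of merit quoted above are commonly defined as follows (standard cogeneration textbook definitions, given here because the abstract does not state the author's exact formulation):

        \mathrm{EUF} = \frac{\dot{W}_{\mathrm{el}} + \sum \dot{Q}_{\mathrm{useful}}}{\dot{m}_{\mathrm{fuel}}\,\mathrm{LHV}}
        \qquad
        \mathrm{FESR} = \frac{F_{\mathrm{separate}} - F_{\mathrm{cogeneration}}}{F_{\mathrm{separate}}}

    That is, EUF measures the fraction of the fuel's lower heating value recovered as electricity plus useful thermal output, and FESR the relative fuel saving compared with producing the same outputs in separate plants.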

  12. High performance polymeric foams

    International Nuclear Information System (INIS)

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods were used to prepare the foam samples: high-temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy

  13. Responsive design high performance

    CERN Document Server

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  14. Extending OpenMP for NUMA Machines

    Directory of Open Access Journals (Sweden)

    John Bircsak

    2000-01-01

    Full Text Available This paper describes extensions to OpenMP that implement data placement features needed for NUMA architectures. OpenMP is a collection of compiler directives and library routines used to write portable parallel programs for shared-memory architectures. Writing efficient parallel programs for NUMA architectures, which have characteristics of both shared-memory and distributed-memory architectures, requires that a programmer control the placement of data in memory and the placement of computations that operate on that data. Optimal performance is obtained when computations occur on processors that have fast access to the data needed by those computations. OpenMP -- designed for shared-memory architectures -- does not by itself address these issues. The extensions to OpenMP Fortran presented here have been mainly taken from High Performance Fortran. The paper describes some of the techniques that the Compaq Fortran compiler uses to generate efficient code based on these extensions. It also describes some additional compiler optimizations, and concludes with some preliminary results.
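
    For readers unfamiliar with the HPF data-placement directives from which these OpenMP extensions were derived, the fragment below is a minimal, generic sketch (array, processor and program names are illustrative and are not taken from the paper): it distributes a BLOCK-partitioned array over a processor arrangement, aligns a second array with it, and updates both with data-parallel constructs.

        program distribute_demo
          ! Minimal HPF sketch: lay a large array out across processor memories
          ! in BLOCK fashion and align a second array with it, so that each
          ! processor updates the elements it owns.  Compiled with a plain
          ! Fortran compiler, the !HPF$ lines are simply comments.
          implicit none
          integer, parameter :: n = 1000
          real :: a(n), b(n)
          integer :: i
        !HPF$ PROCESSORS p(4)
        !HPF$ DISTRIBUTE a(BLOCK) ONTO p
        !HPF$ ALIGN b(:) WITH a(:)
          a = 1.0                              ! data-parallel whole-array assignment
          forall (i = 1:n) b(i) = 2.0*a(i)     ! element-wise update, no dependences
          print *, 'sum(b) =', sum(b)
        end program distribute_demo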

  15. Striving for Excellence Sometimes Hinders High Achievers: Performance-Approach Goals Deplete Arithmetical Performance in Students with High Working Memory Capacity

    Science.gov (United States)

    Crouzevialle, Marie; Smeding, Annique; Butera, Fabrizio

    2015-01-01

    We tested whether the goal to attain normative superiority over other students, referred to as performance-approach goals, is particularly distractive for high-Working Memory Capacity (WMC) students—that is, those who are used to being high achievers. Indeed, WMC is positively related to high-order cognitive performance and academic success, a record of success that confers benefits on high-WMC as compared to low-WMC students. We tested whether such benefits may turn out to be a burden under performance-approach goal pursuit. Indeed, for high achievers, aiming to rise above others may represent an opportunity to reaffirm their positive status—a stake susceptible to trigger disruptive outcome concerns that interfere with task processing. Results revealed that with performance-approach goals—as compared to goals with no emphasis on social comparison—the higher the students’ WMC, the lower their performance at a complex arithmetic task (Experiment 1). Crucially, this pattern appeared to be driven by uncertainty regarding the chances to outclass others (Experiment 2). Moreover, an accessibility measure suggested the mediational role played by status-related concerns in the observed disruption of performance. We discuss why high-stake situations can paradoxically lead high-achievers to sub-optimally perform when high-order cognitive performance is at play. PMID:26407097

  16. High-performance ceramics. Fabrication, structure, properties

    International Nuclear Information System (INIS)

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program ''Ceramic High-performance Materials'' pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders; comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing; and leads to issues of materials testing and of design appropriate to the material. The program has produced contributions to the understanding of fundamental materials-science interrelationships, which are summarized in the present volume, broken down into eight special aspects. (orig./RHM)

  17. [IgG4 immunohistochemistry in Riedel thyroiditis].

    Science.gov (United States)

    Wang, S; Luo, Y F; Cao, J L; Zhang, H; Shi, X H; Liang, Z Y; Feng, R E

    2017-03-08

    Objective: To observe the histopathological changes and immunohistochemical expression of IgG4 in Riedel thyroiditis (RT) and to study the relationship between RT and IgG4-related disease (IgG4-RD). Methods: A total of 5 RT patients were collected from the Department of Pathology, Peking Union Medical College Hospital, from April 2012 to August 2014. The clinical and immunohistochemical features were analyzed in the 5 patients. Histopathologic analysis was performed on hematoxylin and eosin-stained sections. Results: There were one male and four female patients, aged 52 to 78 years (median 59 years). All five cases were characterized by multiple thyroid nodules that enlarged year by year. All patients had symptoms and signs of compression of the surrounding tissue. Two female patients had hypothyroidism. The serum concentration of IgG was elevated in 2 cases; in the remaining patients serum IgG was not tested before operation. On ultrasound, all lesions presented as hypoechoic or moderately hypoechoic, with strong echoes occasionally appearing within the hypoechoic nodules. Microscopically, hyperplastic fibrous tissue was infiltrated by varying numbers of lymphocytes and plasma cells. Obliterative phlebitis was found in 4 cases and eosinophils were found in 3 cases. IgG4 counts and IgG4/IgG ratios in the 5 cases were 20/HPF, 16%; 60/HPF, 82%; 22/HPF, 28%; 400/HPF, 266% and 33/HPF, 71%, respectively. Conclusions: RT shares pathological manifestations with IgG4-RD, and immunohistochemical staining shows that the number of IgG4-positive plasma cells and the IgG4/IgG ratio in RT are increased to varying degrees. Some cases meet the diagnostic criteria of IgG4-RD, suggesting that a subset of RT cases belongs to the spectrum of IgG4-RD.

  18. [Influence of dendritic cell infiltration on prognosis and biologic characteristics of progressing gastric cancer].

    Science.gov (United States)

    Huang, Hai-li; Wu, Ben-yan; You, Wei-di; Shen, Ming-shi; Wang, Wen-ju

    2003-09-01

    To study the relation between dendritic cell (DC) infiltration and clinicopathologic parameters, biologic characteristics and prognosis of progressing gastric cancer. The development of apoptotic cell death (apoptotic index, AI) in 61 progressing gastric carcinoma tissues was analyzed by terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate biotin nick end labeling (TUNEL) method. The PCNA labeling index (PCNA-LI), density of dendritic cells in the tumor were detected by immunohistochemical method by the LSAB kit using antibody against S-100 protein and PC-10. DC infiltration was negatively correlated with lymph node metastasis, clinical stage and PCNA-LI, but positively with AI. The DCs in gastric cancer groups with and without lymph node metastasis were (5.63 +/- 4.37)/HPF and (8.51 +/- 5.57)/HPF with difference significant (P stage lesions were (11.23 +/- 6.05)/HPF, (6.28 +/- 4.37)/HPF and (5.53 +/- 5.19)/HPF also with differences significant (P gastric carcinoma.

  19. Induction of skeletal abnormalities and autophagy in Paracentrotus lividus sea urchin embryos exposed to gadolinium.

    Science.gov (United States)

    Martino, Chiara; Chiarelli, Roberto; Bosco, Liana; Roccheri, Maria Carmela

    2017-09-01

    Gadolinium (Gd) concentrations are constantly increasing in the aquatic environment, making it an emerging environmental pollutant. We investigated the effects of Gd on Paracentrotus lividus sea urchin embryos, focusing on skeletogenesis and autophagy. We observed a delay of biomineral deposition at 24 hours post fertilization (hpf) and a strong impairment of skeleton growth at 48 hpf, frequently displaying an asymmetrical pattern. Skeleton growth partially resumed in recovery experiments. The mesodermal cells committed to biomineralization had migrated correctly at 24 hpf, but not at 48 hpf. Western blot analysis showed an increase of the LC3-II autophagic marker at 24 and 48 hpf. Confocal microscopy confirmed the increased number of autophagolysosomes and autophagosomes. These results show the hazard of Gd in the marine environment, indicating that Gd is able to affect different aspects of sea urchin development: morphogenesis, biomineralization, and the stress response through autophagy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high-speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software, which has been developed since 2013 to meet these growing needs by providing high-performance data transfer and encryption in a scalable, balanced way that is easy to deploy and use, while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services across multiple cores and links. The PoCs are based on SSD flash storage managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved almost 200 Gbps memory-to-memory between clusters over two 100 Gbps links, and 70 Gbps parallel-file-to-parallel-file with encryption over a 5000-mile 100 Gbps link.

  1. Strategy Guideline. Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  2. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O EcosystemParallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem.The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  3. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    International Nuclear Information System (INIS)

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  4. DNA methyltransferases and stress-related genes expression in zebrafish larvae after exposure to heat and copper during reprogramming of DNA methylation.

    Science.gov (United States)

    Dorts, Jennifer; Falisse, Elodie; Schoofs, Emilie; Flamion, Enora; Kestemont, Patrick; Silvestre, Frédéric

    2016-10-12

    DNA methylation, a well-studied epigenetic mark, is important for gene regulation in adulthood and for development. Using genetic and epigenetic approaches, the present study aimed at evaluating the effects of heat stress and copper exposure during zebrafish early embryogenesis when patterns of DNA methylation are being established, a process called reprogramming. Embryos were exposed to 325 μg Cu/L from fertilization (<1 h post fertilization - hpf) to 4 hpf at either 26.5 °C or 34 °C, followed by incubation in clean water at 26.5 °C till 96 hpf. Significant increased mortality rates and delayed hatching were observed following exposure to combined high temperature and Cu. Secondly, both stressors, alone or in combination, significantly upregulated the expression of de novo DNA methyltransferase genes (dnmt3) along with no differences in global cytosine methylation level. Finally, Cu exposure significantly increased the expression of metallothionein (mt2) and heat shock protein (hsp70), the latter being also increased following exposure to high temperature. These results highlighted the sensitivity of early embryogenesis and more precisely of the reprogramming period to environmental challenges, in a realistic situation of combined stressors.

  5. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication in numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  6. Particle beam dynamics simulations using the POOMA framework

    International Nuclear Information System (INIS)

    Humphrey, W.; Ryne, R.; Cleland, T.; Cummings, J.; Habib, S.; Mark, G.; Ji Qiang

    1998-01-01

    A program for simulation of the dynamics of high intensity charged particle beams in linear particle accelerators has been developed in C++ using the POOMA Framework, for use on serial and parallel architectures. The code models the trajectories of charged particles through a sequence of different accelerator beamline elements such as drift chambers, quadrupole magnets, or RF cavities. An FFT-based particle-in-cell algorithm is used to solve the Poisson equation that models the Coulomb interactions of the particles. The code employs an object-oriented design with software abstractions for the particle beam, accelerator beamline, and beamline elements, using C++ templates to efficiently support both 2D and 3D capabilities in the same code base. The POOMA Framework, which encapsulates much of the effort required for parallel execution, provides particle and field classes, particle-field interaction capabilities, and parallel FFT algorithms. The performance of this application running serially and in parallel is compared to an existing HPF implementation, with the POOMA version seen to run four times faster than the HPF code
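
    As a rough, generic illustration of the charge-deposition step of a particle-in-cell scheme of the kind described above, the following self-contained Fortran sketch deposits particles onto a one-dimensional periodic mesh with linear (cloud-in-cell) weighting, producing the charge density that an FFT-based Poisson solve would then use. It is a textbook fragment, not code from the POOMA, HPF or production implementations, and all names and sizes are illustrative.

        program cic_deposit
          ! Generic 1D cloud-in-cell charge deposition: each particle's charge
          ! is shared between the two nearest grid points with linear weights.
          implicit none
          integer, parameter :: np = 10000, ng = 64
          real, parameter :: length = 1.0, q = 1.0
          real :: x(np), rho(0:ng-1), dx, s
          integer :: ip, j
          dx = length/ng
          call random_number(x)                ! particle positions in [0,1)
          rho = 0.0
          do ip = 1, np
             j = int(x(ip)/dx)                 ! index of the grid point to the left
             s = x(ip)/dx - j                  ! fractional distance from that point
             rho(j)            = rho(j)            + q*(1.0 - s)/dx
             rho(mod(j+1, ng)) = rho(mod(j+1, ng)) + q*s/dx   ! periodic wrap-around
          end do
          print *, 'total deposited charge =', sum(rho)*dx    ! should equal np*q
        end program cic_deposit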

  7. PNP2 calculus programme for interpretation of the experimental data by pulsed source neutrons methods. (Pt. 1). [Fortran IV for ICT 1900

    Energy Technology Data Exchange (ETDEWEB)

    Fratiloiu, C; Cristea, Gh

    1975-01-01

    PNP2 is a calculation program for the interpretation of experimental data obtained with the pulsed neutron source method on multiplying media, in a critical or subcritical state, populated with thermal neutrons. The program is written in the FORTRAN IV language for the ICT 1900 computer. It fits the time variation of the thermal neutron population in the multiplying medium resulting from its excitation at the instants t = kT with bursts of neutrons, as it appears in the simplified theory of the pulsed neutron source method. This procedure determines the quantities N_α, α, N_r and B, as well as the empirical variances affecting these quantities. From these quantities the reactivity is calculated in relative units.
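
    The simplified model referred to in the abstract is commonly written in the following form (a standard textbook expression for the die-away of the thermal neutron population after a pulse, offered here as a plausible reconstruction since the abstract does not state the formula):

        N(t) \approx N_\alpha \, e^{-\alpha t} + N_r

    where α is the fundamental decay constant, N_α the amplitude of the prompt die-away and N_r the slowly varying background; fitting these quantities, together with B, allows the reactivity to be expressed in relative units.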

  8. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PC) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capability to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive

  9. Ground Glass Pozzolan in Conventional, High, and Ultra-High Performance Concrete

    OpenAIRE

    Tagnit-Hamou Arezki; Zidol Ablam; Soliman Nancy; Deschamps Joris; Omran Ahmed

    2018-01-01

    Ground-glass pozzolan (G), obtained by grinding mixed-waste glass to the same fineness as cement, can act as a supplementary cementitious material (SCM), given that it is an amorphous and pozzolanic material. G has shown promising performance in different concrete types such as conventional concrete (CC), high-performance concrete (HPC), and ultra-high-performance concrete (UHPC). The current paper reports on the characteristics and performance of G in these concrete types. The use of G pro...

  10. Fabrication of Hadfield-Cored Multi-layer Steel Sheet by Roll-Bonding with 1.8-GPa-Strength-Grade Hot-Press-Forming Steel

    Science.gov (United States)

    Chin, Kwang-Geun; Kang, Chung-Yun; Park, Jaeyeong; Lee, Sunghak

    2018-05-01

    An austenitic Hadfield steel was roll-bonded with a 1.8-GPa-strength-grade martensitic hot-press-forming (HPF) steel to fabricate a multi-layer steel (MLS) sheet. Near the Hadfield/HPF interface, carburized and decarburized layers were formed by carbon diffusion from the Hadfield (1.2%C) to the HPF (0.35%C) layer; these could be regarded as very thin multi-layers about 35 μm in thickness. The tensile test and fractographic data indicated that the MLS sheet fractured abruptly within the elastic range by intergranular fracture occurring in the carburized layer. This was because C was mainly segregated at prior austenite grain boundaries in the carburized layer, which weakened the grain boundaries and induced the intergranular fracture. In order to solve the intergranular fracture problem, the MLS sheet was tempered at 200 °C. The stress-strain curve of the tempered MLS sheet lay between those of the HPF and Hadfield sheets, and a rule of mixtures was roughly satisfied. Tensile properties of the MLS sheet were dramatically improved after tempering, and the intergranular fracture was eliminated completely. In particular, the yield strength of up to 1073 MPa, together with the high strain hardening and excellent ductility of 32.4%, is outstanding, because a yield strength over 1 GPa is hardly achieved in conventional austenitic steels.

  11. Indoor Air Quality in High Performance Schools

    Science.gov (United States)

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  12. Advanced high performance solid wall blanket concepts

    International Nuclear Information System (INIS)

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  13. High-performance OPCPA laser system

    International Nuclear Information System (INIS)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  14. High-performance OPCPA laser system

    Energy Technology Data Exchange (ETDEWEB)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  15. Thermal Hydraulic Fortran Program for Steady State Calculations of Plate Type Fuel Research Reactors

    International Nuclear Information System (INIS)

    Khedr, H.

    2008-01-01

    The safety assessment of research and power reactors is a continuous process over their lifetime and requires verified and validated codes. Power reactor codes all over the world are well established and qualified against real measured data and qualified experimental facilities. These codes are usually sophisticated, require special skills and consume much running time. On the other hand, most research reactor codes still require more data for validation and qualification. It is therefore of benefit to a regulatory body, and to companies working in the area of research reactor assessment and design, to have their own program that gives them a quick judgment. The present paper introduces a simple one-dimensional Fortran program called THDSN for steady-state, best-estimate thermal-hydraulic (TH) calculations of plate-type fuel research reactors. Besides calculating the fuel and coolant temperature distributions and the pressure gradient in an average and a hot channel, the program calculates the safety limits and margins against the critical phenomena encountered in research reactors, such as the burnout heat flux and the onset of flow instability. Well-known TH correlations are used for calculating the safety parameters. The THDSN program is verified by comparing its results for 2 and 10 MW benchmark reactors with those published in IAEA publications, and good agreement is found. The program results are also compared with those published for other programs such as PARET and TERMIC. An extension of the program to cover transient TH calculations is underway
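
    To make the kind of steady-state, one-dimensional channel calculation described above concrete, the following minimal Fortran sketch marches along a heated plate-type channel and accumulates the coolant temperature rise from an assumed axial heat-flux profile via a simple energy balance. It is not THDSN code; every numerical value and name in it is illustrative.

        program channel_heatup
          ! Steady-state energy balance along a heated plate-fuel coolant channel:
          !   T(z+dz) = T(z) + q''(z) * P_heated * dz / (mdot * cp)
          ! All numerical values are illustrative only.
          implicit none
          integer, parameter :: nz = 100
          real, parameter :: height = 0.6      ! heated length            [m]
          real, parameter :: p_heat = 0.14     ! heated perimeter         [m]
          real, parameter :: mdot   = 0.25     ! channel mass flow rate   [kg/s]
          real, parameter :: cp     = 4180.0   ! water specific heat      [J/(kg K)]
          real, parameter :: q0     = 5.0e5    ! peak surface heat flux   [W/m2]
          real, parameter :: pi     = 3.1415927
          real :: t_in, t, z, dz, qflux
          integer :: k
          t_in = 40.0                          ! inlet temperature        [degC]
          t = t_in
          dz = height/nz
          do k = 1, nz
             z = (k - 0.5)*dz                  ! cell-centre elevation
             qflux = q0*sin(pi*z/height)       ! chopped-sine axial flux profile
             t = t + qflux*p_heat*dz/(mdot*cp) ! coolant heat-up over this cell
          end do
          print *, 'coolant temperature rise [K] =', t - t_in
        end program channel_heatup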

  16. Thermal-hydraulic Fortran program for steady-state calculations of plate-type fuel research reactors

    Directory of Open Access Journals (Sweden)

    Khedr Ahmed

    2008-01-01

    Full Text Available The safety assessment of research and power reactors is a continuous process covering their lifespan and requiring verified and validated codes. Power reactor codes all over the world are well established and qualified against real measuring data and qualified experimental facilities. These codes are usually sophisticated, require special skills and consume a lot of running time. On the other hand, most research reactor codes still require much more data for validation and qualification. It is, therefore, of benefit to any regulatory body to develop its own codes for the review and assessment of research reactors. The present paper introduces a simple, one-dimensional Fortran program called THDSN for steady-state thermal-hydraulic calculations of plate-type fuel research reactors. Besides calculating the fuel and coolant temperature distributions and pressure gradients in an average and hot channel, the program calculates the safety limits and margins against the critical phenomena encountered in research reactors, such as the onset of nucleate boiling, critical heat flux and flow instability. Well known thermal-hydraulic correlations for calculating the safety parameters and several formulas for the heat transfer coefficient have been used. The THDSN program was verified by comparing its results for 2 and 10 MW benchmark reactors with those published in IAEA publications and a good agreement was found. Also, the results of the program are compared with those published for other programs, such as the PARET and TERMIC.

  17. KUEBEL. A Fortran program for computation of cooling-agent-distribution within reactor fuel-elements

    International Nuclear Information System (INIS)

    Inhoven, H.

    1984-12-01

    KUEBEL is a Fortran program for the computation of the coolant distribution within reactor fuel elements or zones thereof. These may consist of up to 40 cooling channels with laminar to turbulent flow (Reynolds numbers up to 2.0E+06) at equal pressure loss. Flow velocity and dynamic, contraction and friction losses are calculated for each channel and for the whole zone. Further computations give the mean coolant heat-up, the mean core outlet temperature, the boiling temperature and the absolute pressure at the flow outlet. All characteristic coolant values, including the safety factor against flow instability of the most heavily loaded cooling gap, are also computed by KUEBEL. Either the absolute pressure at the flow outlet or the safety factor may be chosen as the dependent or independent variable of the program. In the latter case three solution variants are available: adapted coolant flow, core inlet temperature, or thermal power. All calculations can alternatively be done with parameter variation of coolant flow, core inlet temperature and thermal power, which is managed by the program itself. KUEBEL distinguishes light- and heavy-water coolant, the flow direction of the coolant, and fuel elements with parallel rectangular or concentric cylindrical gap geometries. The required material properties are generated by the program. Segments of fuel elements, or gaps that are not structurally connected, can also be treated by interposing so-called 'phantom channels'. (orig.) [de

  18. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  19. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  20. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
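
    As a small, generic illustration of the shared-memory parallel programming style mentioned above, the Fortran fragment below uses an OpenMP parallel loop with a reduction clause to compute a dot product. It is not tied to any particular system discussed in the overview, and the array sizes and values are illustrative.

        program omp_dot
          ! Shared-memory data parallelism with OpenMP: loop iterations are
          ! divided among threads and the partial sums are combined by the
          ! reduction clause.  Without OpenMP enabled, the directives are
          ! treated as comments and the loop runs serially.
          implicit none
          integer, parameter :: n = 100000
          real :: x(n), y(n), s
          integer :: i
          x = 1.5
          y = 2.0
          s = 0.0
        !$omp parallel do reduction(+:s)
          do i = 1, n
             s = s + x(i)*y(i)
          end do
        !$omp end parallel do
          print *, 'dot product =', s
        end program omp_dot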

  1. Team Development for High Performance Management.

    Science.gov (United States)

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  2. Metabolism of clofibric acid in zebrafish embryos (Danio rerio) as determined by liquid chromatography-high resolution-mass spectrometry.

    Science.gov (United States)

    Brox, Stephan; Seiwert, Bettina; Haase, Nora; Küster, Eberhard; Reemtsma, Thorsten

    2016-01-01

    The zebrafish embryo (ZFE) is increasingly used in ecotoxicology research but detailed knowledge of its metabolic potential is still limited. This study focuses on the xenobiotic metabolism of ZFE at different life-stages using the pharmaceutical compound clofibric acid as study compound. Liquid chromatography with quadrupole-time-of-flight mass spectrometry (LC-QToF-MS) is used to detect and to identify the transformation products (TPs). In screening experiments, a total of 18 TPs was detected and structure proposals were elaborated for 17 TPs, formed by phase I and phase II metabolism. Biotransformation of clofibric acid by the ZFE involves conjugation with sulfate or glucuronic acid, and, reported here for the first time, with carnitine, taurine, and aminomethanesulfonic acid. Further yet unknown cyclization products were identified using non-target screening that may represent a new detoxification pathway. Sulfate containing TPs occurred already after 3h of exposure (7hpf), and from 48h of exposure (52hpf) onwards, all TPs were detected. The detection of these TPs indicates the activity of phase I and phase II enzymes already at early life-stages. Additionally, the excretion of one TP into the exposure medium was observed. The results of this study outline the high metabolic potential of the ZFE with respect to the transformation of xenobiotics. Similarities but also differences to other test systems were observed. Biotransformation of test chemicals in toxicity testing with ZFE may therefore need further consideration. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A Versatile Technique to Enable Sub-milli-Kelvin Instrument Stability for Precise Radial Velocity Measurements: Tests with the Habitable-zone Planet Finder

    Science.gov (United States)

    Stefansson, Gudmundur; Hearty, Frederick; Robertson, Paul; Mahadevan, Suvrath; Anderson, Tyler; Levi, Eric; Bender, Chad; Nelson, Matthew; Monson, Andrew; Blank, Basil; Halverson, Samuel; Henderson, Chuck; Ramsey, Lawrence; Roy, Arpita; Schwab, Christian; Terrien, Ryan

    2016-12-01

    Insufficient instrument thermomechanical stability is one of the many roadblocks for achieving 10 cm s-1 Doppler radial velocity precision, the precision needed to detect Earth-twins orbiting solar-type stars. Highly temperature and pressure stabilized spectrographs allow us to better calibrate out instrumental drifts, thereby helping in distinguishing instrumental noise from astrophysical stellar signals. We present the design and performance of the Environmental Control System (ECS) for the Habitable-zone Planet Finder (HPF), a high-resolution (R = 50,000) fiber-fed near-infrared (NIR) spectrograph for the 10 m Hobby-Eberly Telescope at McDonald Observatory. HPF will operate at 180 K, driven by the choice of an H2RG NIR detector array with a 1.7 μm cutoff. This ECS has demonstrated 0.6 mK rms stability over 15 days at both 180 and 300 K, and maintained high-quality vacuum (<10^-7 Torr) over months, during long-term stability tests conducted without a planned passive thermal enclosure surrounding the vacuum chamber. This control scheme is versatile and can be applied as a blueprint to stabilize future NIR and optical high-precision Doppler instruments over a wide temperature range from ~77 K to elevated room temperatures. A similar ECS is being implemented to stabilize NEID, the NASA/NSF NN-EXPLORE spectrograph for the 3.5 m WIYN telescope at Kitt Peak, operating at 300 K. A full SolidWorks 3D-CAD model and a comprehensive parts list of the HPF ECS are included with this manuscript to facilitate the adaptation of this versatile environmental control scheme in the broader astronomical community. Certain commercial equipment, instruments, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.

  4. Fine structure of synapses on dendritic spines

    Directory of Open Access Journals (Sweden)

    Michael Frotscher

    2014-09-01

    Camillo Golgi’s Reazione Nera led to the discovery of dendritic spines, small appendages originating from dendritic shafts. With the advent of electron microscopy (EM) they were identified as sites of synaptic contact. Later it was found that changes in synaptic strength were associated with changes in the shape of dendritic spines. While live-cell imaging was advantageous in monitoring the time course of such changes in spine structure, EM is still the best method for the simultaneous visualization of all cellular components, including actual synaptic contacts, at high resolution. Immunogold labeling for EM reveals the precise localization of molecules in relation to synaptic structures. Previous EM studies of spines and synapses were performed in tissue subjected to aldehyde fixation and dehydration in ethanol, which is associated with protein denaturation and tissue shrinkage. It has remained an issue to what extent fine structural details are preserved when subjecting the tissue to these procedures. In the present review, we report recent studies on the fine structure of spines and synapses using high-pressure freezing (HPF), which avoids protein denaturation by aldehydes and results in an excellent preservation of ultrastructural detail. In these studies, HPF was used to monitor subtle fine-structural changes in spine shape associated with chemically induced long-term potentiation (cLTP) at identified hippocampal mossy fiber synapses. Changes in spine shape result from reorganization of the actin cytoskeleton. We report that cLTP was associated with decreased immunogold labeling for phosphorylated cofilin (p-cofilin), an actin-depolymerizing protein. Phosphorylation of cofilin renders it unable to depolymerize F-actin, which stabilizes the actin cytoskeleton. Decreased levels of p-cofilin, in turn, suggest increased actin turnover, possibly underlying the changes in spine shape associated with cLTP. The findings reviewed here establish HPF as

  5. A fortran program for elastic scattering of deuterons with an optical model containing tensorial potentials

    International Nuclear Information System (INIS)

    Raynal, J.

    1963-01-01

    The optical model has been applied with success to the elastic scattering of particles of spin 0 and 1/2 and to a lesser degree to that of deuterons. For particles of spin 1/2, an LS coupling term is ordinarily used; this term is necessary to obtain a polarization; for deuterons, this coupling has already been introduced, but the possible forms of potentials are more numerous (in this case, scalar products of a second rank spin tensor with a tensor of the same rank in space or momentum can occur). These terms, which may be necessary, are primarily important for the tensor polarization. This problem is of particular interest at Saclay since a beam of polarized deuterons has become available. The FORTRAN program SPM 037 permits the study of the effect of tensorial potentials constructed from the distance of the deuteron from the target and its angular momentum with respect to it. This report should make possible the use and even the modification of the program. It consists of: a description of the problem and of the notation employed, a presentation of the methods adopted, an indication of the necessary data and how they should be introduced, and finally tables of symbols which are in equivalence or common statements: these tables must be considered when making any modification. (author) [fr]

  6. High Performance Walls in Hot-Dry Climates

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, Marc [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)]; Springer, David [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)]; Dakin, Bill [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)]; German, Alea [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)]

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, and achieving insulation installation criteria that meet ENERGY STAR's thermal bypass checklist. To support this activity, in 2013 the Pacific Gas & Electric Company initiated a project with Davis Energy Group (lead for the Building America team, Alliance for Residential Building Innovation) to solicit builder involvement in California to participate in field demonstrations of high performance wall systems. Builders were given incentives and design support in exchange for providing site access for construction observation, cost information, and builder survey feedback. Information from the project was designed to feed into the 2016 Title 24 process, but also to serve as an initial mechanism to engage builders in more high performance construction strategies. This Building America project utilized information collected in the California project.

  7. Hyperfine Induced Transitions as Diagnostics of Isotopic Composition and Densities of Low-Density Plasmas

    Science.gov (United States)

    Brage, Tomas; Judge, Philip G.; Aboussaïd, Abdellatif; Godefroid, Michel R.; Jönsson, Per; Ynnerman, Anders; Froese Fischer, Charlotte; Leckrone, David S.

    1998-06-01

    The J = 0 --> J' = 0 radiative transitions, usually viewed as allowed through two-photon decay, may also be induced by the hyperfine (HPF) interaction in atoms or ions having a nonzero nuclear spin. We compute new and review existing decay rates for the nsnp 3PoJ --> ns2 1SJ'=0 transitions in ions of the Be (n = 2) and Mg (n = 3) isoelectronic sequences. The HPF induced decay rates for the J = 0 --> J' = 0 transitions are many orders of magnitude larger than those for the competing two-photon processes, and when present are typically 1 or 2 orders of magnitude smaller than the decay rates of the magnetic quadrupole (J = 2 --> J' = 0) transitions for these ions. Several HPF induced transitions are potentially of astrophysical interest in ions of C, N, Na, Mg, Al, Si, K, Cr, Fe, and Ni. We highlight those cases that may be of particular diagnostic value for determining isotopic abundance ratios and/or electron densities from UV or EUV emission-line data. We present our atomic data in the form of scaling laws so that, given the isotopic nuclear spin and magnetic moment, a simple expression yields estimates for HPF induced decay rates. We examine some UV and EUV solar and nebular data in light of these new results and suggest possible applications for future study. We could not find evidence for the existence of HPF induced lines in the spectra we examined, but we demonstrate that existing data have come close to providing interesting upper limits. For the planetary nebula SMC N2, we derive an upper limit of 0.1 for 13C/12C from Goddard High-Resolution Spectrograph data obtained by Clegg. It is likely that more stringent limits could be obtained using newer data with higher sensitivities in a variety of objects.

  8. Flexible nanoscale high-performance FinFETs

    KAUST Repository

    Sevilla, Galo T.

    2014-10-28

    With the emergence of the Internet of Things (IoT), flexible high-performance nanoscale electronics are increasingly desired. At the moment, FinFET is the most advanced transistor architecture used in the state-of-the-art microprocessors. Therefore, we show a soft-etch based substrate thinning process to transform silicon-on-insulator (SOI) based nanoscale FinFET into flexible FinFET and then conduct comprehensive electrical characterization under various bending conditions to understand its electrical performance. Our study shows that the back-etch based substrate thinning process is gentler than the traditional abrasive back-grinding process; it can attain ultraflexibility, and the electrical characteristics of the flexible nanoscale FinFET show no performance degradation compared to its rigid bulk counterpart, indicating its readiness for flexible high-performance electronics.

  9. Later life swimming performance and persistent heart damage following subteratogenic PAH mixture exposure in the Atlantic killifish (Fundulus heteroclitus).

    Science.gov (United States)

    Brown, Daniel R; Thompson, Jasmine; Chernick, Melissa; Hinton, David E; Di Giulio, Richard T

    2017-12-01

    High-level, acute exposures to individual polycyclic aromatic hydrocarbons (PAHs) and complex PAH mixtures result in cardiac abnormalities in developing fish embryos. Whereas acute PAH exposures can be developmentally lethal, little is known about the later life consequences of early life, lower level PAH exposures in survivors. A population of PAH-adapted Fundulus heteroclitus from the PAH-contaminated Superfund site, Atlantic Wood Industries, Elizabeth River, Portsmouth, Virginia, United States, is highly resistant to acute PAH cardiac teratogenicity. We sought to determine and characterize long-term swimming performance and cardiac histological alterations of a subteratogenic PAH mixture exposure in both reference killifish and PAH-adapted Atlantic Wood killifish embryos. Killifish from a relatively uncontaminated reference site, King's Creek, Virginia, United States, and Atlantic Wood killifish were treated with dilutions of Elizabeth River sediment extract at 24 h post fertilization (hpf). Two proven subteratogenic dilutions, 0.1 and 1.0% Elizabeth River sediment extract (total PAH 5.04 and 50.4 µg/L, respectively), were used for embryo exposures. Then, at 5-mo post hatching, killifish were subjected to a swim performance test. A separate subset of these individuals was processed for cardiac histological analysis. Unexposed King's Creek killifish significantly outperformed the unexposed Atlantic Wood killifish in swimming performance as measured by Ucrit (i.e., critical swimming speed). However, King's Creek killifish exposed to Elizabeth River sediment extract (both 0.1 and 1.0%) showed significant declines in Ucrit. Histological analysis revealed the presence of blood in the pericardium of King's Creek killifish. Although Atlantic Wood killifish showed baseline performance deficits relative to King's Creek killifish, their pericardial cavities were nearly free of blood and atrial and ventricular alterations. These findings may explain, in part, the

  10. The Role of Performance Management in Creating and Maintaining a High-Performance Organization

    Directory of Open Access Journals (Sweden)

    André A. de Waal

    2015-04-01

    There is still a good deal of confusion in the literature about how the use of a performance management system affects overall organizational performance. Some researchers find that performance management enhances both the financial and non-financial results of an organization, while others do not find any positive effects or, at most, ambiguous effects. An important step toward getting more clarity in this relationship is to investigate the role performance management plays in creating and maintaining a high-performance organization (HPO). The purpose of this study is to integrate performance management analysis (PMA) and high-performance organization (HPO). A questionnaire combining questions on PMA dimensions and HPO factors was administered to two European-based multinational firms. Based on 468 valid questionnaires, a correlation analysis was performed on the PMA dimensions and the HPO factors in order to test the impact of performance management on the factors of high organizational performance. The results show strong and significant correlations between all the PMA dimensions and all the HPO factors, indicating that a performance management system that fosters performance-driven behavior in the organization is of critical importance to strengthen overall financial and non-financial performance.

  11. Development of new high-performance stainless steels

    International Nuclear Information System (INIS)

    Park, Yong Soo

    2002-01-01

    This paper focused on high-performance stainless steels and their development status. The effect of nitrogen addition on super-stainless steel was discussed. Research activities at Yonsei University on austenitic and martensitic high-performance stainless steels and on the next-generation duplex stainless steels were introduced.

  12. vSphere high performance cookbook

    CERN Document Server

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  13. High Burnup Fuel Performance and Safety Research

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend in nuclear fuel development is toward high burnup, high performance fuel with improved economics and safety. Because the fuel performance evaluation code INFRA is patented, and because its superiority in predicting fuel performance was proven through the IAEA CRP FUMEX-II program, the code can be utilized commercially in industry. INFRA has been provided to and used by universities and relevant institutes domestically, and it has served as a reference code in industry for the development of the intrinsic fuel rod design code.

  14. Danish High Performance Concretes

    DEFF Research Database (Denmark)

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...... concretes, workability, ductility, and confinement problems....

  15. Computer programs for capital cost estimation, lifetime economic performance simulation, and computation of cost indexes for laser fusion and other advanced technology facilities

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1978-01-01

    Three FORTRAN programs, CAPITAL, VENTURE, and INDEXER, have been developed to automate computations used in assessing the economic viability of proposed or conceptual laser fusion and other advanced-technology facilities, as well as conventional projects. The types of calculations performed by these programs are, respectively, capital cost estimation, lifetime economic performance simulation, and computation of cost indexes. The codes permit these three topics to be addressed with considerable sophistication commensurate with user requirements and available data.
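
    The cost-index computation automated by INDEXER rests on the standard escalation relation cost2 = cost1 x (index2 / index1). The fragment below is a minimal sketch of that arithmetic only; it is not an excerpt of the CAPITAL, VENTURE, or INDEXER codes, and every number in it is invented for illustration.

      ! Minimal sketch of cost-index escalation (illustrative values only;
      ! not the actual Los Alamos programs).
      program cost_index_demo
        implicit none
        real :: base_cost, base_index, target_index, escalated_cost
        base_cost    = 1.0e6    ! capital cost estimated in the base year
        base_index   = 100.0    ! cost index for the base year
        target_index = 135.0    ! cost index for the year of interest
        ! cost2 = cost1 * (index2 / index1)
        escalated_cost = base_cost * (target_index / base_index)
        print '(a,f14.2)', 'Escalated cost: ', escalated_cost
      end program cost_index_demo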

  16. Promising high monetary rewards for future task performance increases intermediate task performance.

    Directory of Open Access Journals (Sweden)

    Claire M Zedelius

    In everyday life contexts and work settings, monetary rewards are often contingent on future performance. Based on research showing that the anticipation of rewards causes improved task performance through enhanced task preparation, the present study tested the hypothesis that the promise of monetary rewards for future performance would not only increase future performance, but also performance on an unrewarded intermediate task. Participants performed an auditory Simon task in which they responded to two consecutive tones. While participants could earn high vs. low monetary rewards for fast responses to every second tone, their responses to the first tone were not rewarded. Moreover, we compared performance under conditions in which reward information could prompt strategic performance adjustments (i.e., when reward information was presented for a relatively long duration) to conditions preventing strategic performance adjustments (i.e., when reward information was presented very briefly). Results showed that high (vs. low) rewards sped up both rewarded and intermediate, unrewarded responses, and the effect was independent of the duration of reward presentation. Moreover, long presentation led to a speed-accuracy trade-off for both rewarded and unrewarded tones, whereas short presentation sped up responses to rewarded and unrewarded tones without this trade-off. These results suggest that high rewards for future performance boost intermediate performance due to enhanced task preparation, and they do so regardless whether people respond to rewards in a strategic or non-strategic manner.

  17. Promising high monetary rewards for future task performance increases intermediate task performance.

    Science.gov (United States)

    Zedelius, Claire M; Veling, Harm; Bijleveld, Erik; Aarts, Henk

    2012-01-01

    In everyday life contexts and work settings, monetary rewards are often contingent on future performance. Based on research showing that the anticipation of rewards causes improved task performance through enhanced task preparation, the present study tested the hypothesis that the promise of monetary rewards for future performance would not only increase future performance, but also performance on an unrewarded intermediate task. Participants performed an auditory Simon task in which they responded to two consecutive tones. While participants could earn high vs. low monetary rewards for fast responses to every second tone, their responses to the first tone were not rewarded. Moreover, we compared performance under conditions in which reward information could prompt strategic performance adjustments (i.e., when reward information was presented for a relatively long duration) to conditions preventing strategic performance adjustments (i.e., when reward information was presented very briefly). Results showed that high (vs. low) rewards sped up both rewarded and intermediate, unrewarded responses, and the effect was independent of the duration of reward presentation. Moreover, long presentation led to a speed-accuracy trade-off for both rewarded and unrewarded tones, whereas short presentation sped up responses to rewarded and unrewarded tones without this trade-off. These results suggest that high rewards for future performance boost intermediate performance due to enhanced task preparation, and they do so regardless whether people respond to rewards in a strategic or non-strategic manner.

  18. Comparison of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography for the separation of synthetic cathinones.

    Science.gov (United States)

    Carnes, Stephanie; O'Brien, Stacey; Szewczak, Angelica; Tremeau-Cayel, Lauriane; Rowe, Walter F; McCord, Bruce; Lurie, Ira S

    2017-09-01

    A comparison of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography for the separation of synthetic cathinones has been conducted. Nine different mixtures of bath salts were analyzed in this study. The three different chromatographic techniques were examined using a general set of controlled synthetic cathinones as well as a variety of other synthetic cathinones that exist as positional isomers. Overall 35 different synthetic cathinones were analyzed. A variety of column types and chromatographic modes were examined for developing each separation. For the ultra high performance supercritical fluid chromatography separations, analyses were performed using a series of Torus and Trefoil columns with either ammonium formate or ammonium hydroxide as additives, and methanol, ethanol or isopropanol organic solvents as modifiers. Ultra high performance liquid chromatographic separations were performed in both reversed phase and hydrophilic interaction chromatographic modes using SPP C18 and SPP HILIC columns. Gas chromatography separations were performed using an Elite-5MS capillary column. The orthogonality of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography was examined using principal component analysis. For the best overall separation of synthetic cathinones, the use of ultra high performance supercritical fluid chromatography in combination with gas chromatography is recommended. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  20. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and micro-processor based systems, the book allows a comparison of the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook to assess the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  1. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    Science.gov (United States)

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  2. High Performance Walls in Hot-Dry Climates

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, Marc [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Springer, David [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Dakin, Bill [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; German, Alea [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist, and reducing the amount of wood penetrating the wall cavity.

  3. High-performance liquid chromatography of oligoguanylates at high pH

    Science.gov (United States)

    Stribling, R.; Deamer, D. (Principal Investigator)

    1991-01-01

    Because of the stable self-structures formed by oligomers of guanosine, standard high-performance liquid chromatography techniques for oligonucleotide fractionation are not applicable. Previously, oligoguanylate separations have been carried out at pH 12 using RPC-5 as the packing material. While RPC-5 provides excellent separations, there are several limitations, including the lack of a commercially available source. This report describes a new anion-exchange high-performance liquid chromatography method using HEMA-IEC BIO Q, which successfully separates different forms of the guanosine monomer as well as longer oligoguanylates. The reproducibility and stability at high pH suggest a versatile role for this material.

  4. Development of high-performance concrete having high resistance to chloride penetration

    International Nuclear Information System (INIS)

    Oh, Byung Hwan; Cha, Soo Won; Jang, Bong Seok; Jang, Seung Yup

    2002-01-01

    The resistance to chloride penetration is one of the simplest measures to determine the durability of concrete, e.g. resistance to freezing and thawing, corrosion of steel in concrete and other chemical attacks. Thus, high-performance concrete may be defined as concrete having high resistance to chloride penetration as well as high strength. The purpose of this paper is to investigate the resistance to chloride penetration of different types of concrete and to develop high-performance concrete that has very high resistance to chloride penetration, and thus, can guarantee high durability. A large number of concrete specimens have been tested by the rapid chloride permeability test method as designated in AASHTO T 277 and ASTM C 1202. The major test variables include water-to-binder ratios, type of cement, type and amount of mineral admixtures (silica fume, fly ash and blast-furnace slag), maximum size of aggregates and air-entrainment. Test results show that concrete containing an optimal amount of silica fume shows very high resistance to chloride penetration, and the high-performance concrete developed in this study can be efficiently employed to enhance the durability of concrete structures in severe environments such as nuclear power plants, water-retaining structures and other offshore structures.

  5. Identifying High Performance ERP Projects

    OpenAIRE

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  6. p53 dependent apoptotic cell death induces embryonic malformation in Carassius auratus under chronic hypoxia.

    Directory of Open Access Journals (Sweden)

    Paramita Banerjee Sawant

    Hypoxia is a global phenomenon affecting recruitment as well as the embryonic development of aquatic fauna. The present study depicts hypoxia-induced disruption of the intrinsic pathway of programmed cell death (PCD), leading to embryonic malformation in the goldfish, Carassius auratus. Constant hypoxia induced the early expression of pro-apoptotic/tumor suppressor p53 and concomitant expression of the cell death molecule, caspase-3, leading to a high level of DNA damage and cell death in hypoxic embryos, as compared to normoxic ones. As a result, the former showed delayed 4- and 64-celled stages and a delay in the appearance of the epiboly stage. Expression of p53 efficiently switched off expression of the anti-apoptotic Bcl-2 during the initial 12 hours post fertilization (hpf) and caused embryonic cell death. However, after 12 hours, simultaneous downregulation of p53 and Caspase-3 and an exponential increase of Bcl-2 caused uncontrolled cell proliferation and prevented essential programmed cell death (PCD), ultimately resulting in significant (p < 0.05) embryonic malformation up to 144 hpf. Evidence suggests that uncontrolled cell proliferation after 12 hpf may have been due to downregulation of p53 abundance, which in turn has an influence on upregulation of anti-apoptotic Bcl-2. Therefore, we have been able to show for the first time and propose that hypoxia-induced downregulation of p53 beyond 12 hpf disrupts PCD and leads to failure in normal differentiation, causing malformation in goldfish embryos.

  7. The Cost of being Object-Oriented: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Zoran Budimlić

    1999-01-01

    Since the introduction of the Java programming language, there has been widespread interest in the use of Java for high-performance scientific computing. One major impediment to such use is the performance penalty paid relative to Fortran. To support our research on overcoming this penalty through compiler technology, we have developed a benchmark suite, called OwlPack, which is based on the popular LINPACK library. Although there are existing implementations of LINPACK in Java, most of these are produced by direct translation from Fortran. As such they do not reflect the style of programming that a good object-oriented programmer would use in Java. Our goal is to investigate how to make object-oriented scientific programming practical. Therefore we developed two object-oriented versions of LINPACK in Java, a true polymorphic version and a “Lite” version designed for higher performance. We used these libraries to perform a detailed performance analysis using several leading Java compilers and virtual machines, comparing the performance of the object-oriented versions of the benchmark with a version produced by direct translation from Fortran. Although Java implementations have made great strides, they still fall short on programs that use the full power of Java’s object-oriented features. Our ultimate goal is to drive research on compiler technology that will reward, rather than penalize, good object-oriented programming practice.
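
    For orientation, the fragment below is a simplified, unit-stride rendering of the DAXPY kernel on which LINPACK builds. It is shown only to illustrate the procedural Fortran style against which the object-oriented OwlPack versions are compared; it is not code from the OwlPack suite itself.

      ! Simplified unit-stride DAXPY (dy := da*dx + dy), the style of kernel
      ! that a direct Fortran-to-Java translation of LINPACK preserves.
      program daxpy_demo
        implicit none
        integer, parameter :: n = 5
        double precision :: x(n), y(n)
        x = 1.0d0
        y = 2.0d0
        call daxpy_simple(n, 3.0d0, x, y)
        print *, y   ! expect five values of 5.0
      contains
        subroutine daxpy_simple(n, da, dx, dy)
          integer, intent(in) :: n
          double precision, intent(in) :: da, dx(n)
          double precision, intent(inout) :: dy(n)
          integer :: i
          if (n <= 0 .or. da == 0.0d0) return
          do i = 1, n
             dy(i) = dy(i) + da*dx(i)
          end do
        end subroutine daxpy_simple
      end program daxpy_demo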

  8. Radionuclide transport in fractured rock: quantifying releases from final disposal of high level waste

    International Nuclear Information System (INIS)

    Silveira, Claudia S. da; Alvim, Antonio C.M.

    2013-01-01

    Crystalline rock has been considered a potentially suitable host for a high-level radioactive waste (HLW) repository because it is found in very stable geological formations and may have very low permeability. In this study the adopted physical system consists of a rock matrix containing a discrete horizontal fracture in water-saturated porous rock and a system of vertical fractures forming a lineament. Transport in the fractures, horizontal and vertical, is assumed to obey a convection-diffusion relation, while molecular diffusion is considered the dominant transport mechanism in the porous rock. The decay chain is included in the model. We use a Fortran 90 code in which the partial differential equations that describe the movement of radionuclides are discretized by finite difference methods, with a fully implicit scheme for the temporal discretization. The simulation was performed with data for nuclides relevant to spent fuel performance assessment in a hypothetical repository, thus quantifying the radionuclides released into the host rock. (author)
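
    A minimal sketch of the fully implicit finite-difference idea described above is given below, reduced to one-dimensional diffusion with radioactive decay of a single nuclide (the paper treats a full decay chain and coupled fracture-matrix transport). All parameter values are invented for illustration, and the code is not the authors' Fortran 90 program.

      ! Fully implicit finite differences for dC/dt = D d2C/dx2 - lambda*C,
      ! solved each time step with the Thomas algorithm. Illustrative only.
      program implicit_diffusion_decay
        implicit none
        integer, parameter :: nx = 101, nsteps = 1000
        real(8), parameter :: length = 10.0d0      ! domain size, m (illustrative)
        real(8), parameter :: diff   = 1.0d-9      ! diffusion coefficient, m^2/s
        real(8), parameter :: lam    = 7.6d-10     ! decay constant, 1/s
        real(8), parameter :: dt     = 3.15d7      ! time step, s (about one year)
        real(8) :: dx, r, c(nx), a(nx), b(nx), cc(nx), d(nx)
        integer :: n

        dx = length / (nx - 1)
        r  = diff * dt / dx**2
        c  = 0.0d0
        c(1) = 1.0d0                 ! fixed concentration at the fracture wall

        do n = 1, nsteps
           ! Tridiagonal system: a = sub-, b = main, cc = super-diagonal, d = rhs.
           a = -r;  b = 1.0d0 + 2.0d0*r + lam*dt;  cc = -r;  d = c
           ! Dirichlet boundaries: c(1) held at 1, c(nx) held at 0.
           a(1)  = 0.0d0; b(1)  = 1.0d0; cc(1)  = 0.0d0; d(1)  = 1.0d0
           a(nx) = 0.0d0; b(nx) = 1.0d0; cc(nx) = 0.0d0; d(nx) = 0.0d0
           call thomas(a, b, cc, d, c, nx)
        end do
        print '(a,es12.4)', 'Concentration at mid-domain: ', c((nx+1)/2)

      contains

        subroutine thomas(a, b, c, d, x, n)
          ! Thomas algorithm for a tridiagonal linear system.
          integer, intent(in) :: n
          real(8), intent(in) :: a(n), b(n), c(n), d(n)
          real(8), intent(out) :: x(n)
          real(8) :: cp(n), dp(n), m
          integer :: i
          cp(1) = c(1) / b(1)
          dp(1) = d(1) / b(1)
          do i = 2, n
             m = b(i) - a(i)*cp(i-1)
             cp(i) = c(i) / m
             dp(i) = (d(i) - a(i)*dp(i-1)) / m
          end do
          x(n) = dp(n)
          do i = n-1, 1, -1
             x(i) = dp(i) - cp(i)*x(i+1)
          end do
        end subroutine thomas

      end program implicit_diffusion_decay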

  9. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
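
    The constructs named above (Fortran 90 array syntax, the FORALL statement, the WHERE construct, and data distribution directives) can be sketched as follows. The directive spellings below use standard HPF syntax, which PDDP's own directive set may not match exactly, and the arrays are purely illustrative.

      ! Data-parallel sketch: distribution directives plus array syntax,
      ! FORALL and WHERE. The directives are comments to a serial compiler.
      program pddp_style_demo
        implicit none
        integer, parameter :: n = 1000
        real :: a(n), b(n)
        integer :: i
      !HPF$ DISTRIBUTE a(BLOCK)
      !HPF$ ALIGN b(i) WITH a(i)
        a = 0.0                          ! whole-array assignment
        forall (i = 1:n) b(i) = real(i)  ! independent elemental assignments
        where (b > 500.0)                ! masked array assignment
           a = b - 500.0
        elsewhere
           a = 0.0
        end where
        print *, 'sum(a) =', sum(a)
      end program pddp_style_demo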

  10. Development of INFRA graphic user interface

    International Nuclear Information System (INIS)

    Yang, Y. S.; Lee, C. B.; Kim, Y. M.; Kim, D. H.; Kim, S. K.

    2004-01-01

    A GUI (Graphic User Interface) has been developed for the high burnup fuel performance code INFRA. INFRA is written in the Fortran programming language and was developed with Compaq Visual Fortran 6.5. Graphical input and output interfaces have been developed using Visual Basic and MDB, which are among the most widely used programming language and database tools for Windows application development. The various input parameters required for an INFRA calculation can be entered more conveniently through the newly developed input interface. Without any additional data handling, INFRA calculation results can be examined intuitively through 2D or 3D graphs on screen and an animation function.

  11. Integrated plasma control for high performance tokamaks

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  12. Highlighting High Performance: Blackstone Valley Regional Vocational Technical High School; Upton, Massachusetts

    Energy Technology Data Exchange (ETDEWEB)

    2006-10-01

    This brochure describes the key high-performance building features of the Blackstone Valley High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar energy, building envelope, heating and cooling systems, and water conservation. Energy cost savings are also discussed.

  13. Ground Glass Pozzolan in Conventional, High, and Ultra-High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Tagnit-Hamou Arezki

    2018-01-01

    Ground-glass pozzolan (G), obtained by grinding mixed-waste glass to the same fineness as cement, can act as a supplementary cementitious material (SCM), given that it is an amorphous and pozzolanic material. G showed promising performance in different concrete types such as conventional concrete (CC), high-performance concrete (HPC), and ultra-high performance concrete (UHPC). The current paper reports on the characteristics and performance of G in these concrete types. The use of G provides several advantages (technological, economical, and environmental). It reduces the production cost of concrete and decreases the carbon footprint of traditional concrete structures. The rheology of fresh concrete can be improved by the replacement of cement with non-absorptive glass particles. Strength and rigidity improvements in concrete containing G are due to the glass particles acting as inclusions of very high strength and elastic modulus, which strengthen the overall hardened matrix.

  14. Moving domain computational fluid dynamics to interface with an embryonic model of cardiac morphogenesis.

    Directory of Open Access Journals (Sweden)

    Juhyun Lee

    Peristaltic contraction of the embryonic heart tube produces time- and spatially-varying wall shear stress (WSS) and pressure gradients (∇P) across the atrioventricular (AV) canal. Zebrafish (Danio rerio) are a genetically tractable system to investigate cardiac morphogenesis. The use of Tg(fli1a:EGFP)y1 transgenic embryos allowed for delineation and two-dimensional reconstruction of the endocardium. This time-varying wall motion was then prescribed in a two-dimensional moving domain computational fluid dynamics (CFD) model, providing new insights into spatial and temporal variations in WSS and ∇P during cardiac development. The CFD simulations were validated with particle image velocimetry (PIV) across the atrioventricular (AV) canal, revealing an increase in both velocities and heart rates, but a decrease in the duration of atrial systole from early to later stages. At 20-30 hours post fertilization (hpf), simulation results revealed bidirectional WSS across the AV canal in the heart tube in response to peristaltic motion of the wall. At 40-50 hpf, the tube structure undergoes cardiac looping, accompanied by a nearly 3-fold increase in WSS magnitude. At 110-120 hpf, distinct AV valve, atrium, ventricle, and bulbus arteriosus form, accompanied by incremental increases in both WSS magnitude and ∇P, but a decrease in bi-directional flow. Laminar flow develops across the AV canal at 20-30 hpf, and persists at 110-120 hpf. Reynolds numbers at the AV canal increase from 0.07±0.03 at 20-30 hpf to 0.23±0.07 at 110-120 hpf (p < 0.05, n = 6), whereas Womersley numbers remain relatively unchanged from 0.11 to 0.13. Our moving domain simulations highlight hemodynamic changes in relation to cardiac morphogenesis, thereby providing a 2-D quantitative approach to complement imaging analysis.

  15. Amino-modified polystyrene nanoparticles affect signalling pathways of the sea urchin (Paracentrotus lividus) embryos.

    Science.gov (United States)

    Pinsino, Annalisa; Bergami, Elisa; Della Torre, Camilla; Vannuccini, Maria Luisa; Addis, Piero; Secci, Marco; Dawson, Kenneth A; Matranga, Valeria; Corsi, Ilaria

    2017-03-01

    Polystyrene nanoparticles have been shown to pose serious risk to marine organisms including sea urchin embryos based on their surface properties and consequently behaviour in natural sea water. The aim of this study is to investigate the toxicity pathways of amino polystyrene nanoparticles (PS-NH2, 50 nm) in Paracentrotus lividus embryos in terms of development and signalling at both protein and gene levels. Two sub-lethal concentrations of 3 and 4 μg/mL of PS-NH2 were used to expose sea urchin embryos in natural sea water (PS-NH2 as aggregates of 143 ± 5 nm). At 24 and 48 h post-fertilisation (hpf) embryonic development was monitored and variations in the levels of key proteins involved in stress response and development (Hsp70, Hsp60, MnSOD, Phospho-p38 Mapk) as well as the modulation of target genes (Pl-Hsp70, Pl-Hsp60, Pl-Cytochrome b, Pl-p38 Mapk, Pl-Caspase 8, Pl-Univin) were measured. At 48 hpf various striking teratogenic effects were observed such as the occurrence of randomly distributed cells/masses, severe skeletal defects and delayed development. At 24 hpf a significant up-regulation of the Pl-Hsp70, Pl-p38 Mapk, Pl-Univin and Pl-Cas8 genes was found, while at 48 hpf up-regulation was observed only for Pl-Univin. Protein profiles showed a different pattern, with a significant increase of Hsp70 and Hsp60 only at 48 hpf compared to controls. Conversely, P-p38 Mapk protein significantly increased at 24 hpf and decreased at 48 hpf. Our findings highlight that PS-NH2 are able to disrupt sea urchin embryo development by modulating protein and gene profiles, providing new insights into the signalling pathways involved.

  16. Component and system simulation models for High Flux Isotope Reactor

    International Nuclear Information System (INIS)

    Sozer, A.

    1989-08-01

    Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs
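
    As an illustration of what a "generic transport delay (pipes)" component can look like, the sketch below delays an inlet signal by a fixed number of time steps using a ring buffer, so the outlet reproduces the inlet one pipe transit time later. This is a conceptual example only, not the actual HFIR component model; all values are invented.

      ! Conceptual transport-delay (pipe) component: the outlet signal equals
      ! the inlet signal from nbuf time steps earlier. Illustrative only.
      module pipe_delay
        implicit none
        integer, parameter :: nbuf = 200        ! delay, in time steps
        real :: buffer(nbuf) = 0.0
        integer :: head = 1
      contains
        function delay_step(inlet) result(outlet)
          real, intent(in) :: inlet
          real :: outlet
          outlet = buffer(head)                 ! pop the value stored nbuf steps ago
          buffer(head) = inlet                  ! push the current inlet value
          head = head + 1
          if (head > nbuf) head = 1
        end function delay_step
      end module pipe_delay

      program pipe_delay_demo
        use pipe_delay
        implicit none
        integer :: k
        real :: tin, tout
        do k = 1, 400
           tin = merge(350.0, 300.0, k > 50)    ! inlet temperature step at k = 50
           tout = delay_step(tin)
        end do
        print *, 'Outlet after 400 steps:', tout
      end program pipe_delay_demo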

  17. Zebrafish: an exciting model for investigating the spatio-temporal pattern of enteric nervous system development.

    LENUS (Irish Health Repository)

    Doodnath, Reshma

    2012-02-01

    AIM: Recently, the zebrafish (Danio rerio) has been shown to be an excellent model for human paediatric research. Advantages over other models include its small size, externally visually accessible development and ease of experimental manipulation. The enteric nervous system (ENS) consists of neurons and enteric glia. Glial cells permit cell bodies and processes of neurons to be arranged and maintained in a proper spatial arrangement, and are essential in the maintenance of basic physiological functions of neurons. Glial fibrillary acidic protein (GFAP) is expressed in astrocytes, but also expressed outside of the central nervous system. The aim of this study was to investigate the spatio-temporal pattern of GFAP expression in developing zebrafish ENS from 24 h post-fertilization (hpf), using transgenic fish that express green fluorescent protein (GFP). METHODS: Zebrafish embryos were collected from transgenic GFP Tg(GFAP:GFP)(mi2001) adult zebrafish from 24 to 120 hpf, fixed and processed for whole mount immunohistochemistry. Antibodies to Phox2b were used to identify enteric neurons. Specimens were mounted on slides and imaging was performed using a fluorescent laser confocal microscope. RESULTS: GFAP:GFP labelling outside the spinal cord was identified in embryos from 48 hpf. The patterning was intracellular and consisted of elongated profiles that appeared to migrate away from the spinal cord into the periphery. At 72 and 96 hpf, GFAP:GFP was expressed dorsally and ventrally to the intestinal tract. At 120 hpf, GFAP:GFP was expressed throughout the intestinal wall, and clusters of enteric neurons were identified using Phox2b immunofluorescence along the pathway of GFAP:GFP positive processes, indicative of a migratory pathway of ENS precursors from the spinal cord into the intestine. CONCLUSION: The pattern of migration of GFAP:GFP expressing cells outside the spinal cord suggests an organized, early developing migratory pathway to the ENS. This shows for the

  18. Lipid quantitation and metabolomics data from vitamin E-deficient and -sufficient zebrafish embryos from 0 to 120 hours-post-fertilization

    Directory of Open Access Journals (Sweden)

    Melissa McDougall

    2017-04-01

    The data herein are in support of our research article by McDougall et al. (2017) [1], in which we used our zebrafish model of embryonic vitamin E (VitE) deficiency to study the consequences of VitE deficiency during development. Adult 5D wild-type zebrafish (Danio rerio), fed defined diets without (E–) or with VitE (E+, 500 mg RRR-α-tocopheryl acetate/kg diet), were spawned to obtain E– and E+ embryos that we evaluated using metabolomics and specific lipid analyses (each measured at 24, 48, 72, and 120 hours post fertilization, hpf), neurobehavioral development (locomotor responses at 96 hpf), and rescue strategies. Rescues were attempted using micro-injection into the yolksac of VitE (as a phospholipid emulsion containing d6-α-tocopherol) at 0 hpf or of D-glucose (in saline) at 24 hpf.

  19. Mood states and motor performance: a study with high performance volleyball athletes

    Directory of Open Access Journals (Sweden)

    Lenamar Fiorese Vieira

    2008-07-01

    http://dx.doi.org/10.5007/1980-0037.2008v10n1p62 The objective of this research was to investigate the relationship between the sporting performance and mood states of high performance volleyball athletes. Twenty-three adult athletes of both sexes were assessed. The measurement instrument adopted was the POMS questionnaire. Data collection was carried out individually during the state championships. Data were analyzed using descriptive statistics; the Friedman test for analysis of variance and the Mann-Whitney test for differences between means. The results demonstrated that both teams exhibited the mood state profile corresponding to the “iceberg” profile. In the male team, vigor remained constant throughout all phases of the competition, while in the female team this element was unstable. The male team’s fatigue began low, during the training phase, with rates that rose as the competition progressed, with statistically significant differences between the first and last matches the team played. In the female team, the confusion factor, which was at a high level during training, reduced progressively throughout the competition, with a difference that was significant to p ≤ 0.05. With regard to performance and mood profile, the female team exhibited statistically significant differences between the mean vigor and fatigue factors of high and low performance athletes. It is therefore concluded that the mood state profile is a factor that impacts on the motor performance of these high performance teams.

  1. RTOD- RADIAL TURBINE OFF-DESIGN PERFORMANCE ANALYSIS

    Science.gov (United States)

    Glassman, A. J.

    1994-01-01

    The RTOD program was developed to accurately predict radial turbine off-design performance. The radial turbine has been used extensively in automotive turbochargers and aircraft auxiliary power units. It is now being given serious consideration for primary powerplant applications. In applications where the turbine will operate over a wide range of power settings, accurate off-design performance prediction is essential for a successful design. RTOD predictions have already illustrated a potential improvement in off-design performance offered by rotor back-sweep for high-work-factor radial turbines. RTOD can be used to analyze other potential performance enhancing design features. RTOD predicts the performance of a radial turbine (with or without rotor blade sweep) as a function of pressure ratio, speed, and stator setting. The program models the flow with the following: 1) stator viscous and trailing edge losses; 2) a vaneless space loss between the stator and the rotor; and 3) rotor incidence, viscous, trailing-edge, clearance, and disk friction losses. The stator and rotor viscous losses each represent the combined effects of profile, endwall, and secondary flow losses. The stator inlet and exit and the rotor inlet flows are modeled by a mean-line analysis, but a sector analysis is used at the rotor exit. The leakage flow through the clearance gap in a pivoting stator is also considered. User input includes gas properties, turbine geometry, and the stator and rotor viscous losses at a reference performance point. RTOD output includes predicted turbine performance over a specified operating range and any user selected flow parameters. The RTOD program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 100K of 8 bit bytes. The RTOD program was developed in 1983.

  2. High performance leadership in unusually challenging educational circumstances

    Directory of Open Access Journals (Sweden)

    Andy Hargreaves

    2015-04-01

    This paper draws on findings from a study of leadership in high performing organizations in three sectors. Organizations were sampled and included on the basis of high performance in relation to no performance, past performance, performance among similar peers and performance in the face of limited resources or challenging circumstances. The paper concentrates on leadership in four schools that met the sample criteria. It draws connections to explanations of the high performance of Estonia on the OECD PISA tests of educational achievement. The article argues that leadership in these four schools that performed above expectations comprised more than a set of competencies. Instead, leadership took the form of a narrative or quest that pursued an inspiring dream with relentless determination; took improvement pathways that were more innovative than those of comparable peers; built collaboration and community, including with competing schools; and connected short-term success to long-term sustainability.

  3. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  4. High performance flexible CMOS SOI FinFETs

    KAUST Repository

    Fahad, Hossain M.

    2014-06-01

    We demonstrate the first ever CMOS compatible soft etch back based high performance flexible CMOS SOI FinFETs. The move from planar to non-planar FinFETs has enabled continued scaling down to the 14 nm technology node. This has been possible due to the reduction in off-state leakage and reduced short channel effects on account of the superior electrostatic charge control of multiple gates. At the same time, flexible electronics is an exciting expansion opportunity for next generation electronics. However, a fully integrated low-cost system will need to maintain ultra-large-scale-integration density, high performance and reliability - same as today's traditional electronics. Up until recently, this field has been mainly dominated by very weak performance organic electronics enabled by low temperature processes, conducive to low melting point plastics. Now, however, we show the world's highest performing flexible version of 3D FinFET CMOS using a state-of-the-art CMOS compatible fabrication technique for high performance ultra-mobile consumer applications with stylish design. © 2014 IEEE.

  5. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  6. The introspective may achieve more: Enhancing existing Geoscientific models with native-language emulated structural reflection

    Science.gov (United States)

    Ji, Xinye; Shen, Chaopeng

    2018-01-01

    Geoscientific models manage myriad and increasingly complex data structures as trans-disciplinary models are integrated. They often incur significant redundancy with cross-cutting tasks. Reflection, the ability of a program to inspect and modify its structure and behavior at runtime, is known to be a powerful tool to improve code reusability, abstraction, and separation of concerns. Reflection is rarely adopted in high-performance Geoscientific models, especially with Fortran, where it was previously deemed implausible. Practical constraints of language and legacy often limit us to feather-weight, native-language solutions. We demonstrate the usefulness of a structural-reflection-emulating, dynamically linked metaObject, gd. We show real-world examples including data structure self-assembly, effortless input/output (I/O) and upgrade to parallel I/O, recursive actions and batch operations. We share gd and a derived module that reproduces MATLAB-like structure in Fortran and C++. We suggest that both a gd representation and a Fortran-native representation be maintained to access the data, each for separate purposes. Embracing emulated reflection allows generically-written codes that are highly re-usable across projects.
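
    The gd module itself is not reproduced here. As a loose illustration of the kind of feather-weight, native-language emulation the abstract refers to, the Fortran sketch below builds a MATLAB-like "struct" from a derived type with named fields and set/get routines; the names emu_struct, struct_t and field_t are hypothetical and are not the authors' API.

      module emu_struct
        implicit none
        integer, parameter :: maxf = 16

        type :: field_t
           character(len=32) :: name = ''
           real              :: val  = 0.0
        end type field_t

        type :: struct_t                      ! a minimal MATLAB-like "struct"
           type(field_t) :: f(maxf)
           integer       :: n = 0
        contains
           procedure :: set => struct_set
           procedure :: get => struct_get
        end type struct_t

      contains

        subroutine struct_set(self, name, val)
          class(struct_t),  intent(inout) :: self
          character(len=*), intent(in)    :: name
          real,             intent(in)    :: val
          integer :: i
          do i = 1, self%n                    ! overwrite an existing field by name
             if (trim(self%f(i)%name) == name) then
                self%f(i)%val = val
                return
             end if
          end do
          self%n = self%n + 1                 ! otherwise append a new field
          self%f(self%n)%name = name
          self%f(self%n)%val  = val
        end subroutine struct_set

        function struct_get(self, name) result(val)
          class(struct_t),  intent(in) :: self
          character(len=*), intent(in) :: name
          real    :: val
          integer :: i
          val = 0.0                           ! default when the field is absent
          do i = 1, self%n
             if (trim(self%f(i)%name) == name) val = self%f(i)%val
          end do
        end function struct_get

      end module emu_struct

      program demo
        use emu_struct
        implicit none
        type(struct_t) :: s
        call s%set('depth', 1.5)              ! fields are created on first use
        call s%set('porosity', 0.4)
        print *, 'depth =', s%get('depth')
      end program demo

    Resolving (name, value) pairs at runtime trades a little speed for the ability to write generic code that does not need to know field names at compile time, which is the same trade-off that emulated reflection makes.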

  7. Development of a high performance liquid chromatography method ...

    African Journals Online (AJOL)

    Development of a high performance liquid chromatography method for simultaneous ... Purpose: To develop and validate a new low-cost high performance liquid chromatography (HPLC) method for ..... Several papers have reported the use of ...

  8. Toward a theory of high performance.

    Science.gov (United States)

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart-have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  9. Development of high performance cladding materials

    International Nuclear Information System (INIS)

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    The irradiation test for HANA claddings was conducted, and a series of evaluations of next-generation HANA claddings, including their in-pile and out-of-pile performance tests, was also carried out at the Halden research reactor. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as corrosion resistance improved by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step toward the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than the criteria regulated by the NRC. The manufacturing process for strips was established in order to apply HANA alloys, which were originally developed for the claddings, to the spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over four iterations and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high-temperature oxidation resistance compared to the foreign advanced claddings. We established the manufacturing conditions controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps.

  10. Parallel Fortran-MPI software for numerical inversion of the Laplace transform and its application to oscillatory water levels in groundwater environments

    Science.gov (United States)

    Zhan, X.

    2005-01-01

    A parallel Fortran-MPI (Message Passing Interface) software for numerical inversion of the Laplace transform based on a Fourier series method is developed to meet the need of solving intensive computational problems involving the response of oscillatory water levels to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (The Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementation of MPI techniques with a distributed memory architecture speeds up the processing and improves the efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI but who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
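
    The package itself implements TOMS Algorithm 796; the sketch below is not that code, only a minimal Fortran-MPI illustration of the parallel pattern described: each rank evaluates a contiguous block of Fourier-series terms of the inversion sum and MPI_Reduce combines the partial sums on rank 0. The test transform fhat, the truncation length nterms and the scaling parameters are illustrative choices only.

      program laplace_inv_mpi
        ! Minimal sketch (not TOMS Algorithm 796): Fourier-series Laplace inversion
        ! with the series terms block-distributed over MPI ranks.
        use mpi
        implicit none
        integer, parameter  :: dp = kind(1.0d0)      ! assumed double-precision kind
        integer, parameter  :: nterms = 4000
        real(dp), parameter :: t = 1.0_dp, bigt = 8.0_dp, a = 6.0_dp/bigt
        integer  :: ierr, rank, nproc, k, k1, k2
        real(dp) :: pi, partial, total

        call MPI_Init(ierr)
        call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
        call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
        pi = acos(-1.0_dp)

        k1 = rank*nterms/nproc + 1                   ! block distribution of terms
        k2 = (rank + 1)*nterms/nproc

        partial = 0.0_dp
        do k = k1, k2
           partial = partial + real(fhat(cmplx(a, k*pi/bigt, dp)) &
                                    *exp(cmplx(0.0_dp, k*pi*t/bigt, dp)), dp)
        end do

        call MPI_Reduce(partial, total, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                        MPI_COMM_WORLD, ierr)

        if (rank == 0) then
           total = exp(a*t)/bigt*(0.5_dp*real(fhat(cmplx(a, 0.0_dp, dp)), dp) + total)
           print *, 'approx f(t) =', total           ! exact value is exp(-1) for this test
        end if
        call MPI_Finalize(ierr)

      contains
        function fhat(s) result(f)                   ! Laplace-domain image F(s); test transform
          complex(dp), intent(in) :: s
          complex(dp) :: f
          f = 1.0_dp/(s + 1.0_dp)                    ! inverse is exp(-t)
        end function fhat
      end program laplace_inv_mpi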

  11. Morphology and cardiac physiology are differentially affected by temperature in developing larvae of the marine fish mahi-mahi (Coryphaena hippurus

    Directory of Open Access Journals (Sweden)

    Prescilla Perrichon

    2017-06-01

    Full Text Available Cardiovascular performance is altered by temperature in larval fishes, but how acute versus chronic temperature exposures independently affect cardiac morphology and physiology in the growing larva is poorly understood. Consequently, we investigated the influence of water temperature on cardiac plasticity in developing mahi-mahi. Morphological (e.g. standard length, heart angle) and physiological cardiac variables (e.g. heart rate fH, stroke volume, cardiac output) were recorded under two conditions by imaging: (i) under acute temperature exposure, where embryos were reared at 25°C up to 128 h post-fertilization (hpf) and then acutely exposed to 25 (rearing temperature), 27 and 30°C; and (ii) at two rearing (chronic) temperatures of 26 and 30°C, with measurements performed at 32 and 56 hpf. Chronic elevated temperature improved developmental time in mahi-mahi. Heart rates were 1.2–1.4-fold higher under exposure to elevated acute temperatures across development (Q10 ≥ 2.0). Q10 for heart rate in acute exposure was 1.8-fold higher compared to chronic exposure at 56 hpf. At the same stage, stroke volume was temperature independent (Q10 ∼ 1.0). However, larvae displayed higher stroke volume later in development. Cardiac output in developing mahi-mahi is mainly dictated by chronotropic rather than inotropic modulation, is differentially affected by temperature during development and is not linked to metabolic changes.

  12. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. With usage of virtual computing clusters a runtime environment for high performance computing can be efficiently implemented also in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  13. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    Science.gov (United States)

    Muduli, Ashutosh

    2015-01-01

    Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) Climate in mediating the relationship between HPWS and the organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  14. Governance among Malaysian high performing companies

    Directory of Open Access Journals (Sweden)

    Asri Marsidi

    2016-07-01

    Full Text Available Well-performing companies have always been linked with effective governance, which is generally reflected through an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Nowadays, diversity is perceived as able to influence corporate performance due to the likelihood of meeting a variety of needs and demands from diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high performing companies in Malaysia.

  15. Excessive nitrite affects zebrafish valvulogenesis through yielding too much NO signaling.

    Directory of Open Access Journals (Sweden)

    Junbo Li

    Full Text Available Sodium nitrite, a common food additive, exists widely not only in the environment but also in our body. Excessive nitrite causes toxicological effects on human health; however, whether it affects vertebrate heart valve development remains unknown. In vertebrates, developmental defects of cardiac valves usually lead to congenital heart disease. To understand the toxic effects of nitrite on valvulogenesis, we exposed zebrafish embryos to different concentrations of sodium nitrite. Our results showed that sodium nitrite caused developmental defects of the zebrafish heart in a dose-dependent manner. It affected zebrafish heart development starting from 36 hpf (hours post fertilization), when the heart initiates the looping process. Comprehensive analysis of the embryos at 24 hpf and 48 hpf showed that excessive nitrite did not affect blood circulation, vascular network, myocardium and endocardium development. However, development of endocardial cells in the atrioventricular canal (AVC) of the embryos at 48 hpf was disrupted by too much nitrite, leading to defective formation of primitive valve leaflets at 76 hpf. Consistently, excessive nitrite diminished the expression of valve progenitor markers including bmp4, has2, vcana and notch1b at 48 hpf. Furthermore, the level of 3',5'-cyclic guanosine monophosphate (cGMP), downstream of nitric oxide (NO) signaling, was significantly increased in embryos exposed to excessive nitrite, and microinjection of the soluble guanylate cyclase inhibitor ODQ (1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one), an antagonist of NO signaling, into nitrite-exposed embryos could partly rescue the cardiac valve malformation. Taken together, our results show that excessive nitrite affects early valve leaflet formation by producing too much NO signaling.

  16. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    International Nuclear Information System (INIS)

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  17. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Energy Technology Data Exchange (ETDEWEB)

    Kneringer, G; Roedhammer, P; Wildner, H [eds.

    2001-07-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  18. High-Speed, High-Performance DQPSK Optical Links with Reduced Complexity VDFE Equalizers

    Directory of Open Access Journals (Sweden)

    Maki Nanou

    2017-02-01

    Full Text Available Optical transmission technologies optimized for optical network segments sensitive to power consumption and cost comprise modulation formats with direct detection technologies. Specifically, non-return to zero differential quaternary phase shift keying (NRZ-DQPSK) in deployed fiber plants, combined with high-performance, low-complexity electronic equalizers to compensate residual impairments at the receiver end, can prove to be a viable solution for high-performance, high-capacity optical links. Joint processing of the constructive and the destructive signals at the single-ended DQPSK receiver provides improved performance compared to the balanced configuration, however, at the expense of higher hardware requirements, a fact that cannot be neglected, especially in the case of high-speed optical links. To overcome this bottleneck, the use of partially joint constructive/destructive DQPSK equalization is investigated in this paper. Symbol-by-symbol equalization is performed by means of Volterra decision feedback-type equalizers, driven by a reduced subset of signals selected from the constructive and the destructive ports of the optical detectors. The proposed approach offers a low-complexity alternative for electronic equalization, without sacrificing much of the performance compared to the fully-deployed counterpart. The efficiency of the proposed equalizers is demonstrated by means of computer simulation in a typical optical transmission scenario.

  19. Design of JMTR high-performance fuel element

    International Nuclear Information System (INIS)

    Sakurai, Fumio; Shimakawa, Satoshi; Komori, Yoshihiro; Tsuchihashi, Keiichiro; Kaminaga, Fumito

    1999-01-01

    For test and research reactors, core conversion to low-enriched uranium fuel is required from the viewpoint of non-proliferation of nuclear weapon material. Improvements in core performance are also required in order to respond to recent advanced utilization needs. In order to meet both requirements, a high-performance fuel element of high uranium density with Cd wires as burnable absorbers was adopted for the JMTR core conversion to low-enriched uranium fuel. Examination of the adaptability of few-group constants generated by a conventional transport-theory calculation with an isotropic scattering approximation to a few-group diffusion-theory core calculation for design of the JMTR high-performance fuel element made it clear that the depletion of the Cd wires could not be predicted accurately using group constants generated by the conventional method. Therefore, a new method of generating few-group constants that takes into account the incident neutron spectrum at the Cd wire was developed. As a result, the most suitable high-performance fuel element for the JMTR was designed successfully, allowing the operation duration without refueling to be extended to almost twice as long and offering an irradiation field with constant neutron flux. (author)

  20. ELMs IN DIII-D HIGH PERFORMANCE DISCHARGES

    International Nuclear Information System (INIS)

    TURNBULL, A.D; LAO, L.L; OSBORNE, T.H; SAUTER, O; STRAIT, E.J; TAYLOR, T.S; CHU, M.S; FERRON, J.R; GREENFIELD, C.M; LEONARD, A.W; MILLER, R.L; SNYDER, P.B; WILSON, H.R; ZOHM, H

    2003-01-01

    A new understanding of edge localized modes (ELMs) in tokamak discharges is emerging [P.B. Snyder, et al., Phys. Plasmas, 9, 2037 (2002)], in which the ELM is an essentially ideal magnetohydrodynamic (MHD) instability and the ELM severity is determined by the radial width of the linearly unstable MHD kink modes. A detailed, comparative study of the penetration into the core of the respective linear instabilities in a standard DIII-D ELMing, high confinement mode (H-mode) discharge, with that for two relatively high performance discharges shows that these are also encompassed within the framework of the new model. These instabilities represent the key, limiting factor in extending the high performance of these discharges. In the standard ELMing H-mode, the MHD instabilities are highly localized in the outer few percent flux surfaces and the ELM is benign, causing only a small temporary drop in the energy confinement. In contrast, for both a very high confinement mode (VH-mode) and an H-mode with a broad internal transport barrier (ITB) extending over the entire core and coalesced with the edge transport barrier, the linearly unstable modes penetrate well into the mid radius and the corresponding consequences for global confinement are significantly more severe. The ELM accordingly results in an irreversible loss of the high performance

  1. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  2. 3D printed high performance strain sensors for high temperature applications

    Science.gov (United States)

    Rahman, Md Taibur; Moser, Russell; Zbib, Hussein M.; Ramana, C. V.; Panat, Rahul

    2018-01-01

    Realization of high temperature physical measurement sensors, which are needed in many of the current and emerging technologies, is challenging due to the degradation of their electrical stability by drift currents, material oxidation, thermal strain, and creep. In this paper, for the first time, we demonstrate that 3D printed sensors show a metamaterial-like behavior, resulting in superior performance such as high sensitivity, low thermal strain, and enhanced thermal stability. The sensors were fabricated using silver (Ag) nanoparticles (NPs), using an advanced Aerosol Jet based additive printing method followed by thermal sintering. The sensors were tested under cyclic strain up to a temperature of 500 °C and showed a gauge factor of 3.15 ± 0.086, which is about 57% higher than that of those available commercially. The sensor thermal strain was also an order of magnitude lower than that of commercial gages for operation up to a temperature of 500 °C. An analytical model was developed to account for the enhanced performance of such printed sensors based on enhanced lateral contraction of the NP films due to the porosity, a behavior akin to cellular metamaterials. The results demonstrate the potential of 3D printing technology as a pathway to realize highly stable and high-performance sensors for high temperature applications.
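
    For readers unfamiliar with the figure of merit quoted above, the gauge factor of a strain sensor is the relative resistance change divided by the applied strain. The short Fortran sketch below evaluates that standard definition with illustrative numbers, not the paper's measured data.

      program gauge_factor
        ! Gauge factor as conventionally defined: GF = (dR/R0) / strain.
        ! The values below are illustrative only.
        implicit none
        real :: r0, r, strain, gf
        r0 = 100.0          ! unstrained resistance (ohm)
        r  = 100.315        ! resistance under load (ohm)
        strain = 1.0e-3     ! applied strain (dimensionless)
        gf = ((r - r0)/r0)/strain
        print *, 'gauge factor =', gf   ! ~3.15 for these illustrative numbers
      end program gauge_factor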

  3. Characterizing Flow and Suspended Sediment Trends in the Sacramento River Basin, CA Using Hydrologic Simulation Program - FORTRAN (HSPF)

    Science.gov (United States)

    Stern, M. A.; Flint, L. E.; Flint, A. L.; Wright, S. A.; Minear, J. T.

    2014-12-01

    A watershed model of the Sacramento River Basin, CA was developed to simulate streamflow and suspended sediment transport to the San Francisco Bay Delta (SFBD) for fifty years (1958-2008) using the Hydrological Simulation Program - FORTRAN (HSPF). To compensate for the large model domain and sparse data, rigorous meteorological development and characterization of hydraulic geometry were employed to spatially distribute climate and hydrologic processes in unmeasured locations. Parameterization techniques sought to include known spatial information for tributaries such as soil information and slope, and then parameters were scaled up or down during calibration to retain the spatial characteristics of the land surface in un-gaged areas. Accuracy was assessed by comparing model calibration to measured streamflow. Calibration and validation of the Sacramento River ranged from "good" to "very good" performance based upon a "goodness-of-fit" statistical guideline. Simulated sediment loads were underestimated on average by 39% for the Sacramento River, and simulated suspended sediment concentrations were underestimated on average by 22% for the Sacramento River. Sediment loads showed a slight decreasing trend from 1958 to 2008, which was significant (p < 0.0025) in the lower 50% of streamflows. Hypothetical climate change scenarios were developed using the Climate Assessment Tool (CAT). Several wet and dry scenarios coupled with temperature increases were imposed on the historical base conditions to evaluate the sensitivity of streamflow and sediment to potential changes in climate. Wet scenarios showed an increase of 9.7 - 17.5% in streamflow, a 7.6 - 17.5% increase in runoff, and a 30 - 93% increase in sediment loads. The dry scenarios showed a roughly 5% decrease in flow and runoff, and a 16 - 18% decrease in sediment loads. The base hydrology was most sensitive to a temperature increase of 1.5 degrees Celsius and an increase in storm intensity and
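
    The specific goodness-of-fit guideline used in the study is not given in the abstract; as one common statistic used to rate such hydrologic calibrations, the sketch below computes the Nash-Sutcliffe efficiency of simulated against observed streamflow (values near 1 indicate a good fit). It is offered only as an illustration and may differ from the study's own criteria.

      real function nse(nt, obs, sim)
        ! Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)
        implicit none
        integer, intent(in) :: nt
        real, intent(in)    :: obs(nt), sim(nt)   ! observed and simulated series
        real :: obar
        obar = sum(obs)/real(nt)
        nse  = 1.0 - sum((obs - sim)**2)/sum((obs - obar)**2)
      end function nse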

  4. High performance carbon nanocomposites for ultracapacitors

    Science.gov (United States)

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  5. Strategy Guideline: Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system to fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  6. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  7. High Performance Commercial Fenestration Framing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required from commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore have lower performance in terms of being effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the choice material for commercial framing systems and dominates the commercial/architectural fenestration market because of the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing system to improve the energy performance of commercial fenestration system and in turn reduce the energy consumption of commercial building and achieve zero energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial

  8. High-Performance Management Practices and Employee Outcomes in Denmark

    DEFF Research Database (Denmark)

    Cristini, Annalisa; Eriksson, Tor; Pozzoli, Dario

    High-performance work practices are frequently considered to have positive effects on corporate performance, but what do they do for employees? After showing that organizational innovation is indeed positively associated with firm performance, we investigate whether high-involvement work practices...

  9. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  10. High-performance, stretchable, wire-shaped supercapacitors.

    Science.gov (United States)

    Chen, Tao; Hao, Rui; Peng, Huisheng; Dai, Liming

    2015-01-07

    A general approach toward extremely stretchable and highly conductive electrodes was developed. The method involves wrapping a continuous carbon nanotube (CNT) thin film around pre-stretched elastic wires, from which high-performance, stretchable wire-shaped supercapacitors were fabricated. The supercapacitors were made by twisting two such CNT-wrapped elastic wires, pre-coated with poly(vinyl alcohol)/H3PO4 hydrogel, as the electrolyte and separator. The resultant wire-shaped supercapacitors exhibited an extremely high elasticity of up to 350% strain with a high device capacitance up to 30.7 F g(-1), which is two times that of the state-of-the-art stretchable supercapacitor under only 100% strain. The wire-shaped structure facilitated the integration of multiple supercapacitors into a single wire device to meet specific energy and power needs for various potential applications. These supercapacitors can be repeatedly stretched from 0 to 200% strain for hundreds of cycles with no change in performance, thus outperforming all the reported state-of-the-art stretchable electronics. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Effect of Photon Hormesis on Dose Responses to Alpha Particles in Zebrafish Embryos

    Directory of Open Access Journals (Sweden)

    Candy Yuen Ping Ng

    2017-02-01

    Full Text Available Photon hormesis refers to the phenomenon where the biological effect of ionizing radiation with a high linear energy transfer (LET value is diminished by photons with a low LET value. The present paper studied the effect of photon hormesis from X-rays on dose responses to alpha particles using embryos of the zebrafish (Danio rerio as the in vivo vertebrate model. The toxicity of these ionizing radiations in the zebrafish embryos was assessed using the apoptotic counts at 20, 24, or 30 h post fertilization (hpf revealed through acridine orange (AO staining. For alpha-particle doses ≥ 4.4 mGy, the additional X-ray dose of 10 mGy significantly reduced the number of apoptotic cells at 24 hpf, which proved the presence of photon hormesis. Smaller alpha-particle doses might not have inflicted sufficient aggregate damages to trigger photon hormesis. The time gap T between the X-ray (10 mGy and alpha-particle (4.4 mGy exposures was also studied. Photon hormesis was present when T ≤ 30 min, but was absent when T = 60 min, at which time repair of damage induced by alpha particles would have completed to prevent their interactions with those induced by X-rays. Finally, the drop in the apoptotic counts at 24 hpf due to photon hormesis was explained by bringing the apoptotic events earlier to 20 hpf, which strongly supported the removal of aberrant cells through apoptosis as an underlying mechanism for photon hormesis.

  12. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  13. Dairy cow feeding space requirements assessed in a Y-maze choice test.

    Science.gov (United States)

    Rioja-Lang, F C; Roberts, D J; Healy, S D; Lawrence, A B; Haskell, M J

    2012-07-01

    The effect of proximity to a dominant cow on a low-ranking cow's willingness to feed was assessed using choice tests. The main aim of the experiment was to determine the feeding space allowance at which the majority of subordinate cows would choose to feed on high-palatability food (HPF) next to a dominant cow rather than feeding alone on low-palatability food (LPF). Thirty Holstein-Friesian cows were used in the study. Half of the cows were trained to make an association between a black bin and HPF and a white bin and LPF, and the other half were trained with the opposite combination. Observations of pair-wise aggressive interactions were observed during feeding to determine the relative social status of each cow. From this, dominant and subordinate cows were allocated to experimental pairs. When cows had achieved an HPF preference with an 80% success rate in training, they were presented with choices using a Y-maze test apparatus, in which cows were offered choices between feeding on HPF with a dominant cow and feeding on LPF alone. Four different space allowances were tested at the HPF feeder: 0.3, 0.45, 0.6, and 0.75 m. At the 2 smaller space allowances, cows preferred to feed alone (choices between feeding alone or not for 0.3- and 0.45-m tests were significantly different). For the 2 larger space allowances, cows had no significant preferences (number of choices for feeding alone or with a dominant). Given that low-status cows are willing to sacrifice food quality to avoid close contact with a dominant animal, we suggest that the feeding space allowance should be at least 0.6m per cow whenever possible. However, even when space allowances are large, it is clear that some subordinate cows will still prefer to avoid proximity to dominant individuals. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  15. HETFIS: High-Energy Nucleon-Meson Transport Code with Fission

    International Nuclear Information System (INIS)

    Barish, J.; Gabriel, T.A.; Alsmiller, F.S.; Alsmiller, R.G. Jr.

    1981-07-01

    A model that includes fission for predicting particle production spectra from medium-energy nucleon and pion collisions with nuclei (Z greater than or equal to 91) has been incorporated into the nucleon-meson transport code, HETC. This report is primarily concerned with the programming aspects of HETFIS (High-Energy Nucleon-Meson Transport Code with Fission). A description of the program data and instructions for operating the code are given. HETFIS is written in FORTRAN IV for the IBM computers and is readily adaptable to other systems

  16. Multi-language Struct Support in Babel

    Energy Technology Data Exchange (ETDEWEB)

    Ebner, D; Prantl, A; Epperly, T W

    2011-03-22

    Babel is an open-source language interoperability framework tailored to the needs of high-performance scientific computing. As an integral element of the Common Component Architecture (CCA) it is used in a wide range of research projects. In this paper we describe how we extended Babel to support interoperable tuple data types (structs). Structs are a common idiom in scientific APIs; they are an efficient way to pass tuples of nonuniform data between functions, and are supported natively by most programming languages. Using our extended version of Babel, developers of scientific code can now pass structs as arguments between functions implemented in any of the supported languages. In C, C++ and Fortran 2003, structs can be passed without the overhead of data marshaling or copying, providing language interoperability at minimal cost. Other supported languages are Fortran 77, Fortran 90, Java and Python. We will show how we designed a struct implementation that is interoperable with all of the supported languages and present benchmark data comparing the performance of all language bindings, highlighting the differences between languages that offer native struct support and those that use an object-oriented interface with getter/setter methods.
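
    As background to the remark that C, C++ and Fortran 2003 can exchange structs without marshaling, the fragment below shows the native mechanism on the Fortran side: a BIND(C) derived type whose layout matches a C struct, passed by value to a C function through iso_c_binding. This is a generic sketch, not Babel-generated binding code, and the C function norm3d is hypothetical.

      ! Hypothetical C counterpart, for reference:
      !   struct point3d { double x, y, z; int id; };
      !   double norm3d(struct point3d p);
      module point_interop
        use iso_c_binding
        implicit none

        ! BIND(C) gives the derived type the same layout as the C struct, so it
        ! can cross the language boundary without copying or marshaling.
        type, bind(c) :: point3d
           real(c_double) :: x, y, z
           integer(c_int) :: id
        end type point3d

        interface
           function norm3d(p) bind(c, name='norm3d') result(r)
             import :: point3d, c_double
             type(point3d), value :: p
             real(c_double)       :: r
           end function norm3d
        end interface
      end module point_interop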

  17. Mechanical Properties of High Performance Cementitious Grout (II)

    DEFF Research Database (Denmark)

    Sørensen, Eigil V.

    The present report is an update of the report “Mechanical Properties of High Performance Cementitious Grout (I)” [1] and describes tests carried out on the high performance grout MASTERFLOW 9500, marked “WMG 7145 FP”, developed by BASF Construction Chemicals A/S and designed for use in grouted...

  18. A Linux Workstation for High Performance Graphics

    Science.gov (United States)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  19. Autogenic Feedback Training (Body FORTRAN) for Musically Gifted Students at Bonita Vista High School.

    Science.gov (United States)

    Lane, John M.

    1982-01-01

    The Gifted Self-Understanding Assessment Battery (GSAB) was given to 34 (27 females, 7 males) music students (aged 15-17) at Bonita Vista High School in Chula Vista (California). Biofeedback training and assessment were followed by individual counseling for Autogenic Feedback Training (AFT) to achieve improvement of the individual's own well…

  20. The FSE system for crop simulation, version 2.1

    NARCIS (Netherlands)

    Kraalingen, van D.W.G.

    1995-01-01

    A FORTRAN 77 programming environment for continuous simulation of agro-ecological processes, such as crop growth and the calculation of water balances, is presented. This system, called FSE (FORTRAN Simulation Environment), consists of a main program, weather data and utilities for performing specific
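
    The FSE source itself is not shown here; purely as a sketch of the rate/state (Euler) integration loop that such a FORTRAN continuous-simulation environment drives, consider the self-contained program below. All rate equations, coefficients and variable names are illustrative placeholders, not FSE code.

      program crop_sketch
        ! Illustrative only: a daily Euler (rate/state) loop of the kind a
        ! continuous-simulation environment such as FSE would drive.
        implicit none
        real :: tsum, lai, biomass           ! state variables
        real :: rtsum, rlai, rbiom           ! rate variables
        real :: tav, delt
        integer :: day

        tsum = 0.0;  lai = 0.01;  biomass = 1.0
        delt = 1.0                           ! time step of one day

        do day = 1, 120
           tav   = 18.0 + 5.0*sin(6.2832*day/365.0)   ! stand-in for weather input
           rtsum = max(tav - 8.0, 0.0)                ! degree-day accumulation
           rlai  = 0.08*lai*(1.0 - lai/6.0)           ! logistic canopy growth
           rbiom = 20.0*(1.0 - exp(-0.6*lai))         ! light-limited growth

           tsum    = tsum    + rtsum*delt             ! state update (Euler)
           lai     = lai     + rlai*delt
           biomass = biomass + rbiom*delt
        end do
        print *, 'final biomass (illustrative units):', biomass
      end program crop_sketch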

  1. High performance anode for advanced Li batteries

    Energy Technology Data Exchange (ETDEWEB)

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon, which is deposited on a nano-scale carbon filament, to achieve the benefits of high cycle life and high charge capacity without the fading of, or failure in, capacity that results from stress-induced fracturing of the Si particles and their de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  2. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  3. TRIGRS - A Fortran Program for Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis, Version 2.0

    Science.gov (United States)

    Baum, Rex L.; Savage, William Z.; Godt, Jonathan W.

    2008-01-01

    The Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model (TRIGRS) is a Fortran program designed for modeling the timing and distribution of shallow, rainfall-induced landslides. The program computes transient pore-pressure changes, and attendant changes in the factor of safety, due to rainfall infiltration. The program models rainfall infiltration, resulting from storms that have durations ranging from hours to a few days, using analytical solutions for partial differential equations that represent one-dimensional, vertical flow in isotropic, homogeneous materials for either saturated or unsaturated conditions. Use of step-function series allows the program to represent variable rainfall input, and a simple runoff routing model allows the user to divert excess water from impervious areas onto more permeable downslope areas. The TRIGRS program uses a simple infinite-slope model to compute factor of safety on a cell-by-cell basis. An approximate formula for effective stress in unsaturated materials aids computation of the factor of safety in unsaturated soils. Horizontal heterogeneity is accounted for by allowing material properties, rainfall, and other input values to vary from cell to cell. This command-line program is used in conjunction with geographic information system (GIS) software to prepare input grids and visualize model results.
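
    TRIGRS is distributed by the USGS and its source is not reproduced here; the fragment below is only a schematic of the cell-by-cell infinite-slope factor-of-safety evaluation the abstract describes, written as a standalone Fortran function using the standard infinite-slope formula with a pressure-head term. Argument names and units are illustrative.

      real function fs_infinite_slope(c, phi, gamma_s, gamma_w, z, psi, delta)
        ! Schematic only: standard infinite-slope factor of safety used cell by
        ! cell in models of this kind (not the TRIGRS source itself).
        implicit none
        real, intent(in) :: c        ! effective cohesion (Pa)
        real, intent(in) :: phi      ! effective friction angle (rad)
        real, intent(in) :: gamma_s  ! soil unit weight (N/m3)
        real, intent(in) :: gamma_w  ! unit weight of water (N/m3)
        real, intent(in) :: z        ! depth below the ground surface (m)
        real, intent(in) :: psi      ! pressure head at depth z (m), from the infiltration model
        real, intent(in) :: delta    ! slope angle (rad)

        fs_infinite_slope = tan(phi)/tan(delta) &
             + (c - psi*gamma_w*tan(phi)) / (gamma_s*z*sin(delta)*cos(delta))
      end function fs_infinite_slope

    In a grid-based model, the infiltration solution supplies the pressure head psi for each cell and time step, and cells with a factor of safety below 1 are flagged as potentially unstable.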

  4. Calculation of Absorbed Glandular Dose using a FORTRAN Program Based on Monte Carlo X-ray Spectra in Mammography

    Directory of Open Access Journals (Sweden)

    Ali Asghar Mowlavi

    2011-03-01

    Full Text Available Introduction: Average glandular dose calculation in mammography with a Mo-Rh target-filter combination, and dose calculation for different situations, is accurate and fast. Material and Methods: In this research, first of all, x-ray spectra of a Mo target bombarded by a 28 keV electron beam with and without a Rh filter were calculated using the MCNP code. Then, we used the Sobol-Wu parameters to write a FORTRAN code to calculate average glandular dose. Results: Average glandular dose variation was calculated against the voltage of the mammographic x-ray tube for d = 5 cm, HVL = 0.35 mm Al, and different values of g. Also, the results related to average glandular absorbed dose variation per unit roentgen radiation against the glandular fraction of breast tissue for kV = 28 and HVL = 0.400 mm Al and different values of d are presented. Finally, average glandular dose against d for g = 60% and three values of kV (23, 27, 35 kV) with corresponding HVLs has been calculated. Discussion and Conclusion: The absorbed dose computational program is accurate, complete, fast and user friendly. This program can be used for optimization of exposure dose in mammography. Also, the results of this research are in good agreement with the computational results of others.

  5. A Fortran 77 computer code for damped least-squares inversion of Slingram electromagnetic anomalies over thin tabular conductors

    Science.gov (United States)

    Dondurur, Derman; Sarı, Coşkun

    2004-07-01

    A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g. depth, dip and surface projection point of the target. Iteration progress is controlled by a maximum relative error value, and iteration continues until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of squared errors. In order to form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimised are calculated by numerical differentiation using first-order forward finite differences. A theoretical anomaly and two field anomalies are inverted to test the accuracy and applicability of the present inversion program. Inversion of the field data indicated that the depth and surface projection point parameters of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor in the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, and this assumption might have caused incorrect dip estimates in the case of wide conductors.
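
    The published code is not reproduced here; the subroutine below is only a sketch of one damped least-squares (Marquardt) update of the kind the abstract describes, with the Jacobian formed by first-order forward finite differences. The placeholder forward model fmodel is a simple Gaussian-shaped anomaly, not the Slingram response, and all names are illustrative.

      subroutine marquardt_step(m, n, x, d, p, lambda, dp)
        ! One damped least-squares update: solve (J'J + lambda*diag(J'J)) dp = J'r.
        implicit none
        integer, intent(in) :: m, n
        real, intent(in)    :: x(m), d(m)     ! observation points and observed data
        real, intent(in)    :: p(n), lambda   ! current parameters, damping factor
        real, intent(out)   :: dp(n)          ! parameter update
        real :: jac(m,n), r(m), a(n,n), b(n), pp(n), h
        integer :: i, j, k

        do i = 1, m
           r(i) = d(i) - fmodel(x(i), p)                 ! residuals
        end do
        do j = 1, n                                      ! forward-difference Jacobian
           h = max(abs(p(j)), 1.0)*1.0e-4
           pp = p;  pp(j) = p(j) + h
           do i = 1, m
              jac(i,j) = (fmodel(x(i), pp) - fmodel(x(i), p))/h
           end do
        end do
        a = matmul(transpose(jac), jac)                  ! damped normal equations
        do j = 1, n
           a(j,j) = a(j,j) + lambda*a(j,j)               ! Marquardt damping of the diagonal
        end do
        b = matmul(transpose(jac), r)

        do k = 1, n-1                                    ! naive elimination (no pivoting)
           do i = k+1, n
              b(i) = b(i) - a(i,k)/a(k,k)*b(k)
              a(i,k+1:n) = a(i,k+1:n) - a(i,k)/a(k,k)*a(k,k+1:n)
           end do
        end do
        do i = n, 1, -1                                  ! back substitution
           dp(i) = (b(i) - dot_product(a(i,i+1:n), dp(i+1:n)))/a(i,i)
        end do
      contains
        real function fmodel(xi, q)                      ! placeholder forward model
          real, intent(in) :: xi, q(:)
          fmodel = q(1)*exp(-((xi - q(2))/q(3))**2)      ! e.g. a Gaussian-shaped anomaly
        end function fmodel
      end subroutine marquardt_step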

  6. High Performance Proactive Digital Forensics

    International Nuclear Information System (INIS)

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  7. Sex Differences in Mathematics Performance among Senior High ...

    African Journals Online (AJOL)

    This study explored sex differences in mathematics performance of students in the final year of high school and changes in these differences over a 3-year period in Ghana. A convenience sample of 182 students, 109 boys and 72 girls in three high schools in Ghana was used. Mathematics performance was assessed using ...

  8. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  9. Apoptosis-related genes induced in response to ketamine during early life stages of zebrafish.

    Science.gov (United States)

    Félix, Luís M; Serafim, Cindy; Valentim, Ana M; Antunes, Luís M; Matos, Manuela; Coimbra, Ana M

    2017-09-05

    Increasing evidence supports that ketamine, a widely used anaesthetic, potentiates apoptosis during development through the mitochondrial pathway of apoptosis. Defects in the apoptotic machinery can cause or contribute to the developmental abnormalities previously described in ketamine-exposed zebrafish. The involvement of the apoptotic machinery in ketamine-induced teratogenicity was addressed by assessing the apoptotic signals at 8 and 24 hpf following 20min exposure to ketamine at three stages of early zebrafish embryo development (256 cell, 50% epiboly and 1-4 somites stages). Exposure at the 256-cell stage to ketamine induced an up-regulation of casp8 and pcna at 8 hpf while changes in pcna at the mRNA level were observed at 24 hpf. After the 50% epiboly stage exposure, the mRNA levels of casp9 were increased at 8 and 24 hpf while aifm1 was affected at 24 hpf. Both tp53 and pcna expressions were increased at 8 hpf. After exposure during the 1-4 somites stage, no meaningful changes on transcript levels were observed. The distribution of apoptotic cells and the caspase-like enzymatic activities of caspase-3 and -9 were not affected by ketamine exposure. It is proposed that ketamine exposure at the 256-cell stage induced a cooperative mechanism between proliferation and cellular death while following exposure at the 50% epiboly, a p53-dependent and -independent caspase activation may occur. Finally, at the 1-4 somites stage, the defence mechanisms are already fully in place to protect against ketamine-insult. Thus, ketamine teratogenicity seems to be dependent on the functional mechanisms present in each developmental stage. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. High-performance vertical organic transistors.

    Science.gov (United States)

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. High-performance non-volatile organic ferroelectric memory on banknotes

    KAUST Repository

    Khan, Yasser; Bhansali, Unnat Sampatraj; Alshareef, Husam N.

    2012-01-01

    High-performance non-volatile polymer ferroelectric memories are fabricated on banknotes using poly(vinylidene fluoride trifluoroethylene). The devices show excellent performance with high remnant polarization, low operating voltages, low leakage

  12. Computer simulation of steady-state performance of air-to-air heat pumps

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, R D; Creswick, F A

    1978-03-01

    A computer model by which the performance of air-to-air heat pumps can be simulated is described. The intended use of the model is to evaluate analytically the improvements in performance that can be effected by various component improvements. The model is based on a trio of independent simulation programs that originated at the Massachusetts Institute of Technology Heat Transfer Laboratory. The three programs have been combined so that user intervention and decision making between major steps of the simulation are unnecessary. The program was further modified by substituting a new compressor model and adding a capillary tube model, both of which are described. Performance predicted by the computer model is shown to be in reasonable agreement with performance data observed in our laboratory. Planned modifications by which the utility of the computer model can be enhanced in the future are described. User instructions and a FORTRAN listing of the program are included.

  13. High-performance non-volatile organic ferroelectric memory on banknotes

    KAUST Repository

    Khan, Yasser

    2012-03-21

    High-performance non-volatile polymer ferroelectric memories are fabricated on banknotes using poly(vinylidene fluoride trifluoroethylene). The devices show excellent performance with high remnant polarization, low operating voltages, low leakage, high mobility, and long retention times. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. High-performance non-volatile organic ferroelectric memory on banknotes.

    Science.gov (United States)

    Khan, M A; Bhansali, Unnat S; Alshareef, H N

    2012-04-24

    High-performance non-volatile polymer ferroelectric memories are fabricated on banknotes using poly(vinylidene fluoride trifluoroethylene). The devices show excellent performance with high remnant polarization, low operating voltages, low leakage, high mobility, and long retention times. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. High magnetic field induced otolith fusion in the zebrafish larvae.

    Science.gov (United States)

    Pais-Roldán, Patricia; Singh, Ajeet Pratap; Schulz, Hildegard; Yu, Xin

    2016-04-11

    Magnetoreception in animals illustrates the interaction of biological systems with the geomagnetic field (geoMF). However, there are few studies that have identified the impact of high magnetic field (MF) exposure from Magnetic Resonance Imaging (MRI) scanners (>100,000 times the geoMF) on specific biological targets. Here, we investigated the effects of a 14 Tesla MRI scanner on zebrafish larvae. All zebrafish larvae aligned parallel to the B0 field, i.e. the static MF, in the MRI scanner. The two otoliths (ear stones) in the otic vesicles of zebrafish larvae older than 24 hours post fertilization (hpf) fused together after high MF exposure as short as 2 hours, yielding a single-otolith phenotype with aberrant swimming behavior. The otolith fusion was blocked in zebrafish larvae under anesthesia or embedded in agarose. Hair cells may play an important role in the MF-induced otolith fusion. This work provided direct evidence that high MF interacts with the otic vesicle of zebrafish larvae and causes otolith fusion in an "all-or-none" manner. The MF-induced otolith fusion may facilitate the search for MF sensors using genetically amenable vertebrate animal models, such as zebrafish.

  16. THE RELATION OF HIGH-PERFORMANCE WORK SYSTEMS WITH EMPLOYEE INVOLVEMENT

    Directory of Open Access Journals (Sweden)

    Bilal AFSAR

    2010-01-01

    The basic aim of high performance work systems is to enable employees to exercise decision making, leading to flexibility, innovation, improvement and skill sharing. By facilitating the development of high performance work systems we help organizations make continuous improvement a way of life. The notion of a high-performance work system (HPWS) constitutes a claim that there exists a system of work practices for core workers in an organisation that leads in some way to superior performance. This article discusses the relation that HPWS has with the improvement of firms' performance and high involvement of the employees.

  17. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  18. Creating User-Friendly Tools for Data Analysis and Visualization in K-12 Classrooms: A Fortran Dinosaur Meets Generation Y

    Science.gov (United States)

    Chambers, L. H.; Chaudhury, S.; Page, M. T.; Lankey, A. J.; Doughty, J.; Kern, Steven; Rogerson, Tina M.

    2008-01-01

    During the summer of 2007, as part of the second year of a NASA-funded project in partnership with Christopher Newport University called SPHERE (Students as Professionals Helping Educators Research the Earth), a group of undergraduate students spent 8 weeks in a research internship at or near NASA Langley Research Center. Three students from this group formed the Clouds group along with a NASA mentor (Chambers), and the brief addition of a local high school student fulfilling a mentorship requirement. The Clouds group was given the task of exploring and analyzing ground-based cloud observations obtained by K-12 students as part of the Students' Cloud Observations On-Line (S'COOL) Project, and the corresponding satellite data. This project began in 1997. The primary analysis tools developed for it were in FORTRAN, a computer language none of the students were familiar with. While they persevered through computer challenges and picky syntax, it eventually became obvious that this was not the most fruitful approach for a project aimed at motivating K-12 students to do their own data analysis. Thus, about halfway through the summer the group shifted its focus to more modern data analysis and visualization tools, namely spreadsheets and Google(tm) Earth. The result of their efforts, so far, is two different Excel spreadsheets and a Google(tm) Earth file. The spreadsheets are set up to allow participating classrooms to paste in a particular dataset of interest, using the standard S'COOL format, and easily perform a variety of analyses and comparisons of the ground cloud observation reports and their correspondence with the satellite data. This includes summarizing cloud occurrence and cloud cover statistics, and comparing cloud cover measurements from the two points of view. A visual classification tool is also provided to compare the cloud levels reported from the two viewpoints. This provides a statistical counterpart to the existing S'COOL data visualization tool

  19. The Effect of High Performance Work Practice (HPWP) on Job Performance among Bank Frontliners

    OpenAIRE

    Ihdaryanti, Monica Amani; Panggabean, Mutiara S

    2014-01-01

    Generally, High Performance Work Practice (HPWP) is a part of human resource management. The objectives of this research are to obtain and analyze the effects of HPWPs on Job Satisfaction; HPWPs on Organizational Commitment; Job Satisfaction on Organizational Commitment; Job Satisfaction on Job Performance; and Organizational Commitment on Job Performance. The sample in this research consists of 100 respondents working as frontliners at BNI and Mandiri. The result of th...

  20. High performance liquid chromatographic determination of ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-08

    … high performance liquid chromatography (HPLC) grade … applications. These are important requirements if the reagent is to be applicable to on-line pre- or post-column derivatisation in a possible automation of the analytical…

  1. Can Knowledge of the Characteristics of "High Performers" Be Generalised?

    Science.gov (United States)

    McKenna, Stephen

    2002-01-01

    Two managers described as high performing constructed complexity maps of their organization/world. The maps suggested that high performance is socially constructed and negotiated in specific contexts and management competencies associated with it are context specific. Development of high performers thus requires personalized coaching more than…

  2. Comparing Dutch and British high performing managers

    NARCIS (Netherlands)

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  3. FORTRAN Code for Glandular Dose Calculation in Mammography Using Sobol-Wu Parameters

    Directory of Open Access Journals (Sweden)

    Mowlavi A A

    2007-07-01

    Full Text Available Background: Accurate computation of the radiation dose to the breast is essential to mammography. Various the thicknesses of breast, the composition of the breast tissue and other variables affect the optimal breast dose. Furthermore, the glandular fraction, which refers to the composition of the breasts, as partitioned between radiation-sensitive glandular tissue and the adipose tissue, also has an effect on this calculation. Fatty or fibrous breasts would have a lower value for the glandular fraction than dense breasts. Breast tissue composed of half glandular and half adipose tissue would have a glandular fraction in between that of fatty and dense breasts. Therefore, the use of a computational code for average glandular dose calculation in mammography is a more effective means of estimating the dose of radiation, and is accurate and fast. Methods: In the present work, the Sobol-Wu beam quality parameters are used to write a FORTRAN code for glandular dose calculation in molybdenum anode-molybdenum filter (Mo-Mo, molybdenum anode-rhodium filter (Mo-Rh and rhodium anode-rhodium filter (Rh-Rh target-filter combinations in mammograms. The input parameters of code are: tube voltage in kV, half-value layer (HVL of the incident x-ray spectrum in mm, breast thickness in cm (d, and glandular tissue fraction (g. Results: The average glandular dose (AGD variation against the voltage of the mammogram X-ray tube for d = 4 cm, HVL = 0.34 mm Al and g=0.5 for the three filter-target combinations, as well as its variation against the glandular fraction of breast tissue for kV=25, HVL=0.34, and d=4 cm has been calculated. The results related to the average glandular absorbed dose variation against HVL for kV = 28, d=4 cm and g= 0.6 are also presented. The results of this code are in good agreement with those previously reported in the literature. Conclusion: The code developed in this study calculates the glandular dose quickly, and it is complete and

  4. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  5. Edge enhancement algorithm for low-dose X-ray fluoroscopic imaging.

    Science.gov (United States)

    Lee, Min Seok; Park, Chul Hee; Kang, Moon Gi

    2017-12-01

    Low-dose X-ray fluoroscopy has continually evolved to reduce radiation risk to patients during clinical diagnosis and surgery. However, the reduction in dose exposure causes quality degradation of the acquired images. In general, an X-ray device has a time-average pre-processor to remove the generated quantum noise. However, this pre-processor causes blurring and artifacts within the moving edge regions, and noise remains in the image. During high-pass filtering (HPF) to enhance edge detail, this residual noise is amplified. In this study, a 2D edge enhancement algorithm comprising region-adaptive HPF with the transient improvement (TI) method, as well as artifact and noise reduction (ANR), was developed for degraded X-ray fluoroscopic images. The proposed method was applied to a static scene pre-processed by a low-dose X-ray fluoroscopy device. First, the sharpness of the X-ray image was improved using region-adaptive HPF with the TI method, which facilitates sharpening of edge details without overshoot problems. Then, an ANR filter that uses an edge-directional kernel was developed to remove the artifacts and noise that can occur during sharpening, while preserving edge details. The developed method was applied to actual low-dose X-ray fluoroscopic images, and the final images were compared visually and numerically with images improved using conventional edge enhancement techniques. The quantitative and qualitative results indicate that the proposed method outperforms existing edge enhancement methods in terms of objective criteria and subjective visual perception of the actual X-ray fluoroscopic image. The developed edge enhancement algorithm performed well when applied to actual low-dose X-ray fluoroscopic images, not only by improving the sharpness, but also by removing artifacts and noise, including overshoot. Copyright © 2017 Elsevier B.V. All rights reserved.
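
    For readers unfamiliar with the underlying operation, the sketch below shows plain, non-adaptive high-pass sharpening by unsharp masking: a box-smoothed copy is subtracted from the image and the difference is added back with a gain. It is a generic illustration only; the region-adaptive HPF, TI and ANR steps of the paper are not reproduced, and the 3x3 kernel and gain parameter are assumptions.

      subroutine unsharp_mask(img, nx, ny, gain)
        ! Generic high-pass sharpening: img <- img + gain * (img - boxblur(img)).
        implicit none
        integer, intent(in)    :: nx, ny
        real,    intent(inout) :: img(nx, ny)
        real,    intent(in)    :: gain
        real, allocatable :: blur(:, :)
        integer :: i, j

        allocate(blur(nx, ny))
        blur = img
        do j = 2, ny - 1                      ! 3x3 box blur on interior pixels
           do i = 2, nx - 1
              blur(i, j) = sum(img(i-1:i+1, j-1:j+1)) / 9.0
           end do
        end do
        img = img + gain * (img - blur)       ! add back the high-pass component
        deallocate(blur)
      end subroutine unsharp_mask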

  6. High performance light water reactor

    International Nuclear Information System (INIS)

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating at a thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners together with the University of Tokyo is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements:
    - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project.
    - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. A benchmark problem, based on the 'reference design', was defined for neutronics calculations and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  7. Strategy Guideline. High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  8. Improving UV Resistance of High Performance Fibers

    Science.gov (United States)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of the photons of UV light is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to preserve the high strength-to-weight ratio that is the main advantage of high performance fibers. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was noticed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane from polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  9. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  10. High Performance Bulk Thermoelectric Materials

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; growth and field emission property studies of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  11. Performance characterization of solid oxide cells under high pressure

    DEFF Research Database (Denmark)

    Sun, Xiufu; Bonaccorso, Alfredo Damiano; Graves, Christopher R.

    2014-01-01

    … hydrocarbon fuels, which is normally performed at high pressure to achieve a high yield. Operation of SOECs at elevated pressure will therefore facilitate integration with the downstream fuel synthesis and is furthermore advantageous as it increases the cell performance. In this work, recent pressurised test results of a planar Ni-YSZ (YSZ: yttria stabilized zirconia) supported solid oxide cell are presented. The test was performed at 800 °C at pressures up to 15 bar. A comparison of the electrochemical performance of the cell at 1 and 3 bar shows a significant and equal performance gain at higher pressure in both fuel cell mode and electrolysis mode. In electrolysis mode at low current density, the performance improvement was counteracted by the increase in open circuit voltage, but it has to be borne in mind that the pressurised gas contains higher molar free energy. Operating at high current density …

  12. Design practice and operational experience of highly irradiated, high-performance normal magnets

    International Nuclear Information System (INIS)

    Schultz, J.H.

    1982-09-01

    The limitations of high performance magnets are discussed in terms of mechanical, temperature, and electrical limits. The limitations of magnets that are highly irradiated by neutrons, gamma radiation, or x radiation are discussed

  13. Gradient High Performance Liquid Chromatography Method ...

    African Journals Online (AJOL)

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid … nimesulide, phenylephrine hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form.

  14. High performance sapphire windows

    Science.gov (United States)

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  15. Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers

    Science.gov (United States)

    Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe

    2014-01-01

    Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g−1), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance. PMID:25270673

  16. Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers

    Science.gov (United States)

    Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe

    2014-10-01

    Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g-1), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance.

  17. High Performance Design of 100Gb/s DPSK Optical Transmitter

    DEFF Research Database (Denmark)

    Das, Bhagwan; Abdullah, M.F.L; Shah, Nor Shahihda Mohd

    2016-01-01

    … and optical transmitters have taken plenty of time to transmit the signal. When the proposed design is operated at 1 GHz, 5 GHz, 10 GHz and 20 GHz using a time constraint technique, it is observed that, among all these frequencies, high performance output is achieved at 10 GHz for the designed optical transmitter. This high performance design of the optical transmitter has zero timing error, a low timing score and high slack time due to synchronization between the input data and the clock frequency. It is also determined that the timing score is reduced by 99% in comparison with the 1 GHz frequency, which has high jitter, high timing error, a high timing score and low slack time. The high performance design is realized without disturbing the actual bandwidth, power consumption and other parameters of the design. The proposed high performance design of a 100 Gb/s optical transmitter can be used with the existing optical communication system to develop

  18. Urine Concentration and Pyuria for Identifying UTI in Infants.

    Science.gov (United States)

    Chaudhari, Pradip P; Monuteaux, Michael C; Bachur, Richard G

    2016-11-01

    Varying urine white blood cell (WBC) thresholds have been recommended for the presumptive diagnosis of urinary tract infection (UTI) among young infants. These thresholds have not been studied with newer automated urinalysis systems that analyze uncentrifuged urine, which might be influenced by urine concentration. Our objective was to determine the optimal urine WBC threshold for UTI in young infants by using an automated urinalysis system, stratified by urine concentration. This was a retrospective cross-sectional study of young infants evaluated for UTI in the emergency department with paired urinalysis and urine culture. UTI was defined as ≥50 000 colony-forming units/mL from catheterized specimens. Test characteristics were calculated across a range of WBC and leukocyte esterase (LE) cut-points, dichotomized into specific gravity groups (dilute versus concentrated). UTI prevalence was 7.8%. Optimal WBC cut-points were 3 WBC/high-power field (HPF) in dilute urine (likelihood ratio positive [LR+] 9.9, likelihood ratio negative [LR-] 0.15) and 6 WBC/HPF (LR+ 10.1, LR- 0.17) in concentrated urine. For dipstick analysis, positive LE has excellent test characteristics regardless of urine concentration (LR+ 22.1, LR- 0.12 in dilute urine; LR+ 31.6, LR- 0.22 in concentrated urine). Urine concentration should be incorporated into the interpretation of automated microscopic urinalysis in young infants. Pyuria thresholds of 3 WBC/HPF in dilute urine and 6 WBC/HPF in concentrated urine are recommended for the presumptive diagnosis of UTI. Without correction for specific gravity, positive LE by automated dipstick is a reliably strong indicator of UTI. Copyright © 2016 by the American Academy of Pediatrics.

  19. Engineering High-Energy Interfacial Structures for High-Performance Oxygen-Involving Electrocatalysis.

    Science.gov (United States)

    Guo, Chunxian; Zheng, Yao; Ran, Jingrun; Xie, Fangxi; Jaroniec, Mietek; Qiao, Shi-Zhang

    2017-07-10

    Engineering high-energy interfacial structures for high-performance electrocatalysis is achieved by chemical coupling of active CoO nanoclusters and high-index facet Mn3O4 nano-octahedrons (hi-Mn3O4). A thorough characterization, including synchrotron-based near edge X-ray absorption fine structure, reveals that strong interactions between both components promote the formation of high-energy interfacial Mn-O-Co species and high oxidation state CoO, from which electrons are drawn by Mn(III)-O present in hi-Mn3O4. The CoO/hi-Mn3O4 demonstrates an excellent catalytic performance over the conventional metal oxide-based electrocatalysts, which is reflected by 1.2 times higher oxygen evolution reaction (OER) activity than that of Ru/C and a comparable oxygen reduction reaction (ORR) activity to that of Pt/C, as well as a better stability than that of Ru/C (95 % vs. 81 % retained OER activity) and Pt/C (92 % vs. 78 % retained ORR activity after 10 h running) in alkaline electrolyte. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Flexible nanoscale high-performance FinFETs

    KAUST Repository

    Sevilla, Galo T.; Ghoneim, Mohamed T.; Fahad, Hossain M.; Rojas, Jhonathan Prieto; Hussain, Aftab M.; Hussain, Muhammad Mustafa

    2014-01-01

    With the emergence of the Internet of Things (IoT), flexible high-performance nanoscale electronics are increasingly desired. At the moment, the FinFET is the most advanced transistor architecture used in state-of-the-art microprocessors. Therefore, we show

  1. A novel auto-correlation function method and FORTRAN codes for the determination of the decay ratio in BWR stability analysis

    International Nuclear Information System (INIS)

    Behringer, K.

    2001-08-01

    A novel auto-correlation function (ACF) method has been investigated for determining the oscillation frequency and the decay ratio in BWR stability analyses. The report describes not only the method but also documents comprehensively the used and developed FORTRAN codes. The neutron signals are band-pass filtered to separate the oscillation peak in the power spectral density (PSD) from background. Two linear second-order oscillation models are considered. The ACF of each model, corrected for signal filtering and with the inclusion of a background term under the peak in the PSD, is then least-squares fitted to the ACF estimated on the previously filtered neutron signals, in order to determine the oscillation frequency and the decay ratio. The procedures of filtering and ACF estimation use fast Fourier transform techniques with signal segmentation. Gliding 'short-time' ACF estimates along a signal record allow the evaluation of uncertainties. Some numerical results are given which have been obtained from neutron signal data offered by the recent Forsmark I and Forsmark II NEA benchmark project. They are compared with those from other benchmark participants using different other analysis methods. (author)
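
    As background, the decay ratio of a lightly damped oscillation can be estimated from the ACF as the ratio of two successive oscillation maxima. The sketch below computes a normalised sample ACF by direct summation and takes that peak ratio; it is a simplified illustration, not the band-pass filtering and model-fitting procedure documented in the report, and the argument names are assumptions.

      subroutine decay_ratio(y, n, maxlag, dr)
        ! Normalised sample ACF by direct summation, then DR = second peak / first peak.
        implicit none
        integer, intent(in)  :: n, maxlag
        real,    intent(in)  :: y(n)
        real,    intent(out) :: dr
        real    :: acf(0:maxlag), mean, peak1, peak2
        integer :: k, i, npk

        mean = sum(y) / real(n)
        do k = 0, maxlag
           acf(k) = 0.0
           do i = 1, n - k
              acf(k) = acf(k) + (y(i) - mean) * (y(i + k) - mean)
           end do
        end do
        acf = acf / acf(0)

        npk = 0; peak1 = 0.0; peak2 = 0.0
        do k = 1, maxlag - 1                  ! locate the first two local maxima
           if (acf(k) > acf(k - 1) .and. acf(k) > acf(k + 1)) then
              npk = npk + 1
              if (npk == 1) peak1 = acf(k)
              if (npk == 2) then
                 peak2 = acf(k)
                 exit
              end if
           end if
        end do
        dr = 0.0
        if (npk == 2 .and. peak1 > 0.0) dr = peak2 / peak1
      end subroutine decay_ratio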

  2. A novel auto-correlation function method and FORTRAN codes for the determination of the decay ratio in BWR stability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Behringer, K

    2001-08-01

    A novel auto-correlation function (ACF) method has been investigated for determining the oscillation frequency and the decay ratio in BWR stability analyses. The report describes not only the method but also documents comprehensively the used and developed FORTRAN codes. The neutron signals are band-pass filtered to separate the oscillation peak in the power spectral density (PSD) from background. Two linear second-order oscillation models are considered. The ACF of each model, corrected for signal filtering and with the inclusion of a background term under the peak in the PSD, is then least-squares fitted to the ACF estimated on the previously filtered neutron signals, in order to determine the oscillation frequency and the decay ratio. The procedures of filtering and ACF estimation use fast Fourier transform techniques with signal segmentation. Gliding 'short-time' ACF estimates along a signal record allow the evaluation of uncertainties. Some numerical results are given which have been obtained from neutron signal data offered by the recent Forsmark I and Forsmark II NEA benchmark project. They are compared with those from other benchmark participants using different other analysis methods. (author)

  3. Aquatic toxicity assessment of single-walled carbon nanotubes using zebrafish embryos

    Energy Technology Data Exchange (ETDEWEB)

    Pan Huichin; Lin Yujun; Li Mengwei [Department of Biomedical Sciences, Chung Shan Medical University, Taichung 40201, Taiwan (China); Chuang Hanni; Chou Chengchung, E-mail: bioccc@ccu.edu.tw, E-mail: hp29@csmu.edu.tw [Department of Life Science, National Chung Cheng University, Min-Hsiung, 62102 Taiwan (China)

    2011-07-06

    Zebrafish embryos selected at the 64-cell stage were exposed to various concentrations of amide-functionalized single-walled carbon nanotubes (SWCNTs) ranging from 1 to 10 µg/ml dissolved in 1% Pluronic F-68 (a cell culture grade surfactant), and the development of the embryos was examined from 24 to 120 hours post fertilization (hpf). Incubation of embryos in 1% F-68 did not induce an overt abnormal phenotype as compared to the wild-type; neither did it cause significant mortality during the exposure period. Generally, there was a slight developmental delay in larvae treated with SWCNTs at 5 µg/ml or above. Only larvae exposed to ≥5 µg/ml SWCNTs showed significantly reduced survival rates. About 50% of the embryos exposed to 5 µg/ml showed abnormal phenotypes at 24 hpf as compared to the control group. As development proceeded to 120 hpf, more embryos displayed defective morphology. A slight hatching delay was observed in embryos exposed to concentrations above 5 µg/ml. There was a general reduction of the body axes, including narrowed somites and a shortened yolk stalk. In addition, pigmentation in the ventral trunk area was less than that observed in the control group. The body lengths of the exposed embryos were decreased significantly at 48 hpf (3.11 mm in control vs. 3.00 mm in SWCNTs-exposed embryos). However, exposure to SWCNTs did not affect the number of somites. Other features that were noticed in the SWCNTs-exposed embryos included edema as well as shrinkage and blebbing of the epidermal lining. Most of these observed phenotypes persisted from 48 hpf through 120 hpf. Overall, the aforementioned results indicate that soluble amide-functionalized SWCNTs are toxic to zebrafish embryos at a minimum concentration of 5 µg/ml.

  4. PySSM: A Python Module for Bayesian Inference of Linear Gaussian State Space Models

    Directory of Open Access Journals (Sweden)

    Christopher Strickland

    2014-04-01

    PySSM is a Python package that has been developed for the analysis of time series using linear Gaussian state space models. PySSM is easy to use; models can be set up quickly and efficiently and a variety of different settings are available to the user. It also takes advantage of the scientific libraries NumPy and SciPy and other high level features of the Python language. PySSM is also used as a platform for interfacing between optimized and parallelized Fortran routines. These Fortran routines heavily utilize Basic Linear Algebra Subprograms (BLAS) and Linear Algebra PACKage (LAPACK) functions for maximum performance. PySSM contains classes for filtering, classical smoothing, as well as simulation smoothing.
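
    For context, the linear Gaussian state space model handled by such a package can be written, in one common generic notation (not necessarily PySSM's own symbols), as

      x_{t+1} = A x_t + w_t, \quad w_t \sim N(0, Q)
      y_t     = C x_t + v_t, \quad v_t \sim N(0, R)

    where the Kalman filter, the classical smoother and the simulation smoother all operate on this pair of transition and observation equations.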

  5. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At the moment, commercial parallel computer systems with a distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from having to write parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming while achieving high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.
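
    To make the SPMD terminology concrete, the fragment below is a minimal hand-written SPMD Fortran/MPI program in which every process runs the same code on its own block and exchanges boundary values with its neighbours; the MPI_Sendrecv calls are the style of communication statement that a shared-variable precompiler such as SVOPP aims to generate automatically. The example is only illustrative and is not SVOPP output (the verified prototype was a parallel C precompiler).

      program spmd_halo
        use mpi
        implicit none
        integer, parameter :: nloc = 100
        real    :: u(0:nloc+1)              ! local block plus two ghost cells
        integer :: ierr, rank, nprocs, left, right
        integer :: status(MPI_STATUS_SIZE)

        call MPI_Init(ierr)
        call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
        call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

        left  = rank - 1; if (left  < 0)       left  = MPI_PROC_NULL
        right = rank + 1; if (right >= nprocs) right = MPI_PROC_NULL

        u = real(rank)                      ! each process works on its own block (SPMD)

        ! Exchange boundary values with neighbours: the kind of statement a
        ! shared-variable precompiler would insert automatically.
        call MPI_Sendrecv(u(nloc), 1, MPI_REAL, right, 0, &
                          u(0),    1, MPI_REAL, left,  0, &
                          MPI_COMM_WORLD, status, ierr)
        call MPI_Sendrecv(u(1),      1, MPI_REAL, left,  1, &
                          u(nloc+1), 1, MPI_REAL, right, 1, &
                          MPI_COMM_WORLD, status, ierr)

        call MPI_Finalize(ierr)
      end program spmd_halo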

  6. SIOB: a FORTRAN code for least-squares shape fitting several neutron transmission measurements using the Breit--Wigner multilevel formula. [For IBM-360/91

    Energy Technology Data Exchange (ETDEWEB)

    de Saussure, G.; Olsen, D. K.; Perez, R. B.

    1978-05-01

    The FORTRAN-IV code SIOB was developed to least-squares fit the shape of neutron transmission curves. Any number of measurements on a common energy scale for different sample thicknesses can be fitted simultaneously. The computed transmission curves can be broadened with either a Gaussian or a rectangular resolution function or both, with the resolution width a function of energy. The total cross section is expressed as a sum of single-level or multilevel Breit-Wigner terms and Doppler broadened by using the fast interpolation routine QUICKW. The number of data points, resonance levels, and variables which can be handled simultaneously is limited only by the overall dimensions of two arrays in the program and by the stability of the matrix inversion. In a test problem, seven transmissions, each with 3750 data points, were simultaneously fitted with 74 resonances and 110 variable parameters. The problem took 47 min of CPU time on an IBM-360/91 for 3 iterations. 3 figures, 2 tables.
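
    As a reminder of the model being fitted, the sketch below evaluates a transmission curve T(E) = exp(-n*sigma(E)) from a sum of single-level Breit-Wigner resonance terms, with n the sample areal density in atoms/barn. Doppler and resolution broadening, which SIOB applies via QUICKW and convolution, are omitted here, and the argument names are generic rather than SIOB's.

      subroutine transmission(e, ne, nres, e0, sig0, gam, dens, t)
        ! T(E) = exp(-n * sigma(E)), sigma(E) as a sum of single-level
        ! Breit-Wigner terms (no Doppler or resolution broadening in this sketch).
        implicit none
        integer, intent(in)  :: ne, nres
        real,    intent(in)  :: e(ne)              ! energies
        real,    intent(in)  :: e0(nres)           ! resonance energies
        real,    intent(in)  :: sig0(nres)         ! peak cross sections (barn)
        real,    intent(in)  :: gam(nres)          ! total widths
        real,    intent(in)  :: dens               ! areal density (atoms/barn)
        real,    intent(out) :: t(ne)
        real    :: sigma
        integer :: i, r

        do i = 1, ne
           sigma = 0.0
           do r = 1, nres
              sigma = sigma + sig0(r) * (0.25 * gam(r)**2) / &
                      ((e(i) - e0(r))**2 + 0.25 * gam(r)**2)
           end do
           t(i) = exp(-dens * sigma)
        end do
      end subroutine transmission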

  7. Teacher Accountability at High Performing Charter Schools

    Science.gov (United States)

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  8. Improvement of performance of ultra-high performance concrete based composite material added with nano materials

    Directory of Open Access Journals (Sweden)

    Pang Jinchang

    2016-03-01

    Ultra-high performance concrete (UHPC) is a kind of composite material characterized by ultra-high strength, high toughness and high durability. It has a wide application prospect in engineering practice, but the concrete still has some defects, and how to improve the strength and toughness of UHPC remains a target for researchers. To obtain UHPC with better performance, this study introduced nano-SiO2 and nano-CaCO3 into UHPC. Moreover, hydration heat analysis, X-Ray Diffraction (XRD), mercury intrusion porosimetry (MIP) and nanoindentation tests were used to explore the hydration process and microstructure. Double-doped nanomaterials can further enhance the various mechanical performances of the material. Nano-SiO2 can promote the early progress of cement hydration due to its high reaction activity, and C-S-H gel is generated when it reacts with the cement hydration product Ca(OH)2. Nano-CaCO3 mainly plays the role of a crystal nucleus effect and a filling effect. Under the combined action of the two, the composite structure is denser, which provides a way to improve the performance of UHPC in practical engineering.

  9. Performance of high-rate gravel-packed oil wells

    Energy Technology Data Exchange (ETDEWEB)

    Unneland, Trond

    2001-05-01

    Improved methods for the prediction, evaluation, and monitoring of performance in high-rate cased-hole gravel-packed oil wells are presented in this thesis. The ability to predict well performance prior to the gravel-pack operations, evaluate the results after the operation, and monitor well performance over time has been improved. This lifetime approach to performance analysis of gravel-packed oil wells contributes to increase oil production and field profitability. First, analytical models available for prediction of performance in gravel-packed oil wells are reviewed, with particular emphasis on high-velocity flow effects. From the analysis of field data from three North Sea oil fields, improved and calibrated cased-hole gravel-pack performance prediction models are presented. The recommended model is based on serial flow through formation sand and gravel in the perforation tunnels. In addition, new correlations for high-velocity flow in high-rate gravel-packed oil wells are introduced. Combined, this improves the performance prediction for gravel-packed oil wells, and specific areas can be targeted for optimized well design. Next, limitations in the current methods and alternative methods for evaluation and comparison of well performance are presented. The most widely used parameter, the skin factor, remains a convenient and important parameter. However, using the skin concept in direct comparisons between wells with different reservoir properties may result in misleading or even invalid conclusions. A discussion of the parameters affecting the skin value, with a clarification of limitations, is included. A methodology for evaluation and comparison of gravel-packed well performance is presented, and this includes the use of results from production logs and the use of effective perforation tunnel permeability as a parameter. This contributes to optimized operational procedures from well to well and from field to field. Finally, the data sources available for

  10. Designing experimental setup and procedures for studying alpha-particle-induced adaptive response in zebrafish embryos in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Choi, V.W.Y.; Lam, R.K.K.; Chong, E.Y.W. [Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong (Hong Kong); Cheng, S.H. [Department of Biology and Chemistry, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong (Hong Kong); Yu, K.N., E-mail: peter.yu@cityu.edu.h [Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong (Hong Kong)

    2010-03-15

    The present work was devoted to designing the experimental setup and the associated procedures for alpha-particle-induced adaptive response in zebrafish embryos in vivo. Thin PADC films with a thickness of 16 µm were fabricated and employed as support substrates for holding dechorionated zebrafish embryos for alpha-particle irradiation from the bottom through the films. Embryos were collected within 15 min of the start of the light photoperiod, and were then incubated and dechorionated at 4 h post fertilization (hpf). They were then irradiated at 5 hpf by alpha particles using a planar 241Am source with an activity of 0.1151 µCi for 24 s (priming dose), and subsequently at 10 hpf using the same source for 240 s (challenging dose). The levels of apoptosis in irradiated zebrafish embryos at 24 hpf were quantified through staining with the vital dye acridine orange, followed by counting the stained cells under a fluorescent microscope. The results revealed the presence of the adaptive response in zebrafish embryos in vivo, and demonstrated the feasibility of the adopted experimental setup and procedures.

  11. High-Performance, Space-Storable, Bi-Propellant Program Status

    Science.gov (United States)

    Schneider, Steven J.

    2002-01-01

    Bipropellant propulsion systems currently represent the largest bus subsystem for many missions. These missions range from low Earth orbit satellites to geosynchronous communications and planetary exploration. The payoff of high performance bipropellant systems is illustrated by the fact that Aerojet Redmond has qualified a commercial NTO/MMH engine based on the high Isp technology recently delivered by this program. They are now qualifying an NTO/hydrazine version of this engine. The advanced rhenium thrust chambers recently provided by this program have raised the performance of earth storable propellants from 315 sec to 328 sec of specific impulse. The recently introduced rhenium technology is the first new technology introduced to satellite propulsion in 30 years. Typically, the lead time required to develop and qualify new chemical thruster technology is not compatible with program development schedules. These technology development programs must be supported by a long term, Base R&T Program if the technology is to be matured. This technology program then addresses the need for high performance, storable, on-board chemical propulsion for planetary rendezvous and descent/ascent. The primary NASA customer for this technology is Space Science, which identifies this need for such programs as Mars Surface Return, Titan Explorer, Neptune Orbiter, and Europa Lander. High performance (390 sec) chemical propulsion is estimated to add 105% payload to the Mars Sample Return mission or, alternatively, reduce the launch mass by 33%. In many cases, the use of existing (flight heritage) propellant technology is accommodated by reducing mission objectives and/or increasing enroute travel times, sacrificing the science value per unit cost of the program. Therefore, a high performance storable thruster utilizing fluorinated oxidizers with hydrazine is being developed.

  12. GASNet Specification, v1.8.1

    Energy Technology Data Exchange (ETDEWEB)

    Bonachea, Dan; Hargrove, P.

    2017-08-31

    GASNet is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages and libraries such as UPC, UPC++, Co-Array Fortran, Legion, Chapel, and many others. The interface is primarily intended as a compilation target and for use by runtime library writers (as opposed to end users), and the primary goals are high performance, interface portability, and expressiveness. GASNet stands for "Global-Address Space Networking".
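
    As an illustration of the partitioned global address space style that GASNet-backed runtimes support (Co-Array Fortran is one of the clients listed), the fragment below is a minimal standard Fortran 2008 coarray program; the coindexed reads performed on image 1 imply exactly the kind of one-sided communication that a compiler or runtime would map onto a layer such as GASNet. The example is generic and not tied to any particular GASNet client.

      program coarray_sum
        implicit none
        integer :: partial[*]          ! one coarray copy per image (PGAS)
        integer :: total, img

        partial = this_image()         ! each image contributes its own value

        sync all                       ! make all contributions visible
        if (this_image() == 1) then
           total = 0
           do img = 1, num_images()
              total = total + partial[img]   ! one-sided remote reads
           end do
           print *, 'sum over images =', total
        end if
      end program coarray_sum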

  13. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
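
    The record gives no code; as a rough sketch only, the fragment below combines one-sided MPI (a window exposed for RMA, with an MPI_Get inside a fence epoch) with an OpenMP-threaded local update. It conveys the general flavour of a hybrid OpenMP/MPI-RMA model without reproducing WOMBAT's actual design or its thread-scalable RMA optimizations; buffer sizes and names are assumptions.

      program hybrid_rma
        use mpi
        implicit none
        integer, parameter :: n = 1024
        real(8) :: buf(n), remote(n)
        integer :: ierr, rank, nprocs, win, i, provided
        integer(kind=MPI_ADDRESS_KIND) :: winsize, disp

        call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
        call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
        call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

        buf = real(rank, 8)
        winsize = n * 8_MPI_ADDRESS_KIND
        call MPI_Win_create(buf, winsize, 8, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)

        ! One-sided get of the neighbour's buffer inside a fence epoch.
        call MPI_Win_fence(0, win, ierr)
        disp = 0
        call MPI_Get(remote, n, MPI_DOUBLE_PRECISION, mod(rank + 1, nprocs), disp, &
                     n, MPI_DOUBLE_PRECISION, win, ierr)
        call MPI_Win_fence(0, win, ierr)

        ! Thread-parallel local update on the fetched data.
        !$omp parallel do
        do i = 1, n
           remote(i) = 2.0d0 * remote(i)
        end do
        !$omp end parallel do

        call MPI_Win_free(win, ierr)
        call MPI_Finalize(ierr)
      end program hybrid_rma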

  14. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    International Nuclear Information System (INIS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W.; Edmon, P.

    2017-01-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  15. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  16. Control switching in high performance and fault tolerant control

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

    The problem of reliability in high performance control and in fault tolerant control is considered in this paper. A feedback controller architecture for high performance and fault tolerance is considered. The architecture is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. By usi...

  17. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    Science.gov (United States)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization, and dynamic visualization techniques are important for revealing the variation process and trend of a geographical phenomenon vividly and comprehensively. Dealing with the challenges of dynamically visualizing both 2D and 3D spatial targets, especially across different spatial data types, requires a high-performance GIS dynamic objects rendering engine. The main approach for improving a rendering engine that handles vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing, and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented based on hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with very large numbers of dynamic targets. A prototype system of the high-performance GIS dynamic objects rendering engine is developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization show that the engine achieves high performance: rendering two-dimensional and three-dimensional dynamic objects is about 20 times faster on the GPU than on the CPU.

  18. High-performance carbon nanotube-reinforced bioplastic

    CSIR Research Space (South Africa)

    Ramontja, J

    2009-12-01

    Full Text Available. High-Performance Carbon Nanotube-Reinforced Bioplastic. James Ramontja, Suprakas Sinha Ray, Sreejarani K. Pillai, Adriaan S. Luyt. DST/CSIR Nanotechnology Innovation Centre, National Centre for Nano-Structured Materials...

  19. High-Performance Liquid Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Vestal, Marvin L.

    1984-01-01

    Reviews techniques for online coupling of high-performance liquid chromatography with mass spectrometry, emphasizing those suitable for application to nonvolatile samples. Also summarizes the present status, strengths, and weaknesses of various techniques and discusses potential applications of recently developed techniques for combined liquid…

  20. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques, with a focus on the technical aspects and feasibility of auditing issues in federated cloud computing environments. In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  1. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  2. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  3. High-calorie food-cues impair working memory performance in high and low food cravers.

    Science.gov (United States)

    Meule, Adrian; Skirde, Ann Kathrin; Freund, Rebecca; Vögele, Claus; Kübler, Andrea

    2012-10-01

    The experience of food craving can lead to cognitive impairments. Experimentally induced chocolate craving exhausts cognitive resources and, therefore, impacts working memory, particularly in trait chocolate cravers. In the current study, we investigated the effects of exposure to food-cues on working memory task performance in a group with frequent and intense (high cravers, n=28) and less pronounced food cravings (low cravers, n=28). Participants performed an n-back task that contained either pictures of high-calorie sweets, high-calorie savory foods, or neutral objects. Current subjective food craving was assessed before and after the task. All participants showed slower reaction times and made more omission errors in response to food-cues, particularly savory foods. There were no differences in task performance between groups. State cravings did not differ between groups before the task, but increased more in high cravers compared to low cravers during the task. Results support findings about food cravings impairing visuo-spatial working memory performance independent of trait cravings. They further show that this influence is not restricted to chocolate, but also applies to high-calorie savory foods. Limiting working memory capacity may be especially crucial in persons who are more prone to high-calorie food-cues and experience such cravings habitually. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. High-performance noncontact thermal diode via asymmetric nanostructures

    Science.gov (United States)

    Shen, Jiadong; Liu, Xianglei; He, Huan; Wu, Weitao; Liu, Baoan

    2018-05-01

    Electric diodes, though laying the foundation of modern electronics and information processing industries, suffer from ineffectiveness and even failure at high temperatures. Thermal diodes are promising alternatives that can relieve the above limitations, but they usually possess low rectification ratios, and how to obtain a high-performance thermal rectification effect is still an open question. This paper proposes an efficient contactless thermal diode based on the near-field thermal radiation of asymmetric doped-silicon nanostructures. The rectification ratio computed via exact scattering theories is demonstrated to be as high as 10 at a nanoscale gap distance and period, outperforming the counterpart flat-plate diode by more than one order of magnitude. This extraordinary performance mainly lies in the higher forward and lower reverse radiative heat flux within the low-frequency band compared with the counterpart flat-plate diode, which is caused by a lower loss and a smaller cut-off wavevector of the nanostructures for the forward and reversed schemes, respectively. This work opens new routes to realizing high-performance thermal diodes, and may have wide applications in efficient thermal computing, thermal information processing, and thermal management.

  5. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/5524--17-9751. High Performance Computing Modernization Program Kerberos Throughput Test Report. Daniel G. Gdula* and ...

  6. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  7. A new high performance current transducer

    International Nuclear Information System (INIS)

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC-100 kHz current transducer is developed using a new technique based on the zero-flux detection principle. It is shown that the new current transducer offers high performance, that its magnetic core need not be selected very stringently, and that it is easy to manufacture

  8. The developmental effects of pentachlorophenol on zebrafish embryos during segmentation: A systematic view

    Science.gov (United States)

    Xu, Ting; Zhao, Jing; Xu, Zhifa; Pan, Ruijie; Yin, Daqiang

    2016-05-01

    Pentachlorophenol (PCP) is a typical toxicant and prevailing pollutant whose toxicity has been broadly investigated. However, previous studies did not specifically investigate the underlying mechanisms of its developmental toxicity. Here, we chose zebrafish embryos as the model, exposed them to two different concentrations of PCP, and sequenced their entire transcriptomes at 10 and 24 hours post-fertilization (hpf). The sequencing analysis revealed that high concentrations of PCP elicited systematic responses at both time points. By combining the enrichment terms with single genes, the results were further analyzed in three categories: metabolism, transporters, and organogenesis. Hyperactive glycolysis was the most outstanding feature of the transcriptome at 10 hpf. The entire system appeared to be hypoxic, although hypoxia-inducible factor-1α (HIF1α) may have been suppressed by the upregulation of prolyl hydroxylase domain enzymes (PHDs). At 24 hpf, PCP primarily affected somitogenesis and lens formation, probably as a result of the disruption of the embryonic body plan at earlier stages. The proposed underlying toxicological mechanism of PCP is based on the crosstalk between these clues. Our study attempts to describe the developmental toxicity of environmental pollutants from a systematic view. Meanwhile, some features of the gene expression profiling could serve as markers of human health or ecological risk.

  9. High-definition television evaluation for remote handling task performance

    International Nuclear Information System (INIS)

    Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J.V.; Herndon, J.N.

    1986-01-01

    In a plant that employs remote handling techniques for equipment maintenance, operators perform maintenance tasks primarily by using the information from television systems. The efficiency of the television system has a significant impact on remote maintenance task performance. High-definition television (HDTV) transmits a video image with more than twice as many horizontal scan lines as standard-resolution television (1125 for HDTV versus 525 for standard-resolution NTSC television). The added scan lines dramatically improve the resolution of images on the HDTV monitors. This paper describes experiments designed to evaluate the impact of HDTV on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (a) high-definition color television, (b) high-definition monochromatic television, (c) standard-resolution monochromatic television, and (d) standard-resolution stereoscopic monochromatic television

  10. High Performance Work Systems for Online Education

    Science.gov (United States)

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  11. Performance concerns for high duty fuel cycle

    International Nuclear Information System (INIS)

    Esposito, V.J.; Gutierrez, J.E.

    1999-01-01

    One of the goals of the nuclear industry is to achieve economic performance such that nuclear power plants are competitive in a de-regulated market. The manner in which nuclear fuel is designed and operated lies at the heart of economic viability. In this sense, reliability, operating flexibility and low costs are the three major requirements of the NPP today. The translation of these three requirements to the design is part of our work. The challenge today is to produce a fuel design which will operate with long operating cycles, high discharge burnup and power up-rating, while still maintaining all design and safety margins. European Fuel Group (EFG) understands that to achieve the required performance, high duty/energy fuel designs are needed. The concerns for high duty design include, among other items, core design methods, advanced Safety Analysis methodologies, performance models, advanced materials and operational strategies. The operational aspects require the trade-off and evaluation of various parameters, including coolant chemistry control, material corrosion, boiling duty, boron level impacts, etc. In this environment, MAEF is the design that EFG is now offering, based on ZIRLO alloy and a robust skeleton. This new design is able to achieve 70 GWd/tU, and Lead Test Programs are being executed to demonstrate this capability. A number of performance issues which have been a concern with current designs, such as cladding corrosion and incomplete RCCA insertion (IRI), have been resolved. As the core duty becomes more aggressive, other new issues need to be addressed, such as Axial Offset Anomaly. These new issues are being addressed by combining the new design with advanced methodologies to meet the demanding needs of NPPs. This paper discusses the ability and strategy to meet high duty core requirements and flexibility of operation while maintaining an acceptable balance of all technical issues. (authors)

  12. High performance separation of lanthanides and actinides

    International Nuclear Information System (INIS)

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

    The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid, high-performance separations. It is evident from the Van Deemter curve of particle size versus resolution that packing materials with particle sizes of less than 2 μm provide better resolution for high-speed separations and for resolving complex mixtures than 5 μm based supports. In the recent past, monolith-based chromatographic support materials have been studied extensively at our laboratory. A monolith column consists of a single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. Monolith supports provide significantly higher separation efficiency than particle-packed columns. A clear advantage of monoliths is that they can be operated at higher flow rates but with lower back pressure. The higher column permeability allows higher operating flow rates, which drastically reduce analysis time while providing high separation efficiency. The fast separation methods developed above were applied to assay the lanthanides and actinides in the dissolver solutions of nuclear reactor fuels

  13. Persistence of STAT-1 inhibition and induction of cytokine resistance in pancreatic β cells treated with St John's wort and its component hyperforin.

    Science.gov (United States)

    Novelli, Michela; Beffy, Pascale; Gregorelli, Alex; Porozov, Svetlana; Mascia, Fabrizio; Vantaggiato, Chiara; Masiello, Pellegrino; Menegazzi, Marta

    2017-10-09

    St John's wort extract (SJW) and its component hyperforin (HPF) were shown to potently inhibit cytokine-induced STAT-1 and NF-κB activation in pancreatic β cells and to protect them against injury. This study aimed at exploring the time course of the STAT-1 inhibition afforded by these natural compounds in the β-cell line INS-1E. INS-1E cells were pre-incubated with SJW extract (2-5 μg/ml) or HPF (0.5-2 μM) and then exposed to a cytokine mixture. In some experiments, these compounds were added after, or removed before, cytokine exposure. STAT-1 activation was assessed by electrophoretic mobility shift assay, apoptosis by caspase-3 activity assay, and mRNA gene expression by RT-qPCR. Pre-incubation with SJW/HPF for 1-2 h exerted a remarkable STAT-1 downregulation, which was maintained upon removal of the compounds before early or delayed cytokine addition. When the protective compounds were added after cell exposure to cytokines, between 15 and 90 min, STAT-1 inhibition also occurred, to a progressively decreasing extent. Upon 24-h incubation, SJW and HPF counteracted cytokine-induced β-cell dysfunction, apoptosis and target gene expression. SJW and HPF confer on β cells a state of 'cytokine resistance', which can be elicited both before and after cytokine exposure and safeguards these cells from deleterious cytokine effects. © 2017 Royal Pharmaceutical Society.

  14. High rate response of ultra-high-performance fiber-reinforced concretes under direct tension

    Energy Technology Data Exchange (ETDEWEB)

    Tran, Ngoc Thanh [Department of Civil and Environmental Engineering, Sejong University, 98 Gunja-Dong, Gwangjin-Gu, Seoul 143-747 (Korea, Republic of); Tran, Tuan Kiet [Department of Civil and Environmental Engineering, Sejong University, 98 Gunja-Dong, Gwangjin-Gu, Seoul 143-747 (Korea, Republic of); Department of Civil Engineering and Applied Mechanics, Ho Chi Minh City University of Technology and Education, 01 Vo Van Ngan, Thu Duc District, Ho Chi Minh City (Viet Nam); Kim, Dong Joo, E-mail: djkim75@sejong.ac.kr [Department of Civil and Environmental Engineering, Sejong University, 98 Gunja-Dong, Gwangjin-Gu, Seoul 143-747 (Korea, Republic of)

    2015-03-15

    The tensile response of ultra-high-performance fiber-reinforced concretes (UHPFRCs) at high strain rates (5–24 s⁻¹) was investigated. Three types of steel fibers (twisted, long smooth, and short smooth) were added at 1.5% volume content to an ultra-high-performance concrete (UHPC) with a compressive strength of 180 MPa. Tensile specimens with two different cross sections, 25 × 25 and 25 × 50 mm², were used to investigate the effect of the cross-sectional area on the measured tensile response of UHPFRCs. Although all three fibers generated strain-hardening behavior even at high strain rates, long smooth fibers produced the highest tensile resistance at high rates, whereas twisted fibers did so at the static rate. Breakage of twisted fibers, unlike smooth steel fibers, was observed in the specimens tested at high strain rates. The tensile behavior of UHPFRCs at high strain rates was clearly influenced by specimen size, especially the post-cracking strength.

  15. Turbostratic stacked CVD graphene for high-performance devices

    Science.gov (United States)

    Uemura, Kohei; Ikuta, Takashi; Maehashi, Kenzo

    2018-03-01

    We have fabricated turbostratic stacked graphene with high transport properties by the repeated transfer of CVD monolayer graphene. The turbostratic stacked CVD graphene exhibited higher carrier mobility and conductivity than CVD monolayer graphene. The electron mobility of the three-layer turbostratic stacked CVD graphene surpassed 10,000 cm² V⁻¹ s⁻¹ at room temperature, which is five times greater than that of CVD monolayer graphene. The results indicate that the high performance derives from maintenance of the linear band dispersion, suppression of carrier scattering, and parallel conduction. Therefore, turbostratic stacked CVD graphene is a superior material for high-performance devices.

  16. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, achieving good performance required utilizing multiple cores, exploiting the wide vector operations, and making efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
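
    The abstract names the optimizations (multiple cores, wide vector operations, efficient memory use) but includes no code. As a rough, hedged illustration of that kind of restructuring, the following minimal Fortran sketch threads over independent horizontal slices with OpenMP and vectorizes the stride-1 inner loop; the routine name, array layout, and the placeholder "autoconversion" physics are hypothetical and are not taken from the WRF Purdue-Lin source.

      ! Illustrative sketch only: OpenMP threading over independent horizontal
      ! slices plus SIMD vectorization of the contiguous inner loop, the style of
      ! optimization described in the abstract. The physics is a placeholder,
      ! not the Purdue-Lin scheme.
      subroutine microphysics_columns(nx, nz, ny, qv, qc, qr, dt)
        implicit none
        integer, intent(in)    :: nx, nz, ny          ! x points, levels, y points
        real,    intent(inout) :: qv(nx,nz,ny), qc(nx,nz,ny), qr(nx,nz,ny)
        real,    intent(in)    :: dt
        integer :: i, k, j
        real    :: autoconv

        ! No interactions among horizontal grid points, so threads take y-slices.
        !$omp parallel do private(i, k, autoconv) schedule(static)
        do j = 1, ny
           do k = 1, nz
              ! The first index is contiguous in Fortran, so this loop vectorizes.
              !$omp simd private(autoconv)
              do i = 1, nx
                 ! Hypothetical autoconversion of cloud water to rain (placeholder).
                 autoconv = max(0.0, 1.0e-3 * (qc(i,k,j) - 5.0e-4)) * dt
                 qc(i,k,j) = qc(i,k,j) - autoconv
                 qr(i,k,j) = qr(i,k,j) + autoconv
                 qv(i,k,j) = max(qv(i,k,j), 0.0)
              end do
           end do
        end do
        !$omp end parallel do
      end subroutine microphysics_columns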

  17. High performance MEAs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-15

    The aim of the present project is, through modeling and material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials to be used in PEMFC, as well as the computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of the multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence will also result in a reduced catalyst loading needed to achieve the same performance. The consortium has obtained significant research results and progress for new catalyst materials and substrates with promising enhanced performance, and has fabricated the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, which has been demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approach and the progress of the modelling activities have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  18. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.; Mandli, Kyle; Ahmadia, Aron; Alghamdi, Amal; de Luna, Manuel Quezada; Parsani, Matteo; Knepley, Matthew G.; Emmett, Matthew

    2012-01-01

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.

  19. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.

    2012-08-15

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.
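
    The abstract describes the division of labor (Python for the driver logic, automatically wrapped Fortran kernels for the computationally intensive routines) without showing code. As a hedged sketch of the kind of small, self-contained Fortran kernel that such a Python layer could wrap and call, the following hypothetical first-order upwind step for linear advection is illustrative only and is not taken from Clawpack or SharpClaw.

      ! Illustrative sketch only: a small, self-contained Fortran kernel of the
      ! kind a Python driver could wrap automatically and call for the inner loop.
      ! Hypothetical first-order upwind step for q_t + a q_x = 0.
      subroutine upwind_step(n, q, a, dt, dx, qnew)
        implicit none
        integer, intent(in)       :: n
        real(kind=8), intent(in)  :: q(n), a, dt, dx
        real(kind=8), intent(out) :: qnew(n)
        integer :: i

        qnew(1) = q(1)                      ! simple inflow boundary (placeholder)
        do i = 2, n
           ! Upwind difference, assuming a > 0.
           qnew(i) = q(i) - a * dt / dx * (q(i) - q(i-1))
        end do
      end subroutine upwind_step

    Under the usual f2py conventions (an assumption here, since the abstract does not name the wrapping tool), building with "f2py -c -m kernels upwind_step.f90" would expose the routine to Python, with the intent(out) array returned as the result, e.g. qnew = kernels.upwind_step(q, a, dt, dx).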

  20. Performance of the dot product function in radiative transfer code SORD

    Science.gov (United States)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-10-01

    The successive orders of scattering radiative transfer (RT) codes frequently call the scalar (dot) product function. In this paper, we study the performance of several implementations of the dot product in the RT code SORD, using 50 scenarios for light scattering in the atmosphere-surface system. In the dot product function, we use the unrolled-loops technique with different unrolling factors. We also consider the intrinsic Fortran functions. We show results for two machines: the ifort compiler under Windows, and pgf90 under Linux. The intrinsic DOT_PRODUCT function showed the best performance with ifort. With pgf90, the dot product implemented with an unrolling factor of 4 was the fastest. The RT code SORD, together with the interface that runs all the mentioned tests, is publicly available from ftp://maiac.gsfc.nasa.gov/pub/skorkin/SORD_IP_16B (current release) or by email request from the corresponding (first) author.
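
    The abstract names the techniques (manual loop unrolling with different factors versus the Fortran intrinsic) but shows no code. A minimal, self-contained sketch of a dot product unrolled by a factor of 4 next to the intrinsic DOT_PRODUCT might look like the following; the program structure and names are illustrative and are not taken from SORD.

      ! Illustrative sketch only: dot product unrolled by a factor of 4 versus the
      ! Fortran intrinsic DOT_PRODUCT, the two kinds of implementation compared
      ! in the abstract. Not taken from the SORD source.
      program dot_product_unrolled
        implicit none
        integer, parameter :: n = 100003
        real(kind=8) :: x(n), y(n), s_unrolled, s_intrinsic
        integer :: i, m

        call random_number(x)
        call random_number(y)

        ! Unrolled by 4: four products per iteration reduce loop overhead and
        ! expose instruction-level parallelism to the compiler.
        s_unrolled = 0.0d0
        m = n - mod(n, 4)
        do i = 1, m, 4
           s_unrolled = s_unrolled + x(i)   * y(i)   + x(i+1) * y(i+1) &
                                   + x(i+2) * y(i+2) + x(i+3) * y(i+3)
        end do
        do i = m + 1, n          ! remainder loop when n is not a multiple of 4
           s_unrolled = s_unrolled + x(i) * y(i)
        end do

        ! Intrinsic version for comparison.
        s_intrinsic = dot_product(x, y)

        print *, 's_unrolled  = ', s_unrolled
        print *, 's_intrinsic = ', s_intrinsic
      end program dot_product_unrolled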