WorldWideScience

Sample records for hybrid computer program

  1. Near-term hybrid vehicle program, phase 1. Appendix B: Design trade-off studies report. Volume 3: Computer program listings

    Science.gov (United States)

    1979-01-01

    A description and listing is presented of two computer programs: Hybrid Vehicle Design Program (HYVELD) and Hybrid Vehicle Simulation Program (HYVEC). Both of the programs are modifications and extensions of similar programs developed as part of the Electric and Hybrid Vehicle System Research and Development Project.

  2. Petascale computation performance of lightweight multiscale cardiac models using hybrid programming models.

    Science.gov (United States)

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-01-01

    Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI, which is in contrast to our results using complex physiological models. Thus, with regard to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase, as will HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real-time multiscale cardiac simulations on these systems using hybrid programming models.

  3. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    Science.gov (United States)

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.
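
    The two-level pattern compared in these studies can be imitated in miniature with Python's standard library: an outer pool of workers stands in for MPI ranks, and an inner pool inside each "rank" stands in for OpenMP threads. This is a toy sketch of the hybrid structure only (real codes use separate processes and compiled threading runtimes), and every name in it is hypothetical:

```python
# Miniature analogy of a hybrid programming model: an outer pool of
# "ranks" (in real codes: MPI processes) each fans its work out to an
# inner pool of threads (in real codes: OpenMP). Thread pools stand in
# for both levels here so the sketch runs anywhere.
from concurrent.futures import ThreadPoolExecutor

def cell_update(v):
    # Stand-in for one cell's state update in a multiscale model.
    return v * 0.9 + 1.0

def rank_work(chunk, n_threads=2):
    # "OpenMP level": threads share one rank's chunk of the state.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(cell_update, chunk))

def hybrid_step(state, n_ranks=2):
    # "MPI level": scatter the global state across ranks, run each
    # rank's work, then gather the partial results back together.
    chunks = [state[i::n_ranks] for i in range(n_ranks)]
    with ThreadPoolExecutor(max_workers=n_ranks) as pool:
        results = list(pool.map(rank_work, chunks))
    out = [0.0] * len(state)
    for i, res in enumerate(results):
        out[i::n_ranks] = res
    return out
```

    In a real hybrid code the outer level would be MPI processes communicating by message passing; the point here is only the split of one global state across ranks and of each rank's chunk across threads.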

  4. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  5. Advanced Hybrid Computer Systems. Software Technology.

    Science.gov (United States)

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what...automatic patching software is available as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer...compiler software. The problem of how software would interface with the hybrid system is also presented.

  6. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme, as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in the recent years, the time has come for p...

  7. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan;

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme, as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in the recent years, the time has come...... diagnoses. The final result is a short but robust rule based classification scheme, achieving high degree of classification accuracy (exceeding 90% of accuracy for most classes) in a meaningful and user-friendly representation form for the medical expert. The domain of application analyzed through the paper...... is the well-known Pap-Test problem, corresponding to a numerical database, which consists of 450 medical records, 25 diagnostic attributes and 5 different diagnostic classes. Experimental data are divided in two equal parts for the training and testing phase, and 8 mutually dependent rules for diagnosis...

  8. Computer code for intraply hybrid composite design

    Science.gov (United States)

    Chamis, C. C.; Sinclair, J. H.

    1981-01-01

    A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.

  9. A HYBRID DYNAMIC PROGRAM SLICING

    Institute of Scientific and Technical Information of China (English)

    Yi Tong; Wu Fangjun

    2005-01-01

    This letter proposes a hybrid method for computing dynamic program slices. The key element is the construction of a Coverage-Testing-based Dynamic Dependence Graph (CTDDG), which makes use of both dynamic and static information to determine execution status. The approach overcomes a limitation of previous dynamic slicing methods, which must redo the slicing whenever the slice criterion changes.
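
    As a toy illustration of the general idea (not the CTDDG construction of the letter), a backward dynamic slice can be computed from an execution trace by linking each use of a variable to its most recent definition and taking the transitive closure of those dependences:

```python
# Toy backward dynamic slice over an execution trace. Each trace entry
# records (statement id, variable defined, variables used); a dynamic
# data dependence links every use to the most recent definition.
# Illustrative sketch only, not the paper's CTDDG.

def dynamic_slice(trace, criterion_var):
    """Return the set of statement ids in the backward slice."""
    last_def = {}                      # variable -> defining trace index
    deps = [set() for _ in trace]
    for i, (stmt, defined, used) in enumerate(trace):
        for v in used:
            if v in last_def:
                deps[i].add(last_def[v])   # dynamic data dependence
        last_def[defined] = i
    # Walk dependences backwards from the criterion's last definition.
    seen, work = set(), [last_def[criterion_var]]
    while work:
        i = work.pop()
        if i in seen:
            continue
        seen.add(i)
        work.extend(deps[i])
    return {trace[i][0] for i in seen}
```

    For a trace of the statements s1: a=1; s2: b=2; s3: c=a+1; s4: d=b+c, the slice on c is {s1, s3}: statement s2 never influences c, so it is excluded.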

  10. PHOTO: A computer simulation program for photovoltaic and hybrid energy systems. Document and user's guide

    Science.gov (United States)

    Manninen, L. M.; Lund, P. D.; Virkkula, A.

    1990-11-01

    Version 3.0 of the program package PHOTO for the simulation and sizing of hybrid power systems (photovoltaic and wind power plants) on IBM PC, XT, AT, PS/2 and compatibles is described. The minimum memory requirement is 260 kB. Graphical output is created with the HALO'88 graphics subroutine library. In the simulation model, special attention is given to the battery storage unit. A backup generator can also be included in the system configuration. The dynamic method developed uses accurate system component models accounting for component interactions and losses in e.g. wiring and diodes. The photovoltaic array can operate in a maximum power mode or in a clamped voltage mode together with the other subsystems. Various control strategies can also be considered. Individual subsystem models were verified against real measurements. An illustrative simulation example is also discussed. The presented model can be used to simulate various system configurations accurately and to evaluate system performance, such as energy flows and power losses in the photovoltaic array, wind generator, backup generator, wiring, diodes, maximum power point tracking device, inverter and battery. Energy cost is also an important consideration.
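
    The core energy-balance bookkeeping of such a simulation can be sketched in a few lines. All parameters and names below are hypothetical, and the real PHOTO model is far more detailed (wiring and diode losses, maximum-power-point tracking, control strategies):

```python
# Minimal hourly energy-balance sketch of a PV/battery hybrid system
# with a backup generator, in the spirit of simulation tools like
# PHOTO. Illustrative only; all parameters are hypothetical.

def simulate(pv_kwh, load_kwh, capacity_kwh, soc_kwh=0.0, eff=0.9):
    """Step through hourly PV output and load; return the final battery
    state of charge and the total load covered by the backup generator."""
    backup_kwh = 0.0
    for pv, load in zip(pv_kwh, load_kwh):
        net = pv - load
        if net >= 0:
            # Store the surplus, applying a charging efficiency loss.
            soc_kwh = min(capacity_kwh, soc_kwh + net * eff)
        else:
            need = -net
            take = min(soc_kwh, need)   # discharge the battery first
            soc_kwh -= take
            backup_kwh += need - take   # backup covers any shortfall
    return soc_kwh, backup_kwh
```

    With 2 kWh of PV in the first hour and none in the second, against a constant 1 kWh load, the battery absorbs the surplus (minus losses) and the backup generator makes up the small remaining deficit.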

  11. Hybrid propulsion technology program

    Science.gov (United States)

    1990-01-01

    Technology was identified which will enable application of hybrid propulsion to manned and unmanned space launch vehicles. Two design concepts are proposed. The first is a hybrid propulsion system using the classical method of regression (classical hybrid) resulting from the flow of oxidizer across a fuel grain surface. The second system uses a self-sustaining gas generator (gas generator hybrid) to produce a fuel-rich exhaust that is mixed with oxidizer in a separate combustor. Both systems offer cost and reliability improvements over the existing solid rocket booster and proposed liquid boosters. The designs were evaluated using life cycle cost and reliability. The program consisted of: (1) identification and evaluation of candidate oxidizers and fuels; (2) preliminary evaluation of booster design concepts; (3) preparation of a detailed point design including life cycle costs and reliability analyses; (4) identification of those hybrid-specific technologies needing improvement; and (5) preparation of a technology acquisition plan and large scale demonstration plan.

  12. Neural networks with multiple general neuron models: a hybrid computational intelligence approach using Genetic Programming.

    Science.gov (United States)

    Barton, Alan J; Valdés, Julio J; Orchard, Robert

    2009-01-01

    Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e network interconnection) is possible, and (iii) no bias nodes or weights are associated to the connections, neurons or layers. The general functions associated to a neuron are learned by searching a function space. They are not provided a priori, but are rather built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of a high-dimensional parameter space associated to the set of all network solutions.

  13. Hybridity in Embedded Computing Systems

    Institute of Scientific and Technical Information of China (English)

    虞慧群; 孙永强

    1996-01-01

    An embedded system is a system in which a computer is used as a component of a larger device. In this paper, we study hybridity in embedded systems and present an interval-based temporal logic to express and reason about hybrid properties of such systems.

  14. Computer Programs.

    Science.gov (United States)

    Anderson, Tiffoni

    This module provides information on development and use of a Material Safety Data Sheet (MSDS) software program that seeks to link literacy skills education, safety training, and human-centered design. Section 1 discusses the development of the software program that helps workers understand the MSDSs that accompany the chemicals with which they…

  15. Hybrid Vehicle Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1984-06-01

    This report summarizes the activities on the Hybrid Vehicle Program. The program objectives and the vehicle specifications are reviewed. The Hybrid Vehicle has been designed so that maximum use can be made of existing production components with a minimum compromise to program goals. The program status as of the February 9-10 Hardware Test Review is presented, and discussions of the vehicle subsystem, the hybrid propulsion subsystem, the battery subsystem, and the test mule programs are included. Other program aspects included are quality assurance and support equipment. 16 references, 132 figures, 47 tables.

  16. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routines for reading blocked tapes, dimension statements in subroutines, general-purpose input routines, and efficient use of memory are also elaborated. This publication is inten

  17. ICASE Computer Science Program

    Science.gov (United States)

    1985-01-01

    The Institute for Computer Applications in Science and Engineering computer science program is discussed in outline form. Information is given on such topics as problem decomposition, algorithm development, programming languages, and parallel architectures.

  18. Program Facilitates Distributed Computing

    Science.gov (United States)

    Hui, Joseph

    1993-01-01

    KNET computer program facilitates distribution of computing between UNIX-compatible local host computer and remote host computer, which may or may not be UNIX-compatible. Capable of automatic remote log-in. User communicates interactively with remote host computer. Data output from remote host computer directed to local screen, to local file, and/or to local process. Conversely, data input from keyboard, local file, or local process directed to remote host computer. Written in ANSI standard C language.

  19. Hybrid 3-D rocket trajectory program. Part 1: Formulation and analysis. Part 2: Computer programming and user's instruction. [computerized simulation using three dimensional motion analysis

    Science.gov (United States)

    Huang, L. C. P.; Cook, R. A.

    1973-01-01

    Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.

  20. Programming in Biomolecular Computation

    DEFF Research Database (Denmark)

    Hartmann, Lars; Jones, Neil; Simonsen, Jakob Grue

    2010-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable......, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only...... in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient. A prototype model has been implemented (for now in silico on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level....

  1. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2010-01-01

    executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways and without arcane encodings of data and algorithm); it is also uniform: new “hardware” is not needed to solve new problems; and (last but not least) it is Turing complete......Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable......, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only...

  2. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  3. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We identify a number of common features in programming that seem conspicu...

  4. Programming the social computer.

    Science.gov (United States)

    Robertson, David; Giunchiglia, Fausto

    2013-03-28

    The aim of 'programming the global computer' was identified by Milner and others as one of the grand challenges of computing research. At the time this phrase was coined, it was natural to assume that this objective might be achieved primarily through extending programming and specification languages. The Internet, however, has brought with it a different style of computation that (although harnessing variants of traditional programming languages) operates in a style different to those with which we are familiar. The 'computer' on which we are running these computations is a social computer in the sense that many of the elementary functions of the computations it runs are performed by humans, and successful execution of a program often depends on properties of the human society over which the program operates. These sorts of programs are not programmed in a traditional way and may have to be understood in a way that is different from the traditional view of programming. This shift in perspective raises new challenges for the science of the Web and for computing in general.

  5. Hybrid Doctoral Program: Innovative Practices and Partnerships

    Science.gov (United States)

    Alvich, Dori; Manning, JoAnn; McCormick, Kathy; Campbell, Robert

    2012-01-01

    This paper reflects on how one mid-Atlantic University innovatively incorporated technology into the development of a hybrid doctoral program in educational leadership. The paper describes a hybrid doctoral degree program using a rigorous design; challenges of reworking a traditional syllabus of record to a hybrid doctoral program; the perceptions…

  6. Checkpointing for a hybrid computing node

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
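
    A minimal sketch of the checkpoint/resume/transfer pattern the abstract describes, with a Python object standing in for the accelerator task and a plain attribute for main-processor memory (all names hypothetical):

```python
# Sketch of the checkpoint/restart pattern: snapshot the task state
# locally, resume work, and ship the snapshot to the host in the
# background. Names are illustrative, not the patent's interfaces.
import copy
import threading

class Task:
    def __init__(self):
        self.step = 0
        self.host_copy = None      # stands in for main-processor memory

    def run(self, n_steps, checkpoint_every):
        for _ in range(n_steps):
            self.step += 1
            if self.step % checkpoint_every == 0:
                # Local snapshot of the accelerator state.
                local_ckpt = copy.deepcopy(self.__dict__)
                # In the described scheme, this transfer overlaps with
                # continued execution; joined immediately here only to
                # keep the sketch deterministic.
                t = threading.Thread(target=self._send, args=(local_ckpt,))
                t.start()
                t.join()

    def _send(self, ckpt):
        self.host_copy = {"step": ckpt["step"]}

    def restart(self):
        # Restore execution from the last state held by the host.
        self.step = self.host_copy["step"]
```

    Running ten steps with a checkpoint every four leaves the host holding step 8, so a restart resumes from there rather than from zero.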

  7. Logic via Computer Programming.

    Science.gov (United States)

    Wieschenberg, Agnes A.

    This paper posed the question "How do we teach logical thinking and sophisticated mathematics to unsophisticated college students?" One answer among many is through the writing of computer programs. The writing of computer algorithms is mathematical problem solving and logic in disguise, and it may attract students who would otherwise stop…

  8. Reachability computation for hybrid systems with Ariadne

    NARCIS (Netherlands)

    L. Benvenuti; D. Bresolin; A. Casagrande; P.J. Collins (Pieter); A. Ferrari; E. Mazzi; T. Villa; A. Sangiovanni-Vincentelli

    2008-01-01

    Ariadne is an in-progress open environment for designing algorithms that compute with hybrid automata. It relies on a rigorous computable-analysis theory to represent geometric objects, in order to achieve provable approximation bounds along the computations. In this paper we discuss the

  9. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues, first-generation practical quantum computers will become available to ordinary users in the cloud, much like IBM's Quantum Experience today. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way for scalable blind quantum computation.

  10. Computer network programming

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, J.Y. [California Polytechnic State Univ., San Luis Obispo, CA (United States)

    1996-12-31

    The programs running on a computer network can be divided into two parts, the Network Operating System and the user applications. Any high level language translator, such as C, JAVA, BASIC, FORTRAN, or COBOL, runs under NOS as a programming tool to produce network application programs or software. Each application program while running on the network provides the human user with network application services, such as remote data base search, retrieval, etc. The Network Operating System should provide a simple and elegant system interface to all the network application programs. This programming interface may request the Transport layer services on behalf of a network application program. The primary goals are to achieve programming convenience, and to avoid complexity. In a 5-layer network model, the system interface is comprised of a group of system calls which are collectively known as the session layer with its own Session Protocol Data Units. This is a position paper discussing the basic system primitives which reside between a network application program and the Transport layer, and a programming example of using such primitives.

  11. Hybrid Systems: Computation and Control.

    Science.gov (United States)

    2007-11-02

    elbow) and a pinned first joint (shoulder) (see Figure 2); it is termed an underactuated system since it is a mechanical system with fewer...Montreal, PQ, Canada, 1998. [10] M. W. Spong. Partial feedback linearization of underactuated mechanical systems. In Proceedings, IROS'94, pages 314-321...control mechanism and search for optimal combinations of control variables. Besides the nonlinear and hybrid nature of powertrain systems, hardware

  12. Near-term hybrid vehicle program, phase 1

    Science.gov (United States)

    1979-01-01

    The preliminary design of a hybrid vehicle which fully meets or exceeds the requirements set forth in the Near Term Hybrid Vehicle Program is documented. Topics addressed include the general layout and styling, the power train specifications with discussion of each major component, vehicle weight and weight breakdown, vehicle performance, measures of energy consumption, and initial cost and ownership cost. Alternative design options considered and their relationship to the design adopted, computer simulation used, and maintenance and reliability considerations are also discussed.

  13. Use of a hybrid computer in engineering-seismology research

    Science.gov (United States)

    Park, R.B.; Hays, W.W.

    1977-01-01

    A hybrid computer is an important tool in the seismological research conducted by the U.S. Geological Survey in support of the Energy Research and Development Administration nuclear explosion testing program at the Nevada Test Site and the U.S. Geological Survey Earthquake Hazard Reduction Program. The hybrid computer system, which employs both digital and analog computational techniques, facilitates efficient seismic data processing. Standard data processing operations include: (1) preview of dubbed magnetic tapes of data; (2) correction of data for instrument response; (3) derivation of displacement and acceleration time histories from velocity recordings; (4) extraction of peak-amplitude data; (5) digitization of time histories; (6) rotation of instrumental axes; (7) derivation of response spectra; and (8) derivation of relative transfer functions between recording sites. Catalogs of time histories and response spectra of ground motion from nuclear explosions and earthquakes that have been processed by the hybrid computer are used in the Earthquake Hazard Research Program to evaluate the effects of source, propagation path, and site conditions on recorded ground motion; to assess seismic risk; to predict system response; and to solve system design problems.

  14. Optimal control computer programs

    Science.gov (United States)

    Kuo, F.

    1992-01-01

    The solution of the optimal control problem, even for low-order dynamical systems, can strain the analytical ability of most engineers. Understanding of this subject would therefore be greatly enhanced if a software package existed that could simulate simple generic problems. Surprisingly, despite a great abundance of commercially available control software, few packages, if any, address optimal control in its most generic form. The purpose of this paper is, therefore, to present a simple computer program that performs simulations of optimal control problems arising from the first necessary condition and Pontryagin's maximum principle.
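
    A minimal instance of the kind of problem such a program simulates: minimize the integral of u²/2 subject to ẋ = u, x(0) = 0, x(1) = 1. The first necessary condition gives u = -λ with a constant costate λ, so a single shooting parameter can be adjusted by bisection. This is an illustrative sketch under those assumptions, not the paper's program:

```python
# Tiny shooting method for the two-point boundary-value problem that
# Pontryagin's principle produces for: minimize ∫ u²/2 dt subject to
# x' = u, x(0) = 0, x(1) = 1. Stationarity of H = u²/2 + λu gives
# u = -λ, and λ' = -∂H/∂x = 0, so λ stays constant along the orbit.

def shoot(lam0, n=1000):
    """Integrate the state equation forward for a given λ(0)."""
    x, lam, dt = 0.0, lam0, 1.0 / n
    for _ in range(n):
        u = -lam            # control minimizing the Hamiltonian
        x += u * dt         # state equation x' = u (Euler step)
        # costate equation λ' = 0: λ is unchanged
    return x

def solve(target=1.0, tol=1e-10):
    """Bisect on λ(0); x(1) = -λ(0) is decreasing in λ(0)."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) > target:
            lo = mid        # overshot the terminal state: raise λ(0)
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The analytic optimum here is u ≡ 1, i.e. λ(0) = -1, which the bisection recovers.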

  15. Programmed cell death and hybrid incompatibility.

    Science.gov (United States)

    Frank, S A; Barr, C M

    2003-01-01

    We propose a new theory to explain developmental aberrations in plant hybrids. In our theory, hybrid incompatibilities arise from imbalances in the mechanisms that cause male sterility in hermaphroditic plants. Mitochondria often cause male sterility by killing the tapetal tissue that nurtures pollen mother cells. Recent evidence suggests that mitochondria destroy the tapetum by triggering standard pathways of programmed cell death. Some nuclear genotypes repress mitochondrial male sterility and restore pollen fertility. Normal regulation of tapetal development therefore arises from a delicate balance between the disruptive effects of mitochondria and the defensive countermeasures of the nuclear genes. In hybrids, incompatibilities between male-sterile mitochondria and nuclear restorers may frequently upset the regulatory control of programmed cell death, causing tapetal abnormalities and male sterility. We propose that hybrid misregulation of programmed cell death may also spill over into other tissues, explaining various developmental aberrations observed in hybrids.

  16. Adaptation and hybridization in computational intelligence

    CERN Document Server

    Jr, Iztok

    2015-01-01

    This carefully edited book takes a walk through recent advances in adaptation and hybridization in the Computational Intelligence (CI) domain. It consists of ten chapters divided into three parts. The first part provides background information and some theoretical foundations of the CI domain, the second part deals with adaptation in CI algorithms, and the third part focuses on hybridization in CI. This book can serve as an ideal reference for researchers and students of computer science, electrical and civil engineering, economy, and the natural sciences who are confronted with solving optimization, modeling and simulation problems. It covers recent advances in CI that encompass nature-inspired algorithms such as Artificial Neural Networks, Evolutionary Algorithms and Swarm Intelligence-based algorithms.

  17. Lyapunov exponents computation for hybrid neurons.

    Science.gov (United States)

    Bizzarri, Federico; Brambilla, Angelo; Gajani, Giancarlo Storti

    2013-10-01

    Lyapunov exponents are a basic and powerful tool to characterise the long-term behaviour of dynamical systems. The computation of Lyapunov exponents for continuous time dynamical systems is straightforward whenever they are ruled by vector fields that are sufficiently smooth to admit a variational model. Hybrid neurons do not belong to this wide class of systems since they are intrinsically non-smooth owing to the impact and sometimes switching model used to describe the integrate-and-fire (I&F) mechanism. In this paper we show how a variational model can be defined also for this class of neurons by resorting to saltation matrices. This extension allows the computation of Lyapunov exponent spectrum of hybrid neurons and of networks made up of them through a standard numerical approach even in the case of neurons firing synchronously.
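
    For smooth systems, the standard numerical approach mentioned above reduces, in one dimension, to averaging log |f'(x)| along an orbit. The sketch below does this for the logistic map at r = 4, whose largest Lyapunov exponent is known to be ln 2; the saltation-matrix machinery needed for non-smooth hybrid neurons is beyond this illustration:

```python
# Standard one-dimensional Lyapunov exponent estimate: iterate the map
# and average log |f'(x)| over the orbit. Shown for the logistic map
# f(x) = r x (1 - x) at r = 4 (exponent ln 2); not the saltation-matrix
# extension for hybrid neurons described in the paper.
import math

def lyapunov_logistic(x0=0.3, r=4.0, n=100000, transient=1000):
    x = x0
    for _ in range(transient):          # discard transient behaviour
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        d = abs(r - 2.0 * r * x)        # |f'(x)| for the logistic map
        if d > 0.0:                     # skip the measure-zero x = 1/2
            acc += math.log(d)
    return acc / n
```

    The estimate converges toward ln 2 ≈ 0.693 as the orbit length grows; for a hybrid system, each crossing of a switching or impact surface would additionally multiply the variational dynamics by a saltation matrix.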

  18. Hybrid Parallel Computation of Integration in GRACE

    CERN Document Server

    Yuasa, F; Kawabata, S; Perret-Gallix, D; Itakura, K; Hotta, Y; Okuda, M; Yuasa, Fukuko; Ishikawa, Tadashi; Kawabata, Setsuya; Perret-Gallix, Denis; Itakura, Kazuhiro; Hotta, Yukihiko; Okuda, Motoi

    2000-01-01

    With the integrated software package GRACE, it is possible to generate Feynman diagrams, calculate the total cross section and generate physics events automatically. We outline the hybrid method of parallel computation of the multi-dimensional integration in GRACE. We used MPI (Message Passing Interface) as the parallel library and, to improve performance, we embedded a dynamic load-balancing mechanism. The reduction rate of the practical execution time was studied.
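The dynamic load balancing mentioned above is a generic master-worker pattern. As a hedged sketch (using Python threads and a shared queue in place of MPI, and a made-up integrand), idle workers pull the next sub-region of a Monte Carlo integration from a queue, so fast workers automatically process more regions:

```python
import queue
import random
import threading

def integrate_dynamic(f, regions, n_per_region, n_workers=4):
    """Master-worker Monte Carlo integration with dynamic load balancing:
    each worker repeatedly pulls the next sub-region from a shared queue."""
    tasks = queue.Queue()
    for region in regions:
        tasks.put(region)
    results, lock = [], threading.Lock()

    def worker(wid):
        rng = random.Random(wid)             # per-worker RNG
        while True:
            try:
                lo, hi = tasks.get_nowait()  # dynamic: pull work when idle
            except queue.Empty:
                return
            vol = 1.0
            for a, b in zip(lo, hi):
                vol *= b - a
            acc = 0.0
            for _ in range(n_per_region):
                x = [rng.uniform(a, b) for a, b in zip(lo, hi)]
                acc += f(x)
            with lock:
                results.append(vol * acc / n_per_region)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

# Example: integrate x + y over the unit square, split into four strips.
estimate = integrate_dynamic(lambda x: x[0] + x[1],
                             [([i / 4, 0.0], [(i + 1) / 4, 1.0]) for i in range(4)],
                             n_per_region=20_000)
```

In an MPI setting the queue would be replaced by a master rank handing out region indices to worker ranks on request.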

  19. Hybrid Nanoelectronics: Future of Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Wei Wang; Ming Liu; Andrew Hsu

    2006-01-01

    Nanotechnology may well prove to be the 21st century's new wave of scientific knowledge that transforms people's lives. Nanotechnology research activities are booming around the globe. This article reviews the recent progress made in nanoelectronics research in the US and China, and introduces several novel hybrid solutions specifically useful for future computer technology. These exciting new directions will lead to many future inventions, and will have a huge impact on research communities and industries.

  20. Computer Assisted Parallel Program Generation

    CERN Document Server

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities and product development. Parallel program writing itself is not always a simple task, depending on the problem being solved. Large-scale scientific computing, huge data analyses and precise visualizations, for example, would require parallel computations, and parallel computing needs parallelization techniques. In this chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovations in science and engineering, and contributes to enriching our society and our life toward a programming-free environment in computing science. Problem solving environment (PSE) research activities started to enhance the programming power in the 1970s. P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  1. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  2. Multi level programming Paradigm for Extreme Computing

    Science.gov (United States)

    Petiton, S.; Sato, M.; Emad, N.; Calvin, C.; Tsuji, M.; Dandouna, M.

    2014-06-01

    In order to propose a framework and programming paradigms for post-petascale computing, on the road to exascale computing and beyond, we introduced new languages, associated with a hierarchical multi-level programming paradigm, allowing scientific end-users and developers to program the highly hierarchical architectures designed for extreme computing. In this paper, we explain the interest of such a hierarchical multi-level programming paradigm for extreme computing and its good fit to several large computational science applications, such as the linear algebra solvers used for reactor core physics. We describe the YML language and framework, which allow describing graphs of parallel components that may be developed using a PGAS-like language such as XMP, then scheduled and computed on supercomputers. We then report experiments on supercomputers (such as the "K" and "Hopper" machines) with the hybrid method MERAM (Multiple Explicitly Restarted Arnoldi Method) as a case study for iterative methods manipulating sparse matrices, and the block Gauss-Jordan method as a case study for direct methods manipulating dense matrices. We conclude by proposing evolutions of this programming paradigm.

  3. Computer programs as accounting object

    Directory of Open Access Journals (Sweden)

    I.V. Perviy

    2015-03-01

    Existing approaches to the regulation of accounting software as one of the types of intangible assets have been considered. The features and current state of the legal protection of computer programs have been analyzed. The reasons for the need to use patent law as a means of legal protection of individual elements of computer programs have been discovered. The influence of the legal aspects of the use of computer programs in national legislation on their accounting reflection has been analyzed. The possible options for the transfer of rights from computer program copyright owners have been analyzed, which should be considered during the creation of a software accounting system at the enterprise. The characteristics of computer software as an intangible asset under the current law have been identified and analyzed. The general economic characteristics of computer programs as one of the types of intangible assets have been grounded. The main distinguishing features of software compared to other types of intellectual property have been allocated.

  4. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  5. Deductive Computer Programming. Revision

    Science.gov (United States)

    1989-09-30

    Lecture Notes in Computer Science 354 ... "automata", in Temporal Logic in Specification, Lecture Notes in Computer Science 398, Springer-Verlag, 1989, pp. 124-164. [MP4] Z. Manna and A. Pnueli, ... Lecture Notes in Computer Science 372, Springer-Verlag, 1989, pp. 534-558. Contributions to books: [MP5] Z. Manna and A. Pnueli, "An exercise in the

  6. Digital filter synthesis computer program

    Science.gov (United States)

    Moyer, R. A.; Munoz, R. M.

    1968-01-01

    Digital filter synthesis computer program expresses any continuous function of a complex variable in approximate form as a computational algorithm or difference equation. Once the difference equation has been developed, digital filtering can be performed by the program on any input data list.
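The idea of expressing a continuous function of a complex variable as a difference equation can be illustrated with a textbook first-order low-pass filter H(s) = ωc/(s + ωc) discretised by the bilinear transform. This is a generic sketch, not the program described in the record:

```python
import math

def lowpass_coeffs(fc, fs):
    """Bilinear-transform discretisation of H(s) = wc / (s + wc)."""
    wc = 2.0 * math.pi * fc
    k = 2.0 * fs              # the transform substitutes s = k(1-z^-1)/(1+z^-1)
    b0 = wc / (k + wc)
    a1 = (wc - k) / (k + wc)
    return b0, b0, a1         # numerator b0, b1 and denominator a1

def lowpass(samples, fc, fs):
    """Run the difference equation y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    b0, b1, a1 = lowpass_coeffs(fc, fs)
    y_prev = x_prev = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x_prev - a1 * y_prev
        out.append(y)
        x_prev, y_prev = x, y
    return out
```

Once the difference equation exists, filtering any input data list is a single pass over the samples, which is the workflow the record describes.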

  7. ANIBAL - a Hybrid Computer Language for EAI 680-PDP 8/I, FPP 12

    DEFF Research Database (Denmark)

    Højberg, Kristian Søe

    1974-01-01

    A hybrid programming language, ANIBAL, has been developed for use in an open-shop computing centre with an EAI-680 analog computer, a PDP-8/I digital computer, and an FPP-12 floating point processor. An 8K core memory and an 812K disk memory are included. The new language consists of standard FORTRAN IV...

  8. Program Verification of Numerical Computation

    OpenAIRE

    Pantelis, Garry

    2014-01-01

    These notes outline a formal method for program verification of numerical computation. It forms the basis of the software package VPC in its initial phase of development. Much of the style of presentation is in the form of notes that outline the definitions and rules upon which VPC is based. The initial motivation of this project was to address some practical issues of computation, especially of numerically intensive programs that are commonplace in computer models. The project evolved into a...

  9. NASA's computer science research program

    Science.gov (United States)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  10. ParaHaplo 3.0: A program package for imputation and a haplotype-based whole-genome association study using hybrid parallel computing

    Directory of Open Access Journals (Sweden)

    Kamatani Naoyuki

    2011-05-01

    Background: Missing-genotype imputation and haplotype reconstruction are valuable in genome-wide association studies (GWASs). By modeling the patterns of linkage disequilibrium in a reference panel, genotypes not directly measured in the study samples can be imputed and used for GWASs. Since millions of single nucleotide polymorphisms need to be imputed in a GWAS, faster methods for genotype imputation and haplotype reconstruction are required. Results: We developed a program package for parallel computation of genotype imputation and haplotype reconstruction. Our program package, ParaHaplo 3.0, is intended for use on workstation clusters using the Intel Message Passing Interface. We compared its performance on the Japanese in Tokyo, Japan and Han Chinese in Beijing, China samples from the HapMap dataset. The parallel version of ParaHaplo 3.0 can conduct genotype imputation 20 times faster than the non-parallel version of ParaHaplo. Conclusions: ParaHaplo 3.0 is an invaluable tool for conducting haplotype-based GWASs. The need for faster genotype imputation and haplotype reconstruction using parallel computing will become increasingly important as the data sizes of such projects continue to increase. ParaHaplo executable binaries and program sources are available at http://en.sourceforge.jp/projects/parallelgwas/releases/.

  11. A HYBRID METHOD FOR LINEAR PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    XIU Naihua; WU Fang

    1999-01-01

    In this paper, a hybrid method for linear programming is established. Its search direction is defined as a combination of two directions: the one used in the simplex method and the one used in the affine-scaling interior point method. The method is proven to have some promising convergence properties. The relation among the new method, the simplex method and the affine-scaling interior point method is discussed.
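For illustration, the affine-scaling component of such a combined direction can be sketched as follows. This is a standard primal affine-scaling iteration, not the authors' hybrid direction, and the damping factor gamma and the small test LP are arbitrary choices:

```python
import numpy as np

def affine_scaling(A, b, c, x, gamma=0.5, iters=200):
    """Primal affine-scaling iterations for min c.x s.t. Ax = b, x > 0.
    x must be strictly feasible; feasibility is preserved because A d = 0."""
    for _ in range(iters):
        X2 = np.diag(x * x)                  # scaling by the current iterate
        y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)
        z = c - A.T @ y                      # reduced costs
        d = -X2 @ z                          # affine-scaling direction
        if np.all(d >= -1e-12):              # (near-)optimal: stop
            break
        step = gamma / np.max(-(d / x))      # ratio test: stay strictly inside x > 0
        x = x + step * d
    return x

# Example LP in standard form: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 1, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
x_opt = affine_scaling(A, b, c, np.array([1 / 3, 1 / 3, 1 / 3]))
```

The paper's method would blend this direction with a simplex edge direction rather than follow it alone.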

  12. Designing computer programs

    CERN Document Server

    Haigh, Jim

    1994-01-01

    This is a book for students at every level who are learning to program for the first time - and for the considerable number who learned how to program but were never taught to structure their programs. The author presents a simple set of guidelines that show the programmer how to design in a manageable structure from the outset. The method is suitable for most languages, and is based on the widely used 'JSP' method, to which the student may easily progress if it is needed at a later stage.Most language specific texts contain very little if any information on design, whilst books on des

  13. Computer Program NIKE

    DEFF Research Database (Denmark)

    Spanget-Larsen, Jens

    2014-01-01

    FORTRAN source code for program NIKE (PC version of QCPE 343). Sample input and output for two model chemical reactions are appended: I. Three consecutive monomolecular reactions, II. A simple chain mechanism...

  14. What do reversible programs compute?

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert

    2011-01-01

    Reversible computing is the study of computation models that exhibit both forward and backward determinism. Understanding the fundamental properties of such models is not only relevant for reversible programming, but has also been found important in other fields, e.g., bidirectional model...... transformation, program transformations such as inversion, and general static prediction of program properties. Historically, work on reversible computing has focussed on reversible simulations of irreversible computations. Here, we take the viewpoint that the property of reversibility itself should...... are not strictly classically universal, but that they support another notion of universality; we call this RTM-universality. Thus, even though the RTMs are sub-universal in the classical sense, they are powerful enough as to include a self-interpreter. Lifting this to other computation models, we propose r...

  15. STEM image simulation with hybrid CPU/GPU programming.

    Science.gov (United States)

    Yao, Y; Ge, B H; Shen, X; Wang, Y G; Yu, R C

    2016-07-01

    STEM image simulation is achieved via hybrid CPU/GPU programming under a parallel algorithm architecture to speed up calculation on a personal computer (PC). To fully utilize the calculation power of a PC, the simulation is performed using the GPU core and multiple CPU cores at the same time, which significantly improves efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    Directory of Open Access Journals (Sweden)

    Hyunjoo Kim

    2011-01-01

    In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its capabilities, we employ a workflow used to characterize an oil reservoir, executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.

  17. The psychology of computer programming

    CERN Document Server

    Weinberg, Gerald Marvin

    1998-01-01

    This landmark 1971 classic is reprinted with a new preface, chapter-by-chapter commentary, and straight-from-the-heart observations on topics that affect the professional life of programmers. Long regarded as one of the first books to pioneer a people-oriented approach to computing, The Psychology of Computer Programming endures as a penetrating analysis of the intelligence, skill, teamwork, and problem-solving power of the computer programmer. Finding the chapters strikingly relevant to today's issues in programming, Gerald M. Weinberg adds new insights and highlights the similarities and differences between now and then. Using a conversational style that invites the reader to join him, Weinberg reunites with some of his most insightful writings on the human side of software engineering. Topics include egoless programming, intelligence, psychological measurement, personality factors, motivation, training, social problems on large projects, problem-solving ability, programming language design, team formati...

  18. Linear programming computation

    CERN Document Server

    PAN, Ping-Qi

    2014-01-01

    With emphasis on computation, this book is a real breakthrough in the field of LP. In addition to conventional topics, such as the simplex method, duality, and interior-point methods, all deduced in a fresh and clear manner, it introduces the state of the art by highlighting brand-new and advanced results, including efficient pivot rules, Phase-I approaches, reduced simplex methods, deficient-basis methods, face methods, and pivotal interior-point methods. In particular, it covers the determination of the optimal solution set, feasible-point simplex method, decomposition principle for solving large-scale problems, controlled-branch method based on generalized reduced simplex framework for solving integer LP problems.

  19. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, and for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since then, significant advancements have taken place in the requirements for safety assessment as well as in computer hardware. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision but offers a potential speed-up of several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/, whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++.
A direct estimation of uncertainties for complex groundwater flow models with the

  20. Functional Programming in Computer Science

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Loren James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Davis, Marion Kei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-19

    We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.

  1. Hybrid cloud and cluster computing paradigms for life science applications.

    Science.gov (United States)

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
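The iterative structure that Twister targets can be illustrated with K-means, where each iteration is one map stage (assignment) plus one reduce stage (centroid averaging). This sequential Python sketch only mimics the dataflow; the deterministic initialisation is a simplification chosen for reproducibility:

```python
def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_mapreduce(points, k, iters=10):
    """K-means written as an iterative MapReduce job: per iteration, a map
    stage assigns each point to its nearest centroid and a reduce stage
    averages the points that share a centroid; the centroids are the small
    mutable state carried between iterations."""
    # deterministic, evenly spread initial centroids (a simplification)
    centroids = [tuple(points[i * len(points) // k]) for i in range(k)]
    for _ in range(iters):
        # map: emit (nearest_centroid_index, point) pairs
        pairs = [(min(range(k), key=lambda i: sq_dist(p, centroids[i])), p)
                 for p in points]
        # shuffle + reduce: group by key and average each group
        groups = {}
        for i, p in pairs:
            groups.setdefault(i, []).append(p)
        for i, pts in groups.items():
            centroids[i] = tuple(sum(dim) / len(pts) for dim in zip(*pts))
    return centroids
```

A plain MapReduce runtime would reload the input and restart workers every iteration; an iterative runtime like Twister keeps the static point set cached and only circulates the centroids.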

  2. A Massive Data Parallel Computational Framework for Petascale/Exascale Hybrid Computer Systems

    CERN Document Server

    Blazewicz, Marek; Diener, Peter; Koppelman, David M; Kurowski, Krzysztof; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2012-01-01

    Heterogeneous systems are becoming more common on High Performance Computing (HPC) systems. Even using tools like CUDA and OpenCL, it is a non-trivial task to obtain optimal performance on the GPU. Approaches to simplifying this task include Merge (a library-based framework for heterogeneous multi-core systems), Zippy (a framework for parallel execution of codes on multiple GPUs), BSGP (a new programming language for general-purpose computation on the GPU) and CUDA-lite (an enhancement to CUDA that transforms code based on annotations). In addition, efforts are underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs and for automatic translation of OpenMP-parallelized codes to CUDA. In this paper we present an alternative approach: a new computational framework for the development of massively data-parallel scientific applications suitable for use on such petascale/exascale hybrid systems, built upon the highly scalable Cactus framework. As the first...

  3. Hybrid Differential Dynamic Programming with Stochastic Search

    Science.gov (United States)

    Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob

    2016-01-01

    Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
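The MBH outer loop described above is simple to sketch. Below, plain gradient descent stands in for the gradient-based HDDP inner solver, and a 1-D Rastrigin function is an assumed multimodal test objective; the acceptance rule keeps a hop only if it improves the incumbent, which is what makes the scheme monotonic:

```python
import math
import random

def rastrigin1d(x):
    """Assumed multimodal test objective; global minimum 0 at x = 0."""
    return x[0] ** 2 + 10.0 * (1.0 - math.cos(2.0 * math.pi * x[0]))

def rastrigin1d_grad(x):
    return [2.0 * x[0] + 20.0 * math.pi * math.sin(2.0 * math.pi * x[0])]

def local_descent(f, grad, x, lr=0.002, steps=1000):
    """Plain gradient descent standing in for the gradient-based inner solver;
    it converges to the local minimum of whatever basin it starts in."""
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

def monotonic_basin_hopping(f, grad, x0, hops=100, scale=1.0, seed=0):
    """MBH outer loop: perturb the incumbent, re-converge with the local
    solver, and accept the hop only if the objective improves (monotonic)."""
    rng = random.Random(seed)
    best = local_descent(f, grad, list(x0))
    best_val = f(best)
    for _ in range(hops):
        trial = local_descent(f, grad,
                              [xi + rng.gauss(0.0, scale) for xi in best])
        val = f(trial)
        if val < best_val:              # monotonic acceptance rule
            best, best_val = trial, val
    return best, best_val
```

Starting the local solver alone from x = 3.7 gets stuck in a nearby basin; wrapping it in the MBH loop lets the search hop toward lower basins, mirroring how MBH widens HDDP's search of the solution space.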

  4. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  5. Digital Potentiometer for Hybrid Computer EAI 680-PDP-8/I

    DEFF Research Database (Denmark)

    Højberg, Kristian Søe; Olsen, Jens V.

    1974-01-01

    In this article a description is given of a 12 bit digital potentiometer for hybrid computer application. The system is composed of standard building blocks. Emphasis is laid on the development problems met and the problem solutions developed.

  6. Hybrid Governance in an Adult Program: A Nuanced Relationship

    Science.gov (United States)

    Cockley, Suzanne

    2012-01-01

    Eastern Mennonite University's adult program uses a hybrid governance structure. Functions separated from the traditional program include marketing, admissions, and student advising. Functions that remain connected to the traditional program include the registrar, financial aid, and student business accounts.

  7. Hydrogen hybrid vehicle engine development: Experimental program

    Energy Technology Data Exchange (ETDEWEB)

    Van Blarigan, P. [Sandia National Lab., Livermore, CA (United States)

    1995-09-01

    A hydrogen fueled engine is being developed specifically for the auxiliary power unit (APU) in a series type hybrid vehicle. Hydrogen is different from other internal combustion (IC) engine fuels, and hybrid vehicle IC engine requirements are different from those of other IC vehicle engines. Together these differences will allow a new engine design based on first principles that will maximize thermal efficiency while minimizing principal emissions. The experimental program is proceeding in four steps: (1) Demonstration of the emissions and the indicated thermal efficiency capability of a standard CLR research engine modified for higher compression ratios and hydrogen fueled operation. (2) Design and test a new combustion chamber geometry for an existing single cylinder research engine, in an attempt to improve on the baseline indicated thermal efficiency of the CLR engine. (3) Design and build, in conjunction with an industrial collaborator, a new full scale research engine designed to maximize brake thermal efficiency. Include a full complement of combustion diagnostics. (4) Incorporate all of the knowledge thus obtained in the design and fabrication, by an industrial collaborator, of the hydrogen fueled engine for the hybrid vehicle power train illustrator. Results of the CLR baseline engine testing are presented, as well as preliminary data from the new combustion chamber engine. The CLR data confirm the low NOx produced by lean operation. The preliminary indicated thermal efficiency data from the new combustion chamber design engine show an improvement relative to the CLR engine. Comparison with previous high compression engine results shows reasonable agreement.

  8. A Hybrid Dynamic Programming Method for Concave Resource Allocation Problems

    Institute of Scientific and Technical Information of China (English)

    姜计荣; 孙小玲

    2005-01-01

    The concave resource allocation problem is an integer programming problem of minimizing a nonincreasing concave function subject to a convex nondecreasing constraint and bounded integer variables. This class of problems is encountered in optimization models involving economies of scale. In this paper, a new hybrid dynamic programming method was proposed for solving concave resource allocation problems. A convex underestimating function was used to approximate the objective function, and the resulting convex subproblem was solved with a dynamic programming technique after transforming it into a 0-1 linear knapsack problem. To ensure convergence, monotonicity and domain-cut techniques were employed to remove certain integer boxes and partition the revised domain into a union of integer boxes. Computational results were given to show the efficiency of the algorithm.
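The dynamic programming step over a separable objective can be illustrated on the simplest variant: exact integer allocation of a fixed budget, without the underestimation and domain-cut machinery the paper adds. The cost functions below are arbitrary examples:

```python
def allocate(cost_fns, total):
    """Exact DP for separable resource allocation: minimise sum_j f_j(x_j)
    subject to sum_j x_j == total with integer x_j >= 0. best[r] is the
    cheapest way to spend exactly r units on the activities seen so far."""
    INF = float("inf")
    best = [0.0] + [INF] * total
    take = []                          # take[j][r]: units given to activity j
    for f in cost_fns:
        nxt = [INF] * (total + 1)
        arg = [0] * (total + 1)
        for r in range(total + 1):
            for x in range(r + 1):
                c = best[r - x] + f(x)
                if c < nxt[r]:
                    nxt[r], arg[r] = c, x
        best = nxt
        take.append(arg)
    alloc, r = [], total               # trace the optimal choices back
    for arg in reversed(take):
        alloc.append(arg[r])
        r -= arg[r]
    return best[total], alloc[::-1]
```

In the paper's setting, the 0-1 knapsack reformulation plays the role of this table, and the outer loop repeatedly tightens the convex underestimate over the remaining integer boxes.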

  9. Hybrid system for computing reachable workspaces for redundant manipulators

    Science.gov (United States)

    Alameldin, Tarek K.; Sobh, Tarek M.

    1991-03-01

    An efficient computation of 3D workspaces for redundant manipulators is based on a "hybrid" algorithm combining direct kinematics and screw theory. Direct kinematics enjoys low computational cost but needs edge-detection algorithms when workspace boundaries are required. Screw theory has exponential computational cost per workspace point but does not need edge detection. Screw theory allows computing workspace points in prespecified directions, while direct kinematics does not. Applications of the algorithm are discussed.
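The direct-kinematics half of such a hybrid scheme can be sketched by pushing random joint configurations of an assumed planar redundant arm through forward kinematics; boundary (edge) detection or the screw-theory queries would be layered on top of this point cloud:

```python
import math
import random

def fk_planar(link_lengths, joint_angles):
    """Direct kinematics of a planar serial arm: end-effector (x, y)."""
    x = y = theta = 0.0
    for L, q in zip(link_lengths, joint_angles):
        theta += q                     # angles accumulate along the chain
        x += L * math.cos(theta)
        y += L * math.sin(theta)
    return x, y

def sample_workspace(link_lengths, n=5000, seed=0):
    """Approximate the reachable workspace by sampling random joint
    configurations through direct kinematics -- the low-cost half of the
    hybrid scheme, which cannot by itself pick points in a prespecified
    direction."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        q = [rng.uniform(-math.pi, math.pi) for _ in link_lengths]
        pts.append(fk_planar(link_lengths, q))
    return pts
```

For a three-link arm with unit links the samples all fall inside the radius-3 disc, and the sparsely sampled outer rim is exactly where the edge-detection (or screw-theory) step earns its cost.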

  10. Guide to ASIAC Computer Programs

    Science.gov (United States)

    1994-07-01

    ...structural model. Such programs are also often used to plot both deformed and undeformed structure or graphically display stresses, temperatures, and... element models of aerospace structures and in research in structural analysis and optimization. Last updated in 1978. ANALYZE-PC: This is a micro-computer... nonlinearities may be due to material nonlinearity, in which case elastic, hyperelastic, and hypoelastic material behavior may be considered, or the nonlinear...

  11. AV Programs for Computer Know-How.

    Science.gov (United States)

    Mandell, Phyllis Levy

    1985-01-01

    Lists 44 audiovisual programs (most released between 1983 and 1984) grouped in seven categories: computers in society, introduction to computers, computer operations, languages and programming, computer graphics, robotics, computer careers. Excerpts from "School Library Journal" reviews, price, and intended grade level are included. Names…

  12. Phase I of the Near Term Hybrid Passenger Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    The results of Phase I of the Near-Term Hybrid Vehicle Program are summarized. This phase of the program was a study leading to the preliminary design of a 5-passenger hybrid vehicle utilizing two energy sources (electricity and gasoline/diesel fuel) to minimize petroleum usage on a fleet basis. This report presents the following: overall summary of the Phase I activity; summary of the individual tasks; summary of the hybrid vehicle design; summary of the alternative design options; summary of the computer simulations; summary of the economic analysis; summary of the maintenance and reliability considerations; summary of the design for crash safety; and bibliography.

  13. Effective hybrid evolutionary computational algorithms for global optimization and applied to construct prion AGAAAAGA fibril models

    CERN Document Server

    Zhang, Jiapu

    2010-01-01

    Evolutionary algorithms are parallel computing algorithms and the simulated annealing algorithm is a sequential computing algorithm. This paper inserts simulated annealing into evolutionary computations and successfully develops a hybrid Self-Adaptive Evolutionary Strategy ($\mu+\lambda$) method and a hybrid Self-Adaptive Classical Evolutionary Programming method. Numerical results on more than 40 benchmark test problems of global optimization show that the hybrid methods presented in this paper are very effective. Lennard-Jones potential energy minimization is another benchmark for testing new global optimization algorithms. It is studied through the amyloid fibril constructions in this paper. To date, there is little molecular structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins. This region belongs to the N-terminal unstructured region (1-123) of prion proteins, the structure of which has proved hard to determine using NMR spectroscopy or X-ray crystallography ...
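The hybridization idea — a (μ+λ) selection loop with a simulated-annealing acceptance step folded in — can be sketched as follows. This is a schematic reconstruction, not the paper's self-adaptive method: the cooling schedule, mutation width, and population sizes are illustrative assumptions, and the test function is the standard sphere benchmark.

```python
import math
import random

def hybrid_sa_es(f, dim, mu=5, lam=20, iters=300, t0=1.0, seed=1):
    """Sketch of a (mu+lambda) evolution strategy with a simulated-annealing
    acceptance step: a worse candidate may replace the worst survivor with
    probability exp(-delta / T), where T cools every generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for gen in range(iters):
        temp = t0 / (1 + gen)  # simple cooling schedule (assumption)
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            offspring.append([x + rng.gauss(0, 0.3) for x in parent])
        merged = sorted(pop + offspring, key=f)
        survivors = merged[:mu]
        worse = merged[mu:]
        # SA step: occasionally accept a worse candidate to escape local minima.
        if worse:
            cand = rng.choice(worse)
            delta = f(cand) - f(survivors[-1])
            if rng.random() < math.exp(-delta / max(temp, 1e-12)):
                survivors[-1] = cand
        pop = survivors
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)  # standard benchmark function
```

The best individual is never displaced by the annealing swap (only the worst survivor is), so the elitist convergence of the ES is retained while the SA step adds diversity.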

  14. Hybridized Teacher Education Programs in NYC: A Missed Opportunity?

    Science.gov (United States)

    Mungal, Angus Shiva

    2015-01-01

    This qualitative study describes the development of hybrid teacher preparation programs that emerged as a result of a "forced" partnership between university-based and alternative teacher preparation programs in New York City. This hybrid experiment was a short-lived, yet innovative by-product of a somewhat pragmatic arrangement between…

  15. Hybrid Spanish Programs: A Challenging and Successful Endeavor

    Science.gov (United States)

    Hermosilla, Luis

    2014-01-01

    Several types of hybrid Spanish programs have been developed in US colleges and universities for more than ten years, but the most common structure consists of a course in which the instruction combines face-to-face time with an instructor and the use of an online platform. Studies have demonstrated that a well-developed hybrid Spanish program can…

  16. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. Also we...

  17. Computer organization and assembly language programming

    CERN Document Server

    Peterson, James L

    1978-01-01

    Computer Organization and Assembly Language Programming deals with lower level computer programming-machine or assembly language, and how these are used in the typical computer system. The book explains the operations of the computer at the machine language level. The text reviews basic computer operations, organization, and deals primarily with the MIX computer system. The book describes assembly language programming techniques, such as defining appropriate data structures, determining the information for input or output, and the flow of control within the program. The text explains basic I/O

  18. Lighting Computer Programs in Lighting Technology

    OpenAIRE

    Ekren, Nazmi; Dursun, Bahtiyar; Ercan AYKUT

    2008-01-01

    It is well known that the computer in lighting technology is a vital component for lighting designers. Lighting computer programs are preferred in preparing architectural projects in lighting techniques, especially in lighting calculations. Lighting computer programs, which arise with the aim of helping lighting designers, gain more interest day by day. The most important property of lighting computer programs is the ability to enable the simulation of lighting projects without requiring any ...

  19. Cost Optimization Using Hybrid Evolutionary Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    B. Kavitha

    2015-07-01

    Full Text Available The main aim of this research is to design a hybrid evolutionary algorithm for minimizing multiple problems of dynamic resource allocation in cloud computing. Resource allocation is one of the big problems in distributed systems when the client wants to decrease the cost of allocating resources to a task. In order to assign a resource to a task, the client must consider both the monetary cost and the computational cost, and allocating resources while weighing these two costs is difficult. To solve this problem, in this study we split the client's main task into many subtasks and allocate resources for each subtask instead of selecting a single resource for the main task. The allocation of resources for each subtask is performed by our proposed hybrid optimization algorithm, which hybridizes Binary Particle Swarm Optimization (BPSO) and the Binary Cuckoo Search algorithm (BCSO) while considering monetary cost and computational cost, helping to minimize the cost to the client. Finally, experimentation is carried out and our proposed hybrid algorithm is compared with the BPSO and BCSO algorithms. We also demonstrate the efficiency of our proposed hybrid optimization algorithm.
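The binary encodings used by both BPSO and BCSO share the same core mechanic: a real-valued search variable is squashed through a sigmoid and sampled as a bit. A minimal BPSO sketch of that mechanic (the cost function, coefficients, and velocity clamp are illustrative, not the authors' hybrid):

```python
import math
import random

def bpso(cost, n_bits, swarm=15, iters=100, vmax=4.0, seed=2):
    """Minimal binary PSO (Kennedy-Eberhart style): real-valued velocities
    are squashed through a sigmoid and used as the probability of a bit
    being 1."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(swarm)]
    vel = [[0.0] * n_bits for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(swarm):
            for j in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                v = vel[i][j] + 2.0 * r1 * (pbest[i][j] - pos[i][j]) \
                              + 2.0 * r2 * (gbest[j] - pos[i][j])
                vel[i][j] = max(-vmax, min(vmax, v))  # clamp to keep exploring
                pos[i][j] = 1 if rng.random() < sig(vel[i][j]) else 0
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pos[i]) < cost(gbest):
                    gbest = pos[i][:]
    return gbest
```

In the paper's setting the bit vector would encode subtask-to-resource assignments and `cost` would combine the monetary and computational costs.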

  20. Computer technology and computer programming research and strategies

    CERN Document Server

    Antonakos, James L

    2011-01-01

    Covering a broad range of new topics in computer technology and programming, this volume discusses encryption techniques, SQL generation, Web 2.0 technologies, and visual sensor networks. It also examines reconfigurable computing, video streaming, animation techniques, and more. Readers will learn about an educational tool and game to help students learn computer programming. The book also explores a new medical technology paradigm centered on wireless technology and cloud computing designed to overcome the problems of increasing health technology costs.

  1. Use of the computer program in a cloud computing

    Directory of Open Access Journals (Sweden)

    Radovanović Sanja

    2013-01-01

    Full Text Available Cloud computing represents a specific form of networking in which a computer program simulates the operation of one or more server computers. In terms of copyright, all technological processes that take place within cloud computing are covered by the notion of copying computer programs and the exclusive right of reproduction. However, this right suffers some limitations in order to allow normal use of a computer program by its users. Given that cloud computing is a virtualized network, the issue of normal use of the computer program requires putting all aspects of permitted copying into the context of a specific computing environment and the specific processes within the cloud. In this sense, the paper points out that the user of a computer program in cloud computing needs to obtain the consent of the right holder for any act undertaken using the program. In other words, copyright applies in cloud computing at full scale, and with it the freedom of contract (in the case of this particular restriction) as well.

  2. Load flow computations in hybrid transmission - distributed power systems

    NARCIS (Netherlands)

    Wobbes, E.D.; Lahaye, D.J.P.

    2013-01-01

    We interconnect transmission and distribution power systems and perform load flow computations in the hybrid network. In the largest example we managed to build, fifty copies of a distribution network consisting of fifteen nodes is connected to the UCTE study model, resulting in a system consisting

  3. Personal Computer Transport Analysis Program

    Science.gov (United States)

    DiStefano, Frank, III; Wobick, Craig; Chapman, Kirt; McCloud, Peter

    2012-01-01

    The Personal Computer Transport Analysis Program (PCTAP) is C++ software used for analysis of thermal fluid systems. The program predicts thermal fluid system and component transients. The output consists of temperatures, flow rates, pressures, delta pressures, tank quantities, and gas quantities in the air, along with air scrubbing component performance. PCTAP's solution process assumes that the tubes in the system are well insulated so that only the heat transfer between fluid and tube wall and between adjacent tubes is modeled. The system described in the model file is broken down into its individual components; i.e., tubes, cold plates, heat exchangers, etc. A solution vector is built from the components and a flow is then simulated with fluid being transferred from one component to the next. The solution vector of components in the model file is built at the initiation of the run. This solution vector is simply a list of components in the order of their inlet dependency on other components. The component parameters are updated in the order in which they appear in the list at every time step. Once the solution vectors have been determined, PCTAP cycles through the components in the solution vector, executing their outlet function for each time-step increment.
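Ordering components by their inlet dependencies, as the solution vector described above requires, is a topological sort. A sketch of one way to build such an ordering (component names and the dependency map are illustrative; this is not PCTAP's actual code):

```python
from collections import deque

def build_solution_vector(components, inlet_deps):
    """Order components so that each appears after everything feeding its
    inlet (a topological sort). inlet_deps maps a component to the list of
    components directly upstream of it."""
    indegree = {c: len(inlet_deps.get(c, [])) for c in components}
    downstream = {c: [] for c in components}
    for c, ups in inlet_deps.items():
        for u in ups:
            downstream[u].append(c)
    ready = deque(c for c in components if indegree[c] == 0)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for d in downstream[c]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(components):
        raise ValueError("cyclic flow network: no valid ordering exists")
    return order
```

Updating components in this order guarantees that every inlet condition is current before the component's outlet function runs at each time step.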

  4. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
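The grouping step described above — collecting threads whose calling-instruction addresses match — can be sketched as follows (a minimal stand-in, not the patented implementation; thread IDs and addresses are illustrative):

```python
from collections import defaultdict

def group_threads_by_callsite(thread_addresses):
    """Group threads that share the same sequence of calling-instruction
    addresses (i.e. identical call stacks). In a large parallel run, small
    outlier groups often point at defective threads."""
    groups = defaultdict(list)
    for thread_id, addrs in thread_addresses.items():
        groups[tuple(addrs)].append(thread_id)
    return dict(groups)
```

A group containing one thread while thousands of peers share another stack is a strong hint of a hang or divergence in that thread.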

  5. The Federal electric and hybrid vehicle program

    Science.gov (United States)

    Schwartz, H. J.

    1980-01-01

    The commercial development and use of electric and hybrid vehicles is discussed with respect to its application as a possible alternative transportation system. A market demonstration is described that seeks to place 10,000 electric and hybrid vehicles into public- and private-sector demonstrations.

  6. Using a Hybrid Approach to Facilitate Learning Introductory Programming

    Science.gov (United States)

    Cakiroglu, Unal

    2013-01-01

    In order to facilitate students' understanding in introductory programming courses, different types of teaching approaches were conducted. In this study, a hybrid approach including comment first coding (CFC), analogy and template approaches were used. The goal was to investigate the effect of such a hybrid approach on students' understanding in…

  7. The NASA computer science research program plan

    Science.gov (United States)

    1983-01-01

    A taxonomy of computer science is included, and the state of the art of each of the major computer science categories is summarized. A functional breakdown of NASA programs under Aeronautics R and D, space R and T, and institutional support is also included. These areas were assessed against the computer science categories. Concurrent processing, highly reliable computing, and information management are identified.

  8. Secured Authorized Data Using Hybrid Encryption in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dinesh Shinde

    2017-03-01

    Full Text Available Securing a public network such as a cloud is a difficult task; to reduce the cost of providing security, cryptographic techniques can be used to delegate the bulk of the decryption task to the cloud servers, lowering the client's computing cost. As a result, attribute-based encryption with delegation has emerged. Still, there are caveats and open questions in the previous relevant works: the cloud servers could tamper with or replace the delegated ciphertext and respond with a forged computing result with malicious intent, and they may also cheat eligible users by responding that they are ineligible, for the purpose of cost saving. Furthermore, the access policies used during encryption may not be flexible enough. Since a policy for general circuits achieves the strongest form of access control, a construction realizing circuit ciphertext-policy attribute-based hybrid encryption with verifiable delegation is considered in this work. In such a system, combined with verifiable computation and an encrypt-then-MAC mechanism, data confidentiality, fine-grained access control, and the correctness of the delegated computing results are guaranteed at the same time. Besides, the scheme achieves security against chosen-plaintext attacks under the k-multilinear Decisional Diffie-Hellman assumption, and an extensive simulation campaign confirms the feasibility and efficiency of the proposed solution. There are two complementary forms of attribute-based encryption: key-policy attribute-based encryption (KP-ABE) [8], [9], [10], and ciphertext-policy attribute-based encryption. In a KP-ABE system, the access policy is decided by the key distributor rather than the encipherer, which limits the practicability and usability of the system in practical applications; the access policy for general circuits could be
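The encrypt-then-MAC mechanism mentioned in this abstract can be illustrated in isolation: MAC the ciphertext so the receiver verifies integrity before decrypting anything. The sketch below is a toy stand-in, not the paper's ABE construction — in particular the hash-based keystream is for illustration only; a real system would use a vetted cipher such as AES-GCM or ChaCha20.

```python
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    """Toy keystream from counter-mode hashing (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, plaintext):
    """Encrypt first, then MAC the nonce plus ciphertext, so the receiver
    can check integrity before any decryption takes place."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def verify_then_decrypt(enc_key, mac_key, nonce, ct, tag):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("MAC check failed: ciphertext was tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Because the tag covers the ciphertext rather than the plaintext, a tampering cloud server is detected without the verifier ever decrypting the forged data.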

  9. Radar Landmass Simulation Computer Programming (Interim Report).

    Science.gov (United States)

    (*RADAR SCANNING, TERRAIN), (*NAVAL TRAINING, RADAR OPERATORS), (*FLIGHT SIMULATORS, TERRAIN AVOIDANCE), (*COMPUTER PROGRAMMING, INSTRUCTION MANUALS), PLAN POSITION INDICATORS, REAL TIME, DISPLAY SYSTEMS, RADAR IMAGES, SIMULATION

  10. Hybrid Information Flow Analysis for Programs with Arrays

    Directory of Open Access Journals (Sweden)

    Gergö Barany

    2016-07-01

    Full Text Available Information flow analysis checks whether certain pieces of (confidential) data may affect the results of computations in unwanted ways and thus leak information. Dynamic information flow analysis adds instrumentation code to the target software to track flows at run time and raise alarms if a flow policy is violated; hybrid analyses combine this with preliminary static analysis. Using a subset of C as the target language, we extend previous work on hybrid information flow analysis that handled pointers to scalars. Our extended formulation handles arrays, pointers to array elements, and pointer arithmetic. Information flow through arrays of pointers is tracked precisely while arrays of non-pointer types are summarized efficiently. A prototype of our approach is implemented using the Frama-C program analysis and transformation framework. Work on a full machine-checked proof of the correctness of our approach using Isabelle/HOL is well underway; we present the existing parts and sketch the rest of the correctness argument.
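The kind of instrumentation such an analysis inserts can be sketched with a value type that carries a set of taint labels; note in particular that reading `a[i]` must propagate the taint of the index `i`, not just of the element. All names here are illustrative, not the Frama-C implementation:

```python
class TaintedValue:
    """A value paired with a set of taint labels; operations propagate
    taint, mimicking the instrumentation that a dynamic information-flow
    analysis inserts into the target program."""
    def __init__(self, value, taints=frozenset()):
        self.value = value
        self.taints = frozenset(taints)

    def __add__(self, other):
        # The result depends on both operands, so taints are unioned.
        return TaintedValue(self.value + other.value, self.taints | other.taints)

def array_read(array, index):
    """Reading a[i] depends on both the element and the index: a tainted
    index leaks information about the index into the result."""
    elem = array[index.value]
    return TaintedValue(elem.value, elem.taints | index.taints)
```

Tracking each element's taint separately, as above, corresponds to the precise per-element treatment the paper uses for arrays of pointers; summarization would instead keep one label set for the whole array.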

  11. Integer programming theory, applications, and computations

    CERN Document Server

    Taha, Hamdy A

    1975-01-01

    Integer Programming: Theory, Applications, and Computations provides information pertinent to the theory, applications, and computations of integer programming. This book presents the computational advantages of the various techniques of integer programming. Organized into eight chapters, this book begins with an overview of the general categorization of integer applications and explains the three fundamental techniques of integer programming. This text then explores the concept of implicit enumeration, which is general in the sense that it is applicable to any well-defined binary program. Other

  12. Hybrid Algorithm for Optimal Load Sharing in Grid Computing

    Directory of Open Access Journals (Sweden)

    A. Krishnan

    2012-01-01

    Full Text Available Problem statement: Grid computing is a fast-growing industry which shares the resources of an organization in an effective manner. Resource sharing requires a well-optimized algorithmic structure; otherwise the waiting time and response time are increased and resource utilization is reduced. Approach: In order to avoid such reduction in the performance of the grid system, an optimal resource sharing algorithm is required. In recent days, many load sharing techniques have been proposed, which provide feasibility, but many critical issues are still present in these algorithms. Results: In this study a hybrid algorithm for the optimization of load sharing is proposed. The hybrid algorithm contains two components: a Hash Table (HT) and a Distributed Hash Table (DHT). Conclusion: The results of the proposed study show that the hybrid algorithm optimizes tasks better than existing systems.
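The DHT half of such a design typically rests on consistent hashing, which keeps key-to-node assignments stable as nodes join or leave. A generic sketch (the replica count and node names are illustrative; this is not the authors' algorithm):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent-hash ring: each node owns the arc of key space preceding
    its virtual-node hashes, so adding or removing a node only remaps a
    small fraction of the keys."""
    def __init__(self, nodes=(), replicas=50):
        self.replicas = replicas
        self._ring = []  # sorted list of (hash, node) virtual nodes
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        # Many virtual nodes per physical node smooth out the load.
        for i in range(self.replicas):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect_right(self._ring, (h, chr(0x10FFFF)))
        return self._ring[idx % len(self._ring)][1]
```

Lookups are O(log n) in the number of virtual nodes, and the same ring structure can back both the local hash table and its distributed counterpart.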

  13. Graduate Automotive Technology Education (GATE) Program: Center of Automotive Technology Excellence in Advanced Hybrid Vehicle Technology at West Virginia University

    Energy Technology Data Exchange (ETDEWEB)

    Nigle N. Clark

    2006-12-31

    This report summarizes the technical and educational achievements of the Graduate Automotive Technology Education (GATE) Center at West Virginia University (WVU), which was created to emphasize Advanced Hybrid Vehicle Technology. The Center has supported the graduate studies of 17 students in the Department of Mechanical and Aerospace Engineering and the Lane Department of Computer Science and Electrical Engineering. These students have addressed topics such as hybrid modeling, construction of a hybrid sport utility vehicle (in conjunction with the FutureTruck program), a MEMS-based sensor, on-board data acquisition for hybrid design optimization, linear engine design and engine emissions. Courses have been developed in Hybrid Vehicle Design, Mobile Source Powerplants, Advanced Vehicle Propulsion, Power Electronics for Automotive Applications and Sensors for Automotive Applications, and have been responsible for 396 hours of graduate student coursework. The GATE program also enhanced the WVU participation in the U.S. Department of Energy Student Design Competitions, in particular FutureTruck and Challenge X. The GATE support for hybrid vehicle technology enhanced understanding of hybrid vehicle design and testing at WVU and encouraged the development of a research agenda in heavy-duty hybrid vehicles. As a result, WVU has now completed three programs in hybrid transit bus emissions characterization, and WVU faculty are leading the Transportation Research Board effort to define life cycle costs for hybrid transit buses. Research and enrollment records show that approximately 100 graduate students have benefited substantially from the hybrid vehicle GATE program at WVU.

  14. Mastering cloud computing foundations and applications programming

    CERN Document Server

    Buyya, Rajkumar; Selvi, SThamarai

    2013-01-01

    Mastering Cloud Computing is designed for undergraduate students learning to develop cloud computing applications. Tomorrow's applications won't live on a single computer but will be deployed from and reside on a virtual server, accessible anywhere, any time. Tomorrow's application developers need to understand the requirements of building apps for these virtual systems, including concurrent programming, high-performance computing, and data-intensive systems. The book introduces the principles of distributed and parallel computing underlying cloud architectures and specifical

  15. GPAW optimized for Blue Gene/P using hybrid programming

    DEFF Research Database (Denmark)

    Kristensen, Mads Ruben Burgdorff; Happe, Hans Henrik; Vinter, Brian

    2009-01-01

    In this work we present optimizations of a Grid-based projector-augmented wave method software, GPAW, for the Blue Gene/P architecture. The improvements are achieved by exploring the advantage of shared and distributed memory programming, also known as hybrid programming. The work focuses on optimi...
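The shape of such a hybrid decomposition can be sketched in miniature: split the data across distributed-memory ranks, then reduce each rank's domain with shared-memory threads. In this Python stand-in a plain loop takes the place of MPI ranks and a thread pool takes the place of OpenMP threads; all sizes are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def hybrid_sum(grid, n_ranks=2, n_threads=4):
    """Two-level reduction in the spirit of hybrid programming: the grid is
    first split into per-rank domains (the distributed-memory level, here a
    plain loop standing in for MPI ranks), and each domain is reduced by a
    thread pool (the shared-memory level, standing in for OpenMP threads)."""
    def rank_partial(domain):
        chunk = max(1, len(domain) // n_threads)
        chunks = [domain[i:i + chunk] for i in range(0, len(domain), chunk)]
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            return sum(pool.map(sum, chunks))
    size = max(1, len(grid) // n_ranks)
    domains = [grid[i:i + size] for i in range(0, len(grid), size)]
    # The final cross-rank reduction corresponds to an MPI_Allreduce.
    return sum(rank_partial(d) for d in domains)
```

The payoff of the real hybrid scheme on Blue Gene/P is that one MPI rank per node with threads inside avoids duplicating per-rank memory and communication buffers across a node's cores.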

  16. A hybrid computational grid architecture for comparative genomics.

    Science.gov (United States)

    Singh, Aarti; Chen, Chen; Liu, Weiguo; Mitchell, Wayne; Schmidt, Bertil

    2008-03-01

    Comparative genomics provides a powerful tool for studying evolutionary changes among organisms, helping to identify genes that are conserved among species, as well as genes that give each organism its unique characteristics. However, the huge datasets involved makes this approach impractical on traditional computer architectures leading to prohibitively long runtimes. In this paper, we present a new computational grid architecture based on a hybrid computing model to significantly accelerate comparative genomics applications. The hybrid computing model consists of two types of parallelism: coarse grained and fine grained. The coarse-grained parallelism uses a volunteer computing infrastructure for job distribution, while the fine-grained parallelism uses commodity computer graphics hardware for fast sequence alignment. We present the deployment and evaluation of this approach on our grid test bed for the all-against-all comparison of microbial genomes. The results of this comparison are then used by phenotype--genotype explorer (PheGee). PheGee is a new tool that nominates candidate genes responsible for a given phenotype.

  17. A hybrid programming model for compressible gas dynamics using openCL

    Energy Technology Data Exchange (ETDEWEB)

    Bergen, Benjamin Karl [Los Alamos National Laboratory; Daniels, Marcus G [Los Alamos National Laboratory; Weber, Paul M [Los Alamos National Laboratory

    2010-01-01

    The current trend towards multicore/manycore and accelerated architectures presents challenges, both in portability, and also in the choices that developers must make on how to use the resources that these architectures provide. This paper explores some of the possibilities that are enabled by the Open Computing Language (OpenCL), and proposes a programming model that allows developers and scientists to more fully subscribe hybrid compute nodes, while, at the same time, reducing the impact of system failure.

  18. A Hybrid Brain-Computer Interface-Based Mail Client

    Directory of Open Access Journals (Sweden)

    Tianyou Yu

    2013-01-01

    Full Text Available Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and the P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.

  19. Computational simulation of intermingled-fiber hybrid composite behavior

    Science.gov (United States)

    Mital, Subodh K.; Chamis, Christos C.

    1992-01-01

    Three-dimensional finite-element analysis and a micromechanics based computer code ICAN (Integrated Composite Analyzer) are used to predict the composite properties and microstresses of a unidirectional graphite/epoxy primary composite with varying percentages of S-glass fibers used as hybridizing fibers at a total fiber volume of 0.54. The three-dimensional finite-element model used in the analyses consists of a group of nine fibers, all unidirectional, in a three-by-three unit cell array. There is generally good agreement between the composite properties and microstresses obtained from both methods. The results indicate that the finite-element methods and the micromechanics equations embedded in the ICAN computer code can be used to obtain the properties of intermingled fiber hybrid composites needed for the analysis/design of hybrid composite structures. However, the finite-element model should be big enough to be able to simulate the conditions assumed in the micromechanics equations.
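A first-order flavor of the micromechanics involved is the longitudinal rule of mixtures extended to two fiber types. The sketch below is a simplification for illustration — ICAN's equations are considerably more detailed — and the moduli in the usage example are assumed textbook values (in GPa), not the paper's data:

```python
def hybrid_longitudinal_modulus(vf_total, carbon_fraction,
                                E_carbon, E_glass, E_matrix):
    """First-order rule-of-mixtures estimate of the longitudinal modulus of
    an intermingled two-fiber hybrid composite: each constituent contributes
    in proportion to its volume fraction."""
    vf_carbon = vf_total * carbon_fraction
    vf_glass = vf_total * (1.0 - carbon_fraction)
    vm = 1.0 - vf_total  # matrix volume fraction
    return vf_carbon * E_carbon + vf_glass * E_glass + vm * E_matrix
```

For example, `hybrid_longitudinal_modulus(0.54, 0.5, 230.0, 85.0, 3.5)` gives roughly 86.7 GPa for a 50/50 carbon/glass mix at the paper's total fiber volume of 0.54.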

  20. Preschool Cookbook of Computer Programming Topics

    Science.gov (United States)

    Morgado, Leonel; Cruz, Maria; Kahn, Ken

    2010-01-01

    A common problem in computer programming use for education in general, not simply as a technical skill, is that children and teachers find themselves constrained by what is possible through limited expertise in computer programming techniques. This is particularly noticeable at the preliterate level, where constructs tend to be limited to…

  1. Computer Programming Goes Back to School

    Science.gov (United States)

    Kafai, Yasmin B.; Burke, Quinn

    2013-01-01

    We are witnessing a remarkable comeback of programming. Current initiatives to promote computational thinking and to broaden participation in computing signal a renewed interest to bring programming back into K-12 schools and help develop children as producers and not simply consumers of digital media. This essay explores the re-emergence of…

  3. Program For Displaying Computed Electromagnetic Fields

    Science.gov (United States)

    Hom, Kam W.

    1995-01-01

    The EM-ANIMATE computer program provides specialized visualization displays and animates output data on near fields and surface currents computed by an electromagnetic-field program, in particular MOM3D (LAR-15074). The program is window-based and contains a user-friendly graphical interface for setting viewing options, selecting cases, manipulating files, and the like. Written in FORTRAN 77. EM-ANIMATE is also available as part of a package, COS-10048, that includes MOM3D, an IRIS program computing near-field and surface-current solutions of the electromagnetic-field equations.

  4. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  5. Hybrid Propulsion Technology Program, phase 1. Volume 1: Executive summary

    Science.gov (United States)

    1989-01-01

    The study program was contracted to evaluate concepts of hybrid propulsion, select the optimum, and prepare a conceptual design package. Further, this study required preparation of a technology definition package to identify hybrid propulsion enabling technologies and planning to acquire that technology in Phase 2 and demonstrate that technology in Phase 3. Researchers evaluated two design philosophies for Hybrid Rocket Booster (HRB) selection. The first is an ASRM-modified hybrid wherein as many components/designs as possible were used from the present Advanced Solid Rocket Motor (ASRM) design. The second was an entirely new hybrid optimized booster using ASRM criteria as a point of departure, i.e., diameter, thrust-time curve, launch facilities, and external tank attach points. Researchers selected the new design based on the logic of optimizing a hybrid booster to provide NASA with a next generation vehicle in lieu of an interim advancement over the ASRM. The enabling technologies for hybrid propulsion are applicable to either, and the vehicle design may be selected at a downstream point (Phase 3) at NASA's discretion. The completion of these studies resulted in ranking the various concepts of boosters from the RSRM to a turbopump fed (TF) hybrid. The scoring resulting from the Figure of Merit (FOM) scoring system clearly shows a natural growth path where the turbopump fed solid liquid staged combustion hybrid maximizes payload and provides the highest safety, reliability, and the lowest life cycle cost.

  6. Fatigue of hybrid glass/carbon composites: 3D computational studies

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2014-01-01

    3D computational simulations of fatigue of hybrid carbon/glass fiber reinforced composites are carried out using X-FEM and multifiber unit cell models. A new software code for the automatic generation of unit cell multifiber models of composites with randomly misaligned fibers of various properties and geometrical parameters is developed. With the use of this program code and the X-FEM method, systematic investigations of the effect of the microstructure of hybrid composites (fraction of carbon versus glass fibers, misalignment, and interface strength) and the loading conditions (tensile versus compression cyclic loading effects) on the fatigue behavior of the materials are carried out. It was demonstrated that a higher fraction of carbon fibers in hybrid composites is beneficial for the fatigue lifetime of the composites under tension-tension cyclic loading, but might have a negative effect on the lifetime...

  7. Report on Computer Programs for Robotic Vision

    Science.gov (United States)

    Cunningham, R. T.; Kan, E. P.

    1986-01-01

    Collection of programs supports robotics research. Report describes computer-vision software library at NASA's Jet Propulsion Laboratory. Programs evolved during past 10 years of research into robotics. Collection includes low- and high-level image-processing software proved in applications ranging from factory automation to spacecraft tracking and grappling. Programs fall into several overlapping categories. Image-utilities category comprises low-level routines that provide computer access to image data and some simple graphical capabilities for displaying results of image processing.

  9. TARDEC Hybrid Electric (HE) Technology Program

    Science.gov (United States)

    2011-02-05

    [Briefing slides; only diagram labels are recoverable.] Hybrid electric drive propulsion architecture: engine, generator/motor, inverter, transmission, differential, tires, and energy storage system. The generator/motor delivers 3-phase AC power to the motor inverter, and the generator controller rectifies 3-phase AC to high-voltage DC for the energy storage system.

  10. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th…

  11. Evaluation of a Compact Hybrid Brain-Computer Interface System

    Directory of Open Access Journals (Sweden)

    Jaeyoung Shin

    2017-01-01

    Full Text Available We realized a compact hybrid brain-computer interface (BCI system by integrating a portable near-infrared spectroscopy (NIRS device with an economical electroencephalography (EEG system. The NIRS array was located on the subjects’ forehead, covering the prefrontal area. The EEG electrodes were distributed over the frontal, motor/temporal, and parietal areas. The experimental paradigm involved a Stroop word-picture matching test in combination with mental arithmetic (MA and baseline (BL tasks, in which the subjects were asked to perform either MA or BL in response to congruent or incongruent conditions, respectively. We compared the classification accuracies of each of the modalities (NIRS or EEG with that of the hybrid system. We showed that the hybrid system outperforms the unimodal EEG and NIRS systems by 6.2% and 2.5%, respectively. Since the proposed hybrid system is based on portable platforms, it is not confined to a laboratory environment and has the potential to be used in real-life situations, such as in neurorehabilitation.
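
    The reported gain of the hybrid system over either modality alone can be illustrated with a toy sketch (not from the paper): per-trial class probabilities from two unimodal classifiers are fused by a weighted average before thresholding. The weights, probabilities, and labels below are entirely hypothetical.

    ```python
    # Hypothetical sketch of decision-level fusion for a hybrid EEG+NIRS BCI.
    # Weights and per-trial probabilities are made up for illustration.

    def fuse_probabilities(p_eeg, p_nirs, w_eeg=0.6, w_nirs=0.4):
        """Weighted average of the two modalities' class probabilities."""
        return [w_eeg * pe + w_nirs * pn for pe, pn in zip(p_eeg, p_nirs)]

    def accuracy(probs, labels, threshold=0.5):
        """Fraction of trials where the thresholded probability matches the label."""
        preds = [1 if p >= threshold else 0 for p in probs]
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)

    # Hypothetical per-trial probabilities for the mental-arithmetic class.
    p_eeg = [0.9, 0.4, 0.8, 0.45, 0.2, 0.6]
    p_nirs = [0.7, 0.6, 0.55, 0.65, 0.3, 0.35]
    labels = [1, 0, 1, 1, 0, 0]

    acc_eeg = accuracy(p_eeg, labels)
    acc_fused = accuracy(fuse_probabilities(p_eeg, p_nirs), labels)
    ```

    On this toy data the fused classifier recovers a trial that EEG alone misses, mirroring (in spirit only) the accuracy improvement the study reports.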

  12. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin-devices like magnetic tunnel junctions (MTJ's), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation, such as the majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ˜20 mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve `mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ˜100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
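
    The majority evaluation attributed above to analog spin-current summation can be abstracted, purely conceptually, as a threshold neuron: input currents add on the channel, and the nanomagnet switches when the net spin current favors one state. This sketch is a behavioral abstraction, not a device model.

    ```python
    # Conceptual abstraction of the spin-based majority-evaluation neuron:
    # +/-1 inputs (current directions) sum analogically; the output state
    # is set by the sign of the weighted sum.

    def majority_neuron(inputs, weights):
        """Return +1 if the weighted sum of +/-1 inputs is positive, else -1."""
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1 if s > 0 else -1

    # With equal weights, a three-input neuron reduces to a majority gate.
    out = majority_neuron([1, 1, -1], [1, 1, 1])  # two of three inputs high
    ```

    Unequal weights model unequal injected currents, which is what lets the same primitive serve as a general neuron rather than only a majority gate.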

  13. CSP: A Multifaceted Hybrid Architecture for Space Computing

    Science.gov (United States)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  14. Comparison of visual programming and hybrid programming environments in transferring programming skills

    Science.gov (United States)

    Alrubaye, Hussein

    Teaching students programming skills at an early age has been one of the most important topics for researchers in recent decades. To simplify the learning process, block-based coding environments (Pencil Code, Scratch, App Inventor), which use draggable blocks to build apps, have been introduced and are now used by millions of students. It may seem more practical to leverage this existing reservoir of knowledge by extending the block-based environment toward text-based programming (plain text code) rather than starting with a whole new programming language. However, teachers face challenges in bringing the text-based environment to the classroom. One is that block-based tools do not allow students to write real-world programs, which limits students to writing only simple programs. There is also a big gap between the block-based and text-based environments: when students transfer from block-based to text-based, they feel they are in a totally new environment, since the transition involves moving between different code styles and code representations with different syntax. They move from commands with friendly shapes and colors to an environment of bare text, where they must memorize all the commands and learn the programming syntax. We aim to bridge this gap by developing a new environment, named hybrid-based, that allows the student to drag and drop block code and see the real code instead of blocks alone. The study was conducted with 18 students divided into two groups: one group used the block-based environment and the other used the hybrid-based environment, after which both groups learned to write code in the text-based environment.
We found that hybrid-based environments are better than block-based environments at transferring programming skills to text-based programming, because the hybrid-based environment enhances students' abilities in programming foundations, code modification, memorizing commands, and handling syntax errors.

  15. Programming in Biomolecular Computation: Programs, Self-Interpretation and Visualisation

    Directory of Open Access Journals (Sweden)

    J.G. Simonsen

    2011-01-01

    Full Text Available Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We identify a number of common features in programming that seem conspicuously absent from the literature on biomolecular computing; to partially redress this absence, we introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code, and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways and without arcane encodings of data and algorithm); it is also uniform: new ``hardware'' is not needed to solve new problems; and (last but not least) it is Turing complete in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient.

  16. FORTRAN computer program for seismic risk analysis

    Science.gov (United States)

    McGuire, Robin K.

    1976-01-01

    A program for seismic risk analysis is described which combines generality of application, efficiency and accuracy of operation, and the advantage of small storage requirements. The theoretical basis for the program is first reviewed, and the computational algorithms used to apply this theory are described. The information required for running the program is listed. Published attenuation functions describing the variation with earthquake magnitude and distance of expected values for various ground motion parameters are summarized for reference by the program user. Finally, suggestions for use of the program are made, an example problem is described (along with example problem input and output) and the program is listed.
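
    The core computation such a program performs can be sketched in miniature: combine a Gutenberg-Richter magnitude recurrence with an attenuation function to obtain the annual rate at which a ground-motion level is exceeded. The coefficients, the single-source geometry, and the deterministic (scatter-free) attenuation below are all simplifying assumptions for illustration, not McGuire's actual formulation.

    ```python
    import math

    # Toy hazard calculation: one source at fixed distance, hypothetical
    # Gutenberg-Richter and attenuation coefficients, no ground-motion scatter.

    def gr_rate(m, a=4.0, b=1.0):
        """Annual rate of events with magnitude >= m (Gutenberg-Richter)."""
        return 10 ** (a - b * m)

    def median_pga(m, r_km, c0=-3.5, c1=0.9, c2=1.2):
        """Hypothetical attenuation: median PGA (g) vs magnitude and distance."""
        return math.exp(c0 + c1 * m - c2 * math.log(r_km))

    def exceedance_rate(pga_level, r_km, m_min=5.0, m_max=8.0, dm=0.1):
        """Annual rate of exceeding pga_level, summed over magnitude bins."""
        rate = 0.0
        m = m_min
        while m < m_max:
            dn = gr_rate(m) - gr_rate(m + dm)  # rate of events in [m, m+dm)
            if median_pga(m + 0.5 * dm, r_km) > pga_level:
                rate += dn
            m += dm
        return rate
    ```

    Higher ground-motion levels are exceeded only by larger, rarer events, so the computed rate falls as the level rises, which is the qualitative behavior any seismic risk program must reproduce.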

  17. Computer program calculates transonic velocities in turbomachines

    Science.gov (United States)

    Katsanis, T.

    1971-01-01

    Computer program, TSONIC, combines velocity gradient and finite difference methods to obtain numerical solution for ideal, transonic, compressible flow for axial, radial, or mixed flow cascade of turbomachinery blades.

  18. Computer Programming Projects in Technology Courses.

    Science.gov (United States)

    Thomas, Charles R.

    1985-01-01

    Discusses programming projects in applied technology courses, examining documentation, formal reports, and implementation. Includes recommendations based on experience with a sophomore machine elements course which provided computers for problem solving exercises. (DH)

  19. Primer for Purchasing Computer Programs: Part 3.

    Science.gov (United States)

    Delf, Robert M.

    1981-01-01

    The last section of a three-part series deals with computer hardware requirements, program installation, and evaluation techniques. The first two parts appeared in the July and September 1981 issues. (Author/MLF)

  20. Primer for Purchasing Computer Programs: Part 2.

    Science.gov (United States)

    Delf, Robert M.

    1981-01-01

    The second article in a series of three to help purchasers obtain the best computer programs for their budgets, deals with bid solicitation and software evaluation. The first article appeared in the July 1981 issue. (Author/MLF)

  1. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    Science.gov (United States)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-04-01

    Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Centre for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for newborns and 1-, 2-, 5-, 10- and 15-year-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.

  2. Program Verification of Numerical Computation - Part 2

    OpenAIRE

    Pantelis, Garry

    2014-01-01

    These notes present some extensions of a formal method introduced in an earlier paper. The formal method is designed as a tool for program verification of numerical computation and forms the basis of the software package VPC. Included in the extensions that are presented here are disjunctions and methods for detecting non-computable programs. A more comprehensive list of the construction rules as higher order constructs is also presented.

  3. Computer programming and architecture the VAX

    CERN Document Server

    Levy, Henry

    2014-01-01

    Takes a unique systems approach to programming and architecture of the VAX. Using the VAX as a detailed example, the first half of this book offers a complete course in assembly language programming. The second half describes higher-level systems issues in computer architecture. Highlights include the VAX assembler and debugger, other modern architectures such as RISCs, multiprocessing and parallel computing, microprogramming, caches and translation buffers, and an appendix on the Berkeley UNIX assembler.

  4. Intraply Hybrid Composite Design

    Science.gov (United States)

    Chamis, C. C.; Sinclair, J. H.

    1986-01-01

    Several theoretical approaches combined in program. Intraply hybrid composites investigated theoretically and experimentally at Lewis Research Center. Theories developed during investigations and corroborated by attendant experiments used to develop computer program identified as INHYD (Intraply Hybrid Composite Design). INHYD includes several composites micromechanics theories, intraply hybrid composite theories, and integrated hygrothermomechanical theory. Equations from theories used by program as appropriate for user's specific applications.

  5. MFTF sensor verification computer program

    Energy Technology Data Exchange (ETDEWEB)

    Chow, H.K.

    1984-11-09

    The design, requirements document, and implementation of the MFE Sensor Verification System were accomplished by the Measurement Engineering Section (MES), a group which provides instrumentation for the MFTF magnet diagnostics. The sensors, installed on and around the magnets and solenoids housed in a vacuum chamber, will supply information about temperature, strain, pressure, liquid helium level, and magnet voltage to the facility operator for evaluation. As the sensors are installed, records must be maintained of their initial resistance values. As the work progresses, monthly checks will be made to ensure continued sensor health. Finally, after the MFTF-B demonstration, yearly checks will be performed, as well as checks of individual sensors as problems develop. The software to acquire and store the data was written by Harry Chow, Computations Department. The acquired data will be transferred to the MFE database computer system.

  6. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select awards, beginning on May 1, from requests submitted through the NCCS Portals to the e-Books online system. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  8. A Review of Hybrid Brain-Computer Interface Systems

    Directory of Open Access Journals (Sweden)

    Setare Amiri

    2013-01-01

    Full Text Available An increasing number of research activities and different types of studies of brain-computer interface (BCI) systems show the potential of this young research area. Research teams have studied features of different data acquisition techniques, brain activity patterns, feature extraction techniques, methods of classification, and many other aspects of a BCI system. However, conventional BCIs have not become fully practical, owing to limited accuracy and reliability, low information transfer rates, and limited user acceptability. A new approach to creating a more reliable BCI that takes advantage of each system is to combine two or more BCI systems with different brain activity patterns or different input signal sources. This type of BCI, called a hybrid BCI, may reduce the disadvantages of each conventional BCI system. In addition, hybrid BCIs may enable more applications and possibly increase the accuracy and the information transfer rate. However, the types of BCIs and their combinations should be considered carefully. In this paper, after introducing several types of BCIs and their combinations, we review and discuss hybrid BCIs, different possibilities to combine them, and their advantages and disadvantages.

  9. Computer-Aided Corrosion Program Management

    Science.gov (United States)

    MacDowell, Louis

    2010-01-01

    This viewgraph presentation reviews Computer-Aided Corrosion Program Management at John F. Kennedy Space Center. The contents include: 1) Corrosion at the Kennedy Space Center (KSC); 2) Requirements and Objectives; 3) Program Description, Background and History; 4) Approach and Implementation; 5) Challenges; 6) Lessons Learned; 7) Successes and Benefits; and 8) Summary and Conclusions.

  10. Integer Programming Models for Computational Biology Problems

    Institute of Scientific and Technical Information of China (English)

    Giuseppe Lancia

    2004-01-01

    Recent years have seen an impressive increase in the use of Integer Programming models for the solution of optimization problems originating in Molecular Biology. In this survey, some of the most successful Integer Programming approaches are described, and a broad overview is given of application areas in modern Computational Molecular Biology.
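
    A flavor of the formulations the survey covers can be given with a toy Integer Program of a kind common in computational biology, such as selecting a minimum set of probes that detects every target sequence. Binary variable x_j = 1 means probe j is chosen, coverage constraints require each target to be detected, and the objective minimizes the number of chosen probes. The instance below is invented, and the tiny model is solved by brute-force enumeration rather than an actual IP solver.

    ```python
    from itertools import product

    # Toy minimum set-cover IP: choose the fewest probes covering all targets.
    targets = {"t1", "t2", "t3", "t4"}
    probes = {  # hypothetical probe -> set of targets it detects
        "p1": {"t1", "t2"},
        "p2": {"t2", "t3"},
        "p3": {"t3", "t4"},
        "p4": {"t1", "t4"},
    }

    names = sorted(probes)
    best = None
    for x in product((0, 1), repeat=len(names)):        # all 0/1 assignments
        chosen = [n for n, xi in zip(names, x) if xi]
        covered = set().union(*(probes[n] for n in chosen)) if chosen else set()
        if covered >= targets:                          # coverage constraints
            if best is None or len(chosen) < len(best):  # minimize sum(x_j)
                best = chosen
    ```

    Real instances are far too large for enumeration, which is exactly why the branch-and-bound and cutting-plane machinery of Integer Programming solvers becomes relevant.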

  11. Computer Programming by Kindergarten Children Using LOGO.

    Science.gov (United States)

    Munro-Mavrias, Sandra

    Conservation ability, spatial motor ability, age, and gender were used as predictive variables in a study of 26 kindergarten children's computer programming ability. A preliminary pilot study with first graders had suggested that programming success was related to the ability to reverse thought processes. In both studies, children were taught to…

  12. Computer Programs for Settlement Analysis.

    Science.gov (United States)

    1980-10-01

    …distribution is uniform from top to bottom and the pressure at the middle of the stratum (depth 25 feet) represents the average. [Table 16: Output Data File for Program; numeric listing not reproduced.]

  13. Research in mathematical theory of computation. [computer programming applications

    Science.gov (United States)

    Mccarthy, J.

    1973-01-01

    Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first-order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special-purpose and domain-independent proof procedures, with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first-order checker.

  14. Passive and Hybrid Solar Energy Program

    Energy Technology Data Exchange (ETDEWEB)

    1980-11-01

    The background and scope of the program is presented in general terms. The Program Plan is summarized describing how individual projects are categorized into mission-oriented tasks according to market sector categories. The individual projects funded by DOE are presented as follows: residential buildings, commercial buildings, solar products, solar cities and towns, and agricultural buildings. A summary list of projects by institution (contractors) and indexed by market application area is included. (MHR)

  15. Hybrid programming model for implicit PDE simulations on multicore architectures

    KAUST Repository

    Kaushik, Dinesh K.

    2011-01-01

    The complexity of programming modern multicore processor based clusters is rapidly rising, with GPUs adding further demand for fine-grained parallelism. This paper analyzes the performance of the hybrid (MPI+OpenMP) programming model in the context of an implicit unstructured mesh CFD code. At the implementation level, the effects of cache locality, update management, work division, and synchronization frequency are studied. The hybrid model presents interesting algorithmic opportunities as well: the convergence of the linear system solver is quicker than in the pure MPI case, since the parallel preconditioner stays stronger when the hybrid model is used. This implies significant savings in the cost of communication and synchronization (explicit and implicit). Even though OpenMP-based parallelism is easier to implement (within a subdomain assigned to one MPI process for simplicity), getting good performance requires attention to data partitioning issues similar to those in the message-passing case. © 2011 Springer-Verlag.
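
    The two-level work division the hybrid model describes can be sketched schematically: the mesh is partitioned into subdomains (one per MPI process), and each subdomain's cells are processed by a team of threads (the OpenMP level). The sketch below stands in for both levels with plain Python; real MPI ranks, OpenMP pragmas, and the MPI reduction are all replaced by ordinary function calls to show only the structure of the decomposition.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Schematic two-level decomposition: outer "ranks" partition the mesh,
    # inner threads share one subdomain, as OpenMP threads share an MPI rank.

    def process_subdomain(cells, n_threads=4):
        """Thread team computes a partial sum of squares over one subdomain."""
        chunks = [cells[i::n_threads] for i in range(n_threads)]
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            partials = pool.map(lambda chunk: sum(c * c for c in chunk), chunks)
        return sum(partials)  # thread-level reduction within the "rank"

    def hybrid_norm_sq(mesh, n_ranks=2):
        """Outer loop stands in for MPI ranks; MPI_Allreduce becomes sum()."""
        subdomains = [mesh[i::n_ranks] for i in range(n_ranks)]
        return sum(process_subdomain(sd) for sd in subdomains)
    ```

    The point the paper makes carries over even to this sketch: how `cells` are partitioned among threads matters for locality, so the thread level needs the same data-partitioning care as the message-passing level.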

  16. Advancing Scholarship, Team Building, and Collaboration in a Hybrid Doctoral Program in Educational Leadership

    Science.gov (United States)

    Holmes, Barbara; Trimble, Meridee; Morrison-Danner, Dietrich

    2014-01-01

    Hybrid programs are changing the landscape of doctoral programs at American universities and colleges. The increased demand for hybrid doctoral programs, particularly for educational and career advancement, serves as an innovative way to increase scholarship, advance service, and promote leadership. Hybrid programs serve as excellent venues for…

  17. The revised solar array synthesis computer program

    Science.gov (United States)

    1970-01-01

    The Revised Solar Array Synthesis Computer Program is described. It is a general-purpose program which computes solar array output characteristics while accounting for the effects of temperature, incidence angle, charged-particle irradiation, and other degradation effects on various solar array configurations in either circular or elliptical orbits. Array configurations may consist of up to 75 solar cell panels arranged in any series-parallel combination not exceeding three series-connected panels in a parallel string and no more than 25 parallel strings in an array. Up to 100 separate solar array current-voltage characteristics, corresponding to 100 equal-time increments during the sunlight illuminated portion of an orbit or any 100 user-specified combinations of incidence angle and temperature, can be computed and printed out during one complete computer execution. Individual panel incidence angles may be computed and printed out at the user's option.
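
    The series-parallel bookkeeping the program performs can be illustrated with a deliberately simplified sketch: panels in series share one current and add voltages, so a string's operating current at a given array voltage comes from the summed panel curves; parallel strings then operate at that same voltage and add their currents. The linear panel I-V model and all parameter values below are invented for illustration and bear no relation to the program's actual cell models or degradation effects.

    ```python
    # Toy series-parallel array synthesis with a made-up linear panel curve.

    def panel_voltage(i, v_oc=20.0, i_sc=2.0):
        """Toy linear panel I-V curve: V falls from V_oc to 0 as I rises to I_sc."""
        return v_oc * (1.0 - i / i_sc)

    def string_current(v_array, n_series=3, i_sc=2.0, steps=2000):
        """Find the current at which n_series panel voltages sum to v_array."""
        best_i, best_err = 0.0, float("inf")
        for k in range(steps + 1):
            i = i_sc * k / steps
            err = abs(n_series * panel_voltage(i) - v_array)
            if err < best_err:
                best_i, best_err = i, err
        return best_i

    def array_current(v_array, n_parallel=25):
        """Parallel strings operate at the same voltage; their currents add."""
        return n_parallel * string_current(v_array)
    ```

    The program's limits quoted above (up to three series panels per string, up to 25 parallel strings) correspond to the `n_series` and `n_parallel` parameters of this sketch.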

  18. Using a Hybrid Approach for a Leadership Cohort Program

    Science.gov (United States)

    Norman, Maxine A.

    2013-01-01

    Because information technology continues to change rapidly, Extension is challenged with learning and using technology appropriately. We assert Extension cannot shy away from the challenges but must embrace technology because audiences and external forces demand it. A hybrid, or blended, format of a leadership cohort program was offered to public…

  19. An introduction to Python and computer programming

    CERN Document Server

    Zhang, Yue

    2015-01-01

    This book introduces the Python programming language and fundamental concepts in algorithms and computing. Its target audience includes students and engineers with little or no background in programming who need to master a practical programming language and learn basic thinking in computer science and programming. The main contents come from lecture notes for engineering students from all disciplines and have received high ratings. The materials and their ordering have been adjusted repeatedly according to classroom reception. Compared to alternative textbooks on the market, this book introduces the underlying Python implementation of number, string, list, tuple, dict, function, class, instance and module objects in a consistent and easy-to-understand way, making assignment, function definition, function call, mutability and binding environments understandable inside-out. By giving the abstraction of implementation mechanisms, this book builds a solid understanding of the Python programming language.

  20. The Computational Physics Program of the national MFE Computer Center

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

  1. Permanent-File-Validation Utility Computer Program

    Science.gov (United States)

    Derry, Stephen D.

    1988-01-01

    Errors in files are detected and corrected during operation. The Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with a mechanism to verify the integrity of the permanent file base. It locates and identifies permanent file errors in the Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors are written to a listing file and to the system and job day files. The program operates by reading system tables, catalog tracks, permit sectors, and disk linkage bytes to validate expected against actual file linkages. It has been used extensively to identify and locate errors in permanent files and to enable online correction, reducing computer-system downtime.
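
    The linkage validation described above can be sketched generically: walk each file's chain of sector links and flag cycles, broken links, and sectors claimed by more than one file. The data layout below is hypothetical and does not reflect the actual CDC CYBER NOS on-disk structures.

    ```python
    # Generic chain-linkage check of the kind a file-validation utility
    # performs (hypothetical layout, not the NOS MST/TRT/PFC structures).

    def validate_chains(links, heads):
        """links: sector -> next sector (None ends a chain); heads: first sectors."""
        errors, owner = [], {}
        for head in heads:
            seen, s = set(), head
            while s is not None:
                if s in seen:
                    errors.append(("cycle", head))
                    break
                if s not in links:
                    errors.append(("broken-link", s))
                    break
                if s in owner and owner[s] != head:
                    errors.append(("cross-linked", s))
                owner[s] = head
                seen.add(s)
                s = links[s]
        return errors

    links = {1: 2, 2: 3, 3: None, 4: 5, 5: 4}   # second file's chain loops
    errors = validate_chains(links, heads=[1, 4])
    ```

    A real utility would additionally cross-check these chains against the allocation tables (as PFVAL does against the MST and TRT) so that orphaned but allocated sectors are also reported.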

  2. Maze learning by a hybrid brain-computer system

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-01

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components contribute to a novel hybrid brain-computer system, the ratbot, which exhibits superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  3. Computational fluid dynamics challenges for hybrid air vehicle applications

    Science.gov (United States)

    Carrin, M.; Biava, M.; Steijl, R.; Barakos, G. N.; Stewart, D.

    2017-06-01

    This paper begins by comparing turbulence models for the prediction of hybrid air vehicle (HAV) flows. A 6:1 prolate spheroid is employed for validation of the computational fluid dynamics (CFD) method. An analysis of turbulent quantities is presented and the Shear Stress Transport (SST) k-ω model is compared against a k-ω Explicit Algebraic Stress Model (EASM) within the unsteady Reynolds-Averaged Navier-Stokes (RANS) framework. Further comparisons involve Scale Adaptive Simulation models and a local transition transport model. The results show that the flow around the vehicle at low pitch angles is sensitive to transition effects. At high pitch angles, the vortices generated on the suction side provide substantial lift augmentation and are better resolved by EASMs. The validated CFD method is employed for the flow around a shape similar to the Airlander aircraft of Hybrid Air Vehicles Ltd. The sensitivity of the transition location to the Reynolds number is demonstrated and the role of each vehicle's component is analyzed. It was found that the fins contributed the most to increasing the lift and drag.

  4. Universal quantum computation using all-optical hybrid encoding

    Institute of Scientific and Technical Information of China (English)

    郭奇; 程留永; 王洪福; 张寿

    2015-01-01

    By employing displacement operations, single-photon subtractions, and weak cross-Kerr nonlinearity, we propose an alternative way of implementing several universal quantum logic gates for all-optical hybrid qubits encoded in both single-photon polarization states and coherent states. Since these schemes can be implemented straightforwardly using only local operations, without a teleportation procedure, they require fewer physical resources and simpler operations than existing schemes. With the help of displacement operations, a large phase shift of the coherent state can be obtained via currently available tiny cross-Kerr nonlinearities. Thus, all of these schemes are nearly deterministic and feasible with current technology, which makes them suitable for large-scale quantum computing.

  5. "Hybrids" and the Gendering of Computing Jobs in Australia

    Directory of Open Access Journals (Sweden)

    Gillian Whitehouse

    2005-05-01

    Full Text Available This paper presents recent Australian evidence on the extent to which women are entering “hybrid” computing jobs combining technical and communication or “people management” skills, and the way these skill combinations are valued at organisational level. We draw on a survey of detailed occupational roles in large IT firms to examine the representation of women in a range of jobs consistent with the notion of “hybrid”, and analyse the discourse around these sorts of skills in a set of organisational case studies. Our research shows a traditional picture of labour market segmentation, with limited representation of women in high status jobs, and their relatively greater prevalence in more routine areas of the industry. While our case studies highlight perceptions of the need for hybrid roles and assumptions about the suitability of women for such jobs, the ongoing masculinity of core development functions appears untouched by this discourse.

  6. A CAD (Classroom Assessment Design) of a Computer Programming Course

    Science.gov (United States)

    Hawi, Nazir S.

    2012-01-01

    This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified…

  7. [Computer programming for radiocardiography and radiocyclography].

    Science.gov (United States)

    Khorvat, M; Sabo, D; Tomor, B; Almashi, L; Debretsi, T; Ludvig, K

    1977-09-01

    Digital radiocardiography with linear programming, implemented in the ALGOL language on an ODRA-1204 computer, is described. Cardiac dynamic parameters were determined for patients at rest and under load. Radioisotope methods were combined with conventional cardiologic examinations, particularly microcatheterization.

  8. Contributions to computational stereology and parallel programming

    DEFF Research Database (Denmark)

    Rasmusson, Allan

    rotator, even without the need for isotropic sections. To meet the need for computational power to perform image restoration of virtual tissue sections, parallel programming on GPUs has also been part of the project. This has lead to a significant change in paradigm for a previously developed surgical...

  9. Computer Program Re-layers Engineering Drawings

    Science.gov (United States)

    Crosby, Dewey C., III

    1990-01-01

    RULCHK computer program aids in structuring layers of information pertaining to part or assembly designed with software described in article "Software for Drawing Design Details Concurrently" (MFS-28444). Checks and optionally updates structure of layers for part. Enables designer to construct model and annotate its documentation without burden of manually layering part to conform to standards at design time.

  10. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  11. Using Wikis to Learn Computer Programming

    Science.gov (United States)

    González-Ortega, David; Díaz-Pernas, Francisco Javier; Martínez-Zarzuela, Mario; Antón-Rodríguez, Míriam; Díez-Higuera, José Fernando; Boto-Giralda, Daniel; de La Torre-Díez, Isabel

    In this paper, we analyze the suitability of wikis in education, especially for learning computer programming, and present a wiki-based teaching innovation activity carried out in the first year of Telecommunication Engineering over two academic years. The activity consisted of the creation of a wiki to collect errors made by students while they were coding programs in the C language. The activity was framed in a collaborative learning strategy, in which all the students had to collaborate and be responsible for the final result, but also in a competitive learning strategy, in which the groups had to compete to make original, meaningful contributions to the wiki. The use of a wiki for learning computer programming was very satisfactory. A wiki makes it possible to continuously monitor the work of the students, who become publishers and evaluators of content rather than mere consumers of information, in an active learning approach.

  12. Employing subgoals in computer programming education

    Science.gov (United States)

    Margulieux, Lauren E.; Catrambone, Richard; Guzdial, Mark

    2016-01-01

    The rapid integration of technology into our professional and personal lives has left many education systems ill-equipped to deal with the influx of people seeking computing education. To improve computing education, we are applying techniques that have been developed for other procedural fields. The present study applied such a technique, subgoal labeled worked examples, to explore whether it would improve programming instruction. The first two experiments, conducted in a laboratory, suggest that the intervention improves undergraduate learners' problem-solving performance and affects how learners approach problem-solving. The third experiment demonstrates that the intervention has similar, and perhaps stronger, effects in an online learning environment with in-service K-12 teachers who want to become qualified to teach computing courses. By implementing this subgoal intervention as a tool for educators to teach themselves and their students, education systems could improve computing education and better prepare learners for an increasingly technical world.

  13. Comparing Hybrid Learning with Traditional Approaches on Learning the Microsoft Office Power Point 2003 Program in Tertiary Education

    Science.gov (United States)

    Vernadakis, Nikolaos; Antoniou, Panagiotis; Giannousi, Maria; Zetou, Eleni; Kioumourtzoglou, Efthimis

    2011-01-01

    The purpose of this study was to determine the effectiveness of a hybrid learning approach to deliver a computer science course concerning the Microsoft office PowerPoint 2003 program in comparison to delivering the same course content in the form of traditional lectures. A hundred and seventy-two first year university students were randomly…

  14. HyCFS, a high-resolution shock capturing code for numerical simulation on hybrid computational clusters

    Science.gov (United States)

    Shershnev, Anton A.; Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Khotyanovsky, Dmitry V.

    2016-10-01

    The present paper describes HyCFS code, developed for numerical simulation of compressible high-speed flows on hybrid CPU/GPU (Central Processing Unit / Graphical Processing Unit) computational clusters on the basis of full unsteady Navier-Stokes equations, using modern shock capturing high-order TVD (Total Variation Diminishing) and WENO (Weighted Essentially Non-Oscillatory) schemes on general curvilinear structured grids. We discuss the specific features of hybrid architecture and details of program implementation and present the results of code verification.
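    The TVD property mentioned above is typically enforced with a slope limiter during reconstruction. A minimal minmod-limited slope computation, as an illustration of the idea rather than HyCFS code:

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, the smaller-magnitude slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """TVD-limited cell slopes for a 1-D array of cell averages.

    Boundary cells get zero slope (first-order) in this sketch.
    """
    n = len(u)
    s = [0.0] * n
    for i in range(1, n - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s
```

At a local maximum the two one-sided slopes have opposite signs, so the limited slope is zero and no new extremum can be created, which is what keeps the total variation from growing.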

  15. Electric and hybrid vehicle program; Site Operator Program

    Energy Technology Data Exchange (ETDEWEB)

    Warren, J.F.

    1992-05-01

    Activities during the second quarter included the second meeting of the Site Operators in Phoenix, AZ in late April. The meeting was held in conjunction with the Solar and Electric 500 Race activities. Delivery of vehicles ordered previously has begun, although two of the operators are experiencing some delays in receiving their vehicles. Public demonstration activities continue, with an apparent increasing level of awareness and interest being displayed by the public. Initial problems with the Site Operator Database have been corrected and revised copies of the program have been supplied to the Program participants. Operating and maintenance data are being supplied and submitted to INEL on a monthly basis. Interest in the Site Operator Program is being reflected in requests for information from several organizations from across the country, representing a wide diversity of interests. These organizations have been referred to existing Site Operators with the explanation that the program will not be adding new participants, but that most of the existing organizations are willing to work with other groups. The exception to this was the addition of Potomac Electric Power Company (PEPCO) to the program. PEPCO has been awarded a subcontract to operate and maintain the DOE-owned G-Van and Escort located in Washington, DC. They will provide data on these vehicles, as well as a Solectria Force which PEPCO has purchased. The Task Force intends to be actively involved in infrastructure development in a wide range of areas. These include, among others, personnel development, safety, charging, and servicing. Work continues in these areas. York Technical College (YORK) has completed the draft outline for the EV Technician course. This is being circulated to organizations around the country for comments. Kansas State University (KSU) is working with a private sector company to develop an energy-dispensing meter for opportunity charging in public areas.

  17. Phase I of the Near-Term Hybrid Passenger-Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    Under contract to the Jet Propulsion Laboratory of the California Institute of Technology, Minicars conducted Phase I of the Near-Term Hybrid Passenger Vehicle (NTHV) Development Program. This program led to the preliminary design of a hybrid (electric and internal combustion engine powered) vehicle and fulfilled the objectives set by JPL. JPL requested that the report address certain specific topics. A brief summary of all Phase I activities is given initially; the hybrid vehicle preliminary design is described in Sections 4, 5, and 6. Table 2 of the Summary lists performance projections for the overall vehicle and some of its subsystems. Section 4.5 gives references to the more-detailed design information found in the Preliminary Design Data Package (Appendix C). Alternative hybrid-vehicle design options are discussed in Sections 3 through 6. A listing of the tradeoff study alternatives is included in Section 3. Computer simulations are discussed in Section 9. Section 8 describes the supporting economic analyses. Reliability and safety considerations are discussed specifically in Section 7 and are mentioned in Sections 4, 5, and 6. Section 10 lists conclusions and recommendations arrived at during the performance of Phase I. A complete bibliography follows the list of references.

  18. Near term hybrid passenger vehicle development program, phase 1

    Science.gov (United States)

    1980-01-01

    Missions for hybrid vehicles that promise to yield high petroleum impact were identified, and a preliminary design was developed that satisfies the mission requirements and performance specifications. Technologies that are critical to successful vehicle design, development and fabrication were determined. Trade-off studies to maximize fuel savings were used to develop initial design specifications of the near term hybrid vehicle. Various designs were "driven" through detailed computer simulations which calculate the petroleum consumption in standard driving cycles, the petroleum and electricity consumptions over the specified missions, and the vehicle's life cycle costs over a 10 year vehicle lifetime. Particular attention was given to the selection of the electric motor, heat engine, drivetrain, battery pack and control system. The preliminary design reflects a modified current compact car powered by a currently available turbocharged diesel engine and a 24 kW (peak) compound dc electric motor.

  19. A Hybrid Segmentation Framework for Computer-Assisted Dental Procedures

    Science.gov (United States)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza; Reza Asharif, Mohammad

    Teeth segmentation in computed tomography (CT) images is a major and challenging task for various computer-assisted procedures. In this paper, we introduce a hybrid method for quantification of teeth in CT volumetric datasets inspired by our previous experience and anatomical knowledge of teeth and jaws. In this regard, we propose a novel segmentation technique using adaptive thresholding, morphological operations, panoramic re-sampling and a variational level set algorithm. The proposed method consists of several steps as follows: first, we determine the operation region in CT slices. Second, the bony tissues are separated from other tissues by utilizing an adaptive thresholding technique based on a 3D pulse-coupled neural network (PCNN). Third, teeth tissue is classified from other bony tissues by employing panorex lines and anatomical knowledge of teeth in the jaws. In this case, the panorex lines are estimated using Otsu thresholding and mathematical morphology operators. The method then proceeds by calculating the orthogonal lines corresponding to the panorex lines and panoramically re-sampling the dataset. Separation of the upper and lower jaws and initial segmentation of teeth are performed by employing the integral projections of the panoramic dataset. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to the final contour. In the last step, a surface rendering algorithm known as marching cubes (MC) is applied for volumetric visualization. The proposed algorithm was evaluated on 30 cases. Segmented images were compared with manually outlined contours. We compared the performance of the segmentation method, using ROC analysis, against thresholding, watershed, and our previous methods. The proposed method performed best. Also, our algorithm has the advantage of high speed compared to our previous methods.
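    The Otsu thresholding step used above to estimate the panorex lines can be sketched in a few lines. This is a generic textbook implementation, not the authors' code:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the gray level maximizing between-class variance.

    pixels: iterable of integer intensities in [0, levels).
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    w0 = 0       # pixel count below/at candidate threshold
    sum0 = 0.0   # intensity sum below/at candidate threshold
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue  # one class empty: variance undefined
        sum0 += t * hist[t]
        m0 = sum0 / w0                        # mean of lower class
        m1 = (sum_all - sum0) / (total - w0)  # mean of upper class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a strongly bimodal intensity histogram, such as bone against soft tissue, the maximizing threshold falls between the two modes.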

  20. Scientific Computing in the CH Programming Language

    Directory of Open Access Journals (Sweden)

    Harry H. Cheng

    1993-01-01

    Full Text Available We have developed a general-purpose block-structured interpretive programming language. The syntax and semantics of this language, called CH, are similar to C. CH retains most features of C from the scientific computing point of view. In this paper, the extension of C to CH for numerical computation with real numbers will be described. Metanumbers of −0.0, 0.0, Inf, −Inf, and NaN are introduced in CH. Through these metanumbers, the power of the IEEE 754 arithmetic standard is easily available to the programmer. These metanumbers are extended to commonly used mathematical functions in the spirit of the IEEE 754 standard and ANSI C. The definitions for manipulation of these metanumbers in I/O; arithmetic, relational, and logic operations; and built-in polymorphic mathematical functions are defined. The capabilities of bitwise, assignment, address and indirection, increment and decrement, as well as type conversion operations in ANSI C are extended in CH. In this paper, mainly the new linguistic features of CH in comparison to C will be described. Example programs written in CH with metanumbers and polymorphic mathematical functions will demonstrate the capabilities of CH in scientific computing.
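    The metanumbers described above are the IEEE 754 special values that most modern languages now expose. For illustration, the same behaviour in Python (note that Python raises on 1.0/0.0 rather than producing Inf as CH does, so Inf is constructed explicitly here):

```python
import math

inf = float("inf")
nan = float("nan")

# Arithmetic with metanumbers follows the IEEE 754 rules.
assert 1.0 / inf == 0.0
assert inf + inf == inf
assert math.isnan(inf - inf)   # Inf - Inf is indeterminate, hence NaN
assert nan != nan              # NaN compares unequal even to itself

# Signed zero: -0.0 equals 0.0, but its sign is observable.
assert -0.0 == 0.0
assert math.copysign(1.0, -0.0) == -1.0
```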

  1. Computational analysis on plug-in hybrid electric motorcycle chassis

    Science.gov (United States)

    Teoh, S. J.; Bakar, R. A.; Gan, L. M.

    2013-12-01

    Plug-in hybrid electric motorcycles (PHEMs) are an alternative that promotes sustainability and lower emissions. However, the overall PHEM system packaging is constrained by the limited space in a motorcycle chassis. In this paper, a chassis applying the concept of a chopper is analysed for application in a PHEM. The chassis three-dimensional (3D) model is built with CAD software. The PHEM power-train components and drive-train mechanisms are integrated into the 3D model to ensure the chassis provides sufficient space. Besides that, a human dummy model is built into the 3D model to ensure the rider's ergonomics and comfort. The chassis 3D model then undergoes stress-strain simulation. The simulation predicts the stress distribution, displacement and factor of safety (FOS). The data are used to identify the critical points, thus indicating whether the chassis design is acceptable or needs to be redesigned/modified to meet the required strength. Critical points are locations of highest stress, which might cause the chassis to fail; for a motorcycle chassis they occur at the joints at the triple tree and the rear-absorber bracket. In conclusion, the computational analysis predicts the stress distribution and provides a guideline for developing a safe prototype chassis.

  2. Do Teachers Need to Know about Computer Programming?

    Science.gov (United States)

    Yoder, Sharon; Moursund, David

    1993-01-01

    The article explores some of the history that has led to the current emphasis on teaching educators to use computer applications without teaching the underlying computer programing or computer science, arguing that all teachers should receive some instruction in computer science and computer programing. (SM)

  3. A research program in empirical computer science

    Science.gov (United States)

    Knight, J. C.

    1991-01-01

    During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem that was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance. The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.

  4. Gender Differences in the Use of Computers, Programming, and Peer Interactions in Computer Science Classrooms

    Science.gov (United States)

    Stoilescu, Dorian; Egodawatte, Gunawardena

    2010-01-01

    Research shows that female and male students in undergraduate computer science programs view computer culture differently. Female students are interested more in the use of computers than in doing programming, whereas male students see computer science mainly as a programming activity. The overall purpose of our research was not to find new…

  6. Computer programming for generating visual stimuli.

    Science.gov (United States)

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
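    The trial-event structure described above (stimulus display, response monitoring, reaction-time measurement) can be sketched generically. Here `draw_stimulus` and `get_response` are placeholders for whatever graphics and input backend a lab chooses; the injectable `clock` makes timing testable:

```python
import time

def run_trial(draw_stimulus, get_response, timeout_s=2.0, clock=time.monotonic):
    """One trial: show a stimulus, then poll for a response until timeout.

    draw_stimulus() renders the display; get_response() returns a key or None.
    Returns (response, reaction_time_s), or (None, None) on timeout.
    """
    draw_stimulus()
    start = clock()
    while clock() - start < timeout_s:
        key = get_response()
        if key is not None:
            return key, clock() - start
    return None, None
```

Contingency algorithms of the kind the abstract mentions (e.g., staircase procedures) would then adjust the next trial's stimulus parameters based on the returned response and reaction time.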

  7. Translator program converts computer printout into braille language

    Science.gov (United States)

    Powell, R. A.

    1967-01-01

    Computer program converts print image tape files into six dot Braille cells, enabling a blind computer programmer to monitor and evaluate data generated by his own programs. The Braille output is printed 8 lines per inch.
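    The six-dot Braille cells mentioned above can be modelled as sets of dot positions. The letter table below covers a-j of standard Braille (dots are numbered 1-3 down the left column, 4-6 down the right); the ASCII rendering is purely illustrative, not the NASA program's output format:

```python
# Dot numbering:  1 4
#                 2 5
#                 3 6
BRAILLE = {  # letters a-j of standard Braille (they use dots 1, 2, 4, 5 only)
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}

def render_cell(dots):
    """Render one six-dot cell as three rows of two characters ('o' = raised)."""
    rows = [(1, 4), (2, 5), (3, 6)]
    return ["".join("o" if d in dots else "." for d in pair) for pair in rows]
```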

  8. An Application of Programming and Mathematics: Writing a Computer Graphing Program.

    Science.gov (United States)

    Waits, Bert; Demana, Franklin

    1988-01-01

    Suggests computer graphing as a topic for computer programing. Reviews Apple II computer graphics information and gives suggestions for writing the programs. Presents equations to help place information onto the screen with proper coordinates. (MVL)
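    Placing information onto the screen with proper coordinates comes down to a world-to-screen mapping. A generic sketch of that transform (the article targets Apple II graphics; this is plain Python for illustration):

```python
def world_to_screen(x, y, xmin, xmax, ymin, ymax, width, height):
    """Map a point in world coordinates to integer pixel coordinates.

    The y axis is flipped because screen row 0 is at the top.
    """
    sx = (x - xmin) / (xmax - xmin) * (width - 1)
    sy = (ymax - y) / (ymax - ymin) * (height - 1)
    return round(sx), round(sy)
```

Plotting a function then reduces to evaluating it at each screen column's world x and setting the mapped pixel.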

  9. STEW A Nonlinear Data Modeling Computer Program

    CERN Document Server

    Chen, H

    2000-01-01

    A nonlinear data modeling computer program, STEW, employing the Levenberg-Marquardt algorithm, has been developed to model the experimental {sup 239}Pu(n,f) and {sup 235}U(n,f) cross sections. This report presents results of the modeling of the {sup 239}Pu(n,f) and {sup 235}U(n,f) cross-section data. The calculation of the fission transmission coefficient is based on the double-humped-fission-barrier model of Bjornholm and Lynn. Incident neutron energies of up to 5 MeV are considered.
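    The Levenberg-Marquardt algorithm that STEW employs interpolates between Gauss-Newton and damped gradient steps via an adaptive damping factor. A minimal two-parameter sketch of the idea (illustrative only, not STEW itself or its cross-section model):

```python
import math

def lm_fit(xs, ys, model, jac, p0, lam=1e-3, iters=100):
    """Minimal Levenberg-Marquardt for a two-parameter model.

    model(x, p) -> prediction; jac(x, p) -> (df/dp[0], df/dp[1]).
    Solves the damped normal equations (JtJ + lam*diag(JtJ)) dp = Jt r.
    """
    def sse(p):
        return sum((y - model(x, p)) ** 2 for x, y in zip(xs, ys))

    p = list(p0)
    for _ in range(iters):
        # Accumulate the 2x2 matrix JtJ and the gradient Jt r.
        a11 = a12 = a22 = g1 = g2 = 0.0
        for x, y in zip(xs, ys):
            r = y - model(x, p)
            j1, j2 = jac(x, p)
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            g1 += j1 * r;   g2 += j2 * r
        b11, b22 = a11 * (1.0 + lam), a22 * (1.0 + lam)
        det = b11 * b22 - a12 * a12
        if det == 0.0:
            break
        d1 = (g1 * b22 - g2 * a12) / det
        d2 = (g2 * b11 - g1 * a12) / det
        trial = [p[0] + d1, p[1] + d2]
        if sse(trial) < sse(p):
            p, lam = trial, lam * 0.5   # step accepted: relax damping
        else:
            lam *= 10.0                 # step rejected: increase damping
    return p
```

Small `lam` gives near-Gauss-Newton steps for fast convergence near the optimum; large `lam` gives short, safe gradient-like steps far from it.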

  10. An overview of the SAFSIM computer program

    Energy Technology Data Exchange (ETDEWEB)

    Dobranich, D.

    1993-01-01

    SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program that provides engineering simulations of user-specified flow networks at the system level. It includes fluid mechanics, heat transfer, and reactor dynamics capabilities. SAFSIM provides sufficient versatility to allow the simulation of almost any flow system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary goals of SAFSIM development. The current capabilities of SAFSIM are summarized and some sample applications are presented. It is applied here to a nuclear thermal propulsion system and nuclear rocket engine test facility.

  11. STEW: A Nonlinear Data Modeling Computer Program

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.

    2000-03-04

    A nonlinear data modeling computer program, STEW, employing the Levenberg-Marquardt algorithm, has been developed to model the experimental {sup 239}Pu(n,f) and {sup 235}U(n,f) cross sections. This report presents results of the modeling of the {sup 239}Pu(n,f) and {sup 235}U(n,f) cross-section data. The calculation of the fission transmission coefficient is based on the double-humped-fission-barrier model of Bjornholm and Lynn. Incident neutron energies of up to 5 MeV are considered.

  12. 78 FR 15730 - Privacy Act of 1974; Computer Matching Program

    Science.gov (United States)

    2013-03-12

    ... SECURITY Office of the Secretary Privacy Act of 1974; Computer Matching Program AGENCY: U.S. Citizenship...: Privacy Act of 1974; Computer Matching Program between the Department of Homeland Security, U.S... notice of the existence of a computer matching program between the Department of Homeland Security,...

  13. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  14. A Hybrid Program Projects Selection Model for Nonprofit TV Stations

    Directory of Open Access Journals (Sweden)

    Kuei-Lun Chang

    2015-01-01

    Full Text Available This study develops a hybrid multiple criteria decision making (MCDM) model to select program projects for nonprofit TV stations on the basis of managers' perceptions. Based on the concepts of the balanced scorecard (BSC) and corporate social responsibility (CSR), we collect criteria for selecting the best program project. The fuzzy Delphi method, which can lead to better criteria selection, is used to refine the criteria. Next, considering the interdependence among the selection criteria, the analytic network process (ANP) is used to obtain their weights. To avoid the extensive calculations and additional pairwise comparisons required by ANP, the technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. A case study is presented to demonstrate the applicability of the proposed model.
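    The TOPSIS ranking step can be sketched directly from its standard definition (vector normalization, then weighted distances to the ideal and anti-ideal points). A generic implementation for illustration, not the authors' code:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix[i][j]: score of alternative i on criterion j.
    weights[j]: criterion weights (e.g., from ANP).
    benefit[j]: True if larger is better for criterion j, False for cost criteria.
    Returns closeness coefficients in [0, 1]; higher means closer to the ideal.
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) value per criterion.
    best = [max(v[i][j] for i in range(m)) if benefit[j]
            else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = math.sqrt(sum((v[i][j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

In the hybrid model above, the ANP-derived weights would feed into `weights`, replacing the equal weights used in the test below.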

  15. ALTERNATIVES TO IMPROVE HYBRIDIZATION EFFICIENCY IN Eucalyptus BREEDING PROGRAMS

    Directory of Open Access Journals (Sweden)

    Roselaine Cristina Pereira

    2002-01-01

    Full Text Available Simple and quick hybridization procedures and ways to keep pollen grains viable for long periods are sought in plant breeding programs to provide greater work flexibility. The present study was carried out to assess the efficiency of pollinations made shortly after flower emasculation and the viability of stored pollen from Eucalyptus camaldulensis and Eucalyptus urophylla clones cultivated in Northwestern Minas Gerais State. Controlled pollinations were carried out at zero, one, three, five and seven days after emasculation. Hybridization efficiency was assessed by the percentage of viable fruits, the number of seeds produced per fruit, the percentage of viable seeds and also by cytological observation of the pollen development along the style. Flower buds from clones of the two species were collected close to anthesis to assess the viability of pollen grain storage. Pollen was then collected and stored in a freezer (-18°C) for 1, 2 and 3 months. Pollen assessment was carried out by in vitro and in vivo germination tests. The efficiency of the pollinations varied with their delay and also between species. The greatest pollination efficiency was obtained when they were carried out on the third and fifth day after emasculation, but those performed simultaneously with emasculation produced enough seeds to allow this practice in breeding programs. The decrease in pollen viability with storage was not sufficiently significant to preclude the use of this procedure in artificial hybridization.

  16. Mixed model approaches for the identification of QTLs within a maize hybrid breeding program.

    Science.gov (United States)

    van Eeuwijk, Fred A; Boer, Martin; Totir, L Radu; Bink, Marco; Wright, Deanne; Winkler, Christopher R; Podlich, Dean; Boldman, Keith; Baumgarten, Andy; Smalley, Matt; Arbelbide, Martin; ter Braak, Cajo J F; Cooper, Mark

    2010-01-01

    Two outlines for mixed model based approaches to quantitative trait locus (QTL) mapping in existing maize hybrid selection programs are presented: a restricted maximum likelihood (REML) and a Bayesian Markov Chain Monte Carlo (MCMC) approach. The methods use the in-silico-mapping procedure developed by Parisseaux and Bernardo (2004) as a starting point. The original single-point approach is extended to a multi-point approach that facilitates interval mapping procedures. For computational and conceptual reasons, we partition the full set of relationships from founders to parents of hybrids into two types of relations by defining so-called intermediate founders. QTL effects are defined in terms of those intermediate founders. Marker based identity by descent relationships between intermediate founders define structuring matrices for the QTL effects that change along the genome. The dimension of the vector of QTL effects is reduced by the fact that there are fewer intermediate founders than parents. Furthermore, additional reduction in the number of QTL effects follows from the identification of founder groups by various algorithms. As a result, we obtain a powerful mixed model based statistical framework to identify QTLs in genetic backgrounds relevant to the elite germplasm of a commercial breeding program. The identification of such QTLs will provide the foundation for effective marker assisted and genome wide selection strategies. Analyses of an example data set show that QTLs are primarily identified in different heterotic groups and point to complementation of additive QTL effects as an important factor in hybrid performance.

  17. 40 CFR Appendix C to Part 67 - Computer Program

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program...

  18. Computer modeling for investigating the stress-strain state of beams with hybrid reinforcement

    Directory of Open Access Journals (Sweden)

    Rakhmonov Ahmadzhon Dzhamoliddinovich

    2014-01-01

Full Text Available In this article the operation of a continuous double-span beam with hybrid (steel and composite) reinforcement under the action of concentrated forces is considered. The nature of the stress-strain state of the structures is investigated with the help of computer modeling using a three-dimensional model. Five models of beams with different characteristics were studied. According to the results of numerical studies, data on the distribution of stresses and displacements in continuous beams are provided. The dependence of the stress-strain state on an increasing percentage of the top (composite) reinforcement and on changes in the concrete class is determined and presented in the article. Currently, interest in the use of composite reinforcement as a working reinforcement of concrete structures in Russia has increased significantly, which is reflected in the growing number of scientific and practical publications devoted to the study of the properties and use of composite materials in construction, as well as in emerging draft documents for the design of such structures. One of the proposals for basalt reinforcement application is to use it in bending elements with combined reinforcement. For theoretical justification of the proposed reinforcement scheme and improvement of the calculation method, the authors conducted a study of the stress-strain state of continuous beams with the use of modern computing systems. Of the programs available for stress-strain analysis of concrete structures, the software package LIRA is the most widely used.

  19. Hybrid NN/SVM Computational System for Optimizing Designs

    Science.gov (United States)

    Rai, Man Mohan

    2009-01-01

A computational method and system based on a hybrid of an artificial neural network (NN) and a support vector machine (SVM) (see figure) has been conceived as a means of maximizing or minimizing an objective function, optionally subject to one or more constraints. Such maximization or minimization could be performed, for example, to solve a data-regression or data-classification problem or to optimize a design associated with a response function. A response function can be considered as a subset of a response surface, which is a surface in a vector space of design and performance parameters. A typical example of a design problem that the method and system can be used to solve is that of an airfoil, for which a response function could be the spatial distribution of pressure over the airfoil. In this example, the response surface would describe the pressure distribution as a function of the operating conditions and the geometric parameters of the airfoil. The use of NNs to analyze physical objects in order to optimize their responses under specified physical conditions is well known. NN analysis is suitable for multidimensional interpolation of data that lack structure and enables the representation and optimization of a succession of numerical solutions of increasing complexity or increasing fidelity to the real world. NN analysis is especially useful in helping to satisfy multiple design objectives. Feedforward NNs can be used to make estimates based on nonlinear mathematical models. One difficulty associated with use of a feedforward NN arises from the need for nonlinear optimization to determine connection weights among input, intermediate, and output variables. It can be very expensive to train an NN in cases in which it is necessary to model large amounts of information. Less widely known (in comparison with NNs) are support vector machines (SVMs), which were originally applied in statistical learning theory. In terms that are necessarily

  20. Requirements for Control Room Computer-Based Procedures for use in Hybrid Control Rooms

    Energy Technology Data Exchange (ETDEWEB)

    Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-05-01

Many plants in the U.S. are currently undergoing control room modernization. The main drivers for modernization are the aging and obsolescence of existing equipment, which typically results in a like-for-like replacement of analogue equipment with digital systems. However, the modernization efforts present an opportunity to employ advanced technology that would not only extend the life but also enhance the efficiency and cost competitiveness of nuclear power. Computer-based procedures (CBPs) are one example of near-term advanced technology that may provide enhanced efficiencies above and beyond like-for-like replacements of analog systems. Researchers in the LWRS program are investigating the benefits of advanced technologies such as CBPs, with the goal of assisting utilities in decision making during modernization projects. This report will describe the existing research on CBPs, discuss the unique issues related to using CBPs in hybrid control rooms (i.e., partially modernized analog control rooms), and define the requirements of CBPs for hybrid control rooms.

  1. CPUG: Computational Physics UG Degree Program at Oregon State University

    Science.gov (United States)

    Landau, Rubin H.

    2004-03-01

    A four-year undergraduate degree program leading to a Bachelor's degree in Computational Physics is described. The courses and texts under development are research- and Web-rich, and culminate in an advanced computational laboratory derived from graduate theses and faculty research. The five computational courses and course materials developed for this program act as a bridge connecting the physics with the computation and the mathematics, and as a link to the computational science community.

  2. Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases

    Science.gov (United States)

    Woodruff, Stephen

    2016-01-01

NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.

  3. Hybrid computing: CPU+GPU co-processing and its application to tomographic reconstruction.

    Science.gov (United States)

    Agulleiro, J I; Vázquez, F; Garzón, E M; Fernández, J J

    2012-04-01

    Modern computers are equipped with powerful computing engines like multicore processors and GPUs. The 3DEM community has rapidly adapted to this scenario and many software packages now make use of high performance computing techniques to exploit these devices. However, the implementations thus far are purely focused on either GPUs or CPUs. This work presents a hybrid approach that collaboratively combines the GPUs and CPUs available in a computer and applies it to the problem of tomographic reconstruction. Proper orchestration of workload in such a heterogeneous system is an issue. Here we use an on-demand strategy whereby the computing devices request a new piece of work to do when idle. Our hybrid approach thus takes advantage of the whole computing power available in modern computers and further reduces the processing time. This CPU+GPU co-processing can be readily extended to other image processing tasks in 3DEM. Copyright © 2012 Elsevier B.V. All rights reserved.
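The on-demand strategy described in this abstract, where each computing device pulls a new piece of work when idle, can be sketched with ordinary threads (a simplified stand-in: `process` is a hypothetical placeholder for a real GPU kernel or CPU reconstruction routine, and `devices` is just a list of names):

```python
import queue
import threading

def reconstruct(n_slabs, devices, process):
    """On-demand dispatch: one thread per device pulls the next
    slab index from a shared queue whenever it is idle, so faster
    devices naturally take on more of the work."""
    tasks = queue.Queue()
    for s in range(n_slabs):
        tasks.put(s)
    done = []
    lock = threading.Lock()

    def worker(dev):
        while True:
            try:
                s = tasks.get_nowait()   # request a new piece of work
            except queue.Empty:
                return                   # no work left: device retires
            result = process(dev, s)
            with lock:
                done.append((dev, s, result))

    threads = [threading.Thread(target=worker, args=(d,)) for d in devices]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Because no slab is pre-assigned, a heterogeneous mix of slow and fast devices finishes without load-balancing tuning.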

  4. Computer programs for the concordance correlation coefficient.

    Science.gov (United States)

    Crawford, Sara B; Kosinski, Andrzej S; Lin, Hung-Mo; Williamson, John M; Barnhart, Huiman X

    2007-10-01

    The CCC macro is presented for computation of the concordance correlation coefficient (CCC), a common measure of reproducibility. The macro has been produced in both SAS and R, and a detailed presentation of the macro input and output for the SAS program is included. The macro provides estimation of three versions of the CCC, as presented by Lin [L.I.-K. Lin, A concordance correlation coefficient to evaluate reproducibility, Biometrics 45 (1989) 255-268], Barnhart et al. [H.X. Barnhart, J.L. Haber, J.L. Song, Overall concordance correlation coefficient for evaluating agreement among multiple observers, Biometrics 58 (2002) 1020-1027], and Williamson et al. [J.M. Williamson, S.B. Crawford, H.M. Lin, Resampling dependent concordance correlation coefficients, J. Biopharm. Stat. 17 (2007) 685-696]. It also provides bootstrap confidence intervals for the CCC, as well as for the difference in CCCs for both independent and dependent samples. The macro is designed for balanced data only. Detailed explanation of the involved computations and macro variable definitions are provided in the text. Two biomedical examples are included to illustrate that the macro can be easily implemented.
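Lin's (1989) version of the CCC, the first of the three the macro estimates, can be sketched in a few lines (a plain-Python sketch of the published formula, not the SAS/R macro itself):

```python
from statistics import fmean

def ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2),
    using 1/n moment estimators as in Lin (1989)."""
    n = len(x)
    mx, my = fmean(x), fmean(y)
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

Perfect agreement gives 1; a constant shift between the two raters lowers the coefficient even when the correlation is perfect.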

  5. Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists

    Science.gov (United States)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-01-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…

  6. 01010000 01001100 01000001 01011001: Play Elements in Computer Programming

    Science.gov (United States)

    Breslin, Samantha

    2013-01-01

    This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…

  7. Graphics and composite material computer program enhancements for SPAR

    Science.gov (United States)

    Farley, G. L.; Baker, D. J.

    1980-01-01

    User documentation is provided for additional computer programs developed for use in conjunction with SPAR. These programs plot digital data, simplify input for composite material section properties, and compute lamina stresses and strains. Sample problems are presented including execution procedures, program input, and graphical output.

  8. Reducing the Digital Divide among Children Who Received Desktop or Hybrid Computers for the Home

    Directory of Open Access Journals (Sweden)

    Gila Cohen Zilka

    2016-06-01

Full Text Available Researchers and policy makers have been exploring ways to reduce the digital divide. Parameters commonly used to examine the digital divide worldwide, as well as in this study, are: (a) the digital divide in the accessibility and mobility of the ICT infrastructure and of the content infrastructure (e.g., sites used in school); and (b) the digital divide in literacy skills. In the present study we examined the degree of effectiveness of receiving a desktop or hybrid computer for the home in reducing the digital divide among children of low socio-economic status aged 8-12 from various localities across Israel. The sample consisted of 1,248 respondents assessed in two measurements. As part of the mixed-method study, 128 children were also interviewed. Findings indicate that after the children received desktop or hybrid computers, changes occurred in their frequency of access, mobility, and computer literacy. Differences were found between the groups: hybrid computers reduce disparities and promote work with the computer and surfing the Internet more than do desktop computers. Narrowing the digital divide for this age group has many implications for the acquisition of skills and study habits, and consequently, for the realization of individual potential. The children spoke about self-improvement as a result of exposure to the digital environment, about a sense of empowerment and of improvement in their advantage in the social fabric. Many children expressed a desire to continue their education and expand their knowledge of computer applications, the use of software, of games, and more. Therefore, if there is no computer in the home and it is necessary to decide between a desktop and a hybrid computer, a hybrid computer is preferable.

  9. Hybrid Computational Simulation and Study of Terahertz Pulsed Photoconductive Antennas

    Science.gov (United States)

    Emadi, R.; Barani, N.; Safian, R.; Nezhad, A. Zeidaabadi

    2016-08-01

    A photoconductive antenna (PCA) has been numerically investigated in the terahertz (THz) frequency band based on a hybrid simulation method. This hybrid method utilizes an optoelectronic solver, Silvaco TCAD, and a full-wave electromagnetic solver, CST. The optoelectronic solver is used to find the accurate THz photocurrent by considering realistic material parameters. Performance of photoconductive antennas and temporal behavior of the excited photocurrent for various active region geometries such as bare-gap electrode, interdigitated electrodes, and tip-to-tip rectangular electrodes are investigated. Moreover, investigations have been done on the center of the laser illumination on the substrate, substrate carrier lifetime, and diffusion photocurrent associated with the carriers temperature, to achieve efficient and accurate photocurrent. Finally, using the full-wave electromagnetic solver and the calculated photocurrent obtained from the optoelectronic solver, electromagnetic radiation of the antenna and its associated detected THz signal are calculated and compared with a measurement reference for verification.

  10. Performance Comparison of Hybrid Signed Digit Arithmetic in Efficient Computing

    Directory of Open Access Journals (Sweden)

    VISHAL AWASTHI

    2011-10-01

Full Text Available In redundant representations, addition can be carried out in a constant time independent of the word length of the operands. The adder forms a fundamental building block in the majority of VLSI designs. A hybrid adder can add an unsigned number to a signed-digit number, and hence its efficient performance greatly determines the quality of the final output of the concerned circuit. In this paper we designed and compared the speed of adders by reducing the carry propagation time through the combined effect of improved adder architectures and signed-digit representation of number systems. The key idea is to strike a compromise between the execution time of the fast adding process and the available area, which is often very limited. In this paper we also verify the various algorithms for signed-digit and hybrid signed-digit adders.
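The carry-free property of signed-digit arithmetic mentioned above can be illustrated with a minimal radix-2 sketch (one standard transfer rule, here only to show why propagation stops after one position; the hardware adder architectures compared in the paper are not reproduced):

```python
def to_int(digits):
    """Value of a little-endian radix-2 signed-digit number."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

def sd_add(x, y):
    """Carry-free addition of two radix-2 signed-digit numbers
    (digits in {-1, 0, 1}, little-endian). Each position inspects
    only its lower neighbour, so all positions can be computed in
    parallel in constant time independent of word length."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    p = [x[i] + y[i] for i in range(n)]   # position sums, in -2..2
    w = [0] * n                           # interim sum digits
    t = [0] * (n + 1)                     # t[i+1]: transfer out of i
    for i in range(n):
        prev = p[i - 1] if i > 0 else 0   # lower neighbour's sum
        if p[i] == 2:
            t[i + 1], w[i] = 1, 0
        elif p[i] == -2:
            t[i + 1], w[i] = -1, 0
        elif p[i] == 1:                   # pick w to absorb whichever
            t[i + 1], w[i] = (1, -1) if prev >= 1 else (0, 1)
        elif p[i] == -1:                  # transfer the neighbour can emit
            t[i + 1], w[i] = (-1, 1) if prev <= -1 else (0, -1)
    return [w[i] + t[i] for i in range(n)] + [t[n]]
```

The neighbour test guarantees every final digit stays in {-1, 0, 1}, so no transfer ever propagates more than one position.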

  11. Hybrid Computational Simulation and Study of Terahertz Pulsed Photoconductive Antennas

    Science.gov (United States)

    Emadi, R.; Barani, N.; Safian, R.; Nezhad, A. Zeidaabadi

    2016-11-01

    A photoconductive antenna (PCA) has been numerically investigated in the terahertz (THz) frequency band based on a hybrid simulation method. This hybrid method utilizes an optoelectronic solver, Silvaco TCAD, and a full-wave electromagnetic solver, CST. The optoelectronic solver is used to find the accurate THz photocurrent by considering realistic material parameters. Performance of photoconductive antennas and temporal behavior of the excited photocurrent for various active region geometries such as bare-gap electrode, interdigitated electrodes, and tip-to-tip rectangular electrodes are investigated. Moreover, investigations have been done on the center of the laser illumination on the substrate, substrate carrier lifetime, and diffusion photocurrent associated with the carriers temperature, to achieve efficient and accurate photocurrent. Finally, using the full-wave electromagnetic solver and the calculated photocurrent obtained from the optoelectronic solver, electromagnetic radiation of the antenna and its associated detected THz signal are calculated and compared with a measurement reference for verification.

  12. Computational and experimental study of air hybrid engine concepts

    OpenAIRE

    Lee, Cho-Yu

    2011-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University The air hybrid engine absorbs the vehicle kinetic energy during braking, stores it in an air tank in the form of compressed air, and reuses it to start the engine and to propel a vehicle during cruising and acceleration. Capturing, storing and reusing this braking energy to achieve stop-start operation and to give additional power can therefore improve fuel economy, particularly in cities and ...

  13. A Course in Algebra and Trigonometry with Computer Programming.

    Science.gov (United States)

    Beavers, Mildred; And Others

    This textbook was developed by the Colorado Schools Computing Science (CSCS) Curriculum Development Project. It can be used with high school or college students in an integrated presentation of second-year algebra, trigonometry, and beginning computer programing. (MK)

  14. Programs for Use in Teaching Research Methods for Small Computers

    Science.gov (United States)

    Halley, Fred S.

    1975-01-01

    Description of Sociology Library (SOLIB), presented as a package of computer programs designed for smaller computers used in research methods courses and by students performing independent research. (Author/ND)

  15. Hybrid computer techniques for solving partial differential equations

    Science.gov (United States)

    Hammond, J. L., Jr.; Odowd, W. M.

    1971-01-01

    Techniques overcome equipment limitations that restrict other computer techniques in solving trivial cases. The use of curve fitting by quadratic interpolation greatly reduces required digital storage space.
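Quadratic interpolation through three stored samples, the storage-saving device mentioned above, can be sketched in Lagrange form (a generic sketch; the nodes and test function are illustrative, not from the report):

```python
def quad_interp(x0, x1, x2, y0, y1, y2, x):
    """Evaluate the unique quadratic through three stored samples
    (Lagrange form), standing in for a densely tabulated function
    so that only three points per interval need to be stored."""
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2
```

Any quadratic is reproduced exactly from just three samples, which is the source of the storage reduction.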

  16. FLOWCHART; a computer program for plotting flowcharts

    Science.gov (United States)

    Bender, Bernice

    1982-01-01

The computer program FLOWCHART can be used to very quickly and easily produce flowcharts of high quality for publication. FLOWCHART centers each element or block of text that it processes on one of a set of (imaginary) vertical lines. It can enclose a text block in a rectangle, circle or other selected figure. It can draw a line connecting the midpoint of any side of any figure with the midpoint of any side of any other figure and insert an arrow pointing in the direction of flow. It can write 'yes' or 'no' next to the line joining two figures. FLOWCHART creates flowcharts using some basic plotting subroutines which permit plots to be generated interactively and inspected on a Tektronix compatible graphics screen or plotted in a deferred mode on a Houston Instruments 42-inch pen plotter. The size of the plot, character set and character height in inches are inputs to the program. Plots generated using the pen plotter can be up to 42 inches high--the larger size plots being directly usable as visual aids in a talk. FLOWCHART centers each block of text on an imaginary column line. (The number of columns and column width are specified as input.) The midpoint of the longest line of text within the block is defined to be the center of the block and is placed on the column line. The spacing of individual words within the block is not altered when the block is positioned. The program writes the first block of text in a designated column and continues placing each subsequent block below the previous block in the same column. A block of text may be placed in a different column by specifying the number of the column and an earlier block of text with which the new block is to be aligned. If block zero is given as the earlier block, the new text is placed in the new column continuing down the page below the previous block. Optionally a column and number of inches from the top of the page may be given for positioning the next block of text. The program will normally draw one of five
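The centering rule described in the abstract (the midpoint of the block's longest line lands on the column line, internal spacing untouched) can be sketched in a few lines (`place_block` is a hypothetical name for illustration, not part of FLOWCHART):

```python
def place_block(lines, column_center):
    """Center a block of text on an imaginary column line: the
    midpoint of the longest line is placed on the column, and all
    lines share the same left margin so internal spacing is
    unaltered."""
    width = max(len(s) for s in lines)
    left = max(column_center - width // 2, 0)  # left edge of block
    return [" " * left + s for s in lines]
```

Shorter lines in the block keep whatever relative position they had; only the whole block shifts.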

  17. A generalized hybrid transfinite element computational approach for nonlinear/linear unified thermal/structural analysis

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1987-01-01

The present paper describes the development of a new hybrid computational approach for nonlinear/linear thermal structural analysis. The proposed transfinite element approach is a hybrid scheme as it combines the modeling versatility of contemporary finite elements in conjunction with transform methods and the classical Bubnov-Galerkin schemes. Applicability of the proposed formulations for nonlinear analysis is also developed. Several test cases are presented to include nonlinear/linear unified thermal-stress and thermal-stress wave propagations. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.

  18. Hybrid computing: CPU+GPU co-processing and its application to tomographic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Agulleiro, J.I.; Vazquez, F.; Garzon, E.M. [Supercomputing and Algorithms Group, Associated Unit CSIC-UAL, University of Almeria, 04120 Almeria (Spain); Fernandez, J.J., E-mail: JJ.Fernandez@csic.es [National Centre for Biotechnology, National Research Council (CNB-CSIC), Campus UAM, C/Darwin 3, Cantoblanco, 28049 Madrid (Spain)

    2012-04-15

Modern computers are equipped with powerful computing engines like multicore processors and GPUs. The 3DEM community has rapidly adapted to this scenario and many software packages now make use of high performance computing techniques to exploit these devices. However, the implementations thus far are purely focused on either GPUs or CPUs. This work presents a hybrid approach that collaboratively combines the GPUs and CPUs available in a computer and applies it to the problem of tomographic reconstruction. Proper orchestration of workload in such a heterogeneous system is an issue. Here we use an on-demand strategy whereby the computing devices request a new piece of work to do when idle. Our hybrid approach thus takes advantage of the whole computing power available in modern computers and further reduces the processing time. This CPU+GPU co-processing can be readily extended to other image processing tasks in 3DEM. -- Highlights: ► Hybrid computing allows full exploitation of the power (CPU+GPU) in a computer. ► Proper orchestration of workload is managed by an on-demand strategy. ► Total number of threads running in the system should be limited to the number of CPUs.

  19. COMPUTER PROGRAMMING TECHNIQUES FOR INTELLIGENCE ANALYST APPLICATION. VOLUME II.

    Science.gov (United States)

(*COMPUTER PROGRAMMING, STATISTICAL PROCESSES), (*MAN MACHINE SYSTEMS, DISPLAY SYSTEMS), GRAPHICS, INFORMATION RETRIEVAL, DATA PROCESSING, SYSTEMS ENGINEERING, MILITARY INTELLIGENCE, CLASSIFICATION, AIR FORCE PERSONNEL.

  20. Program listing for the reliability block diagram computation program of JPL Technical Report 32-1543

    Science.gov (United States)

    Chelson, P. O.; Eckstein, R. E.

    1971-01-01

    The computer program listing for the reliability block diagram computation program described in Reliability Computation From Reliability Block Diagrams is given. The program is written in FORTRAN 4 and is currently running on a Univac 1108. Each subroutine contains a description of its function.

  1. Positioning Continuing Education Computer Programs for the Corporate Market.

    Science.gov (United States)

    Tilney, Ceil

    1993-01-01

    Summarizes the findings of the market assessment phase of Bellevue Community College's evaluation of its continuing education computer training program. Indicates that marketing efforts must stress program quality and software training to help overcome strong antiacademic client sentiment. (MGB)

  2. Computer program for calculation of ideal gas thermodynamic data

    Science.gov (United States)

    Gordon, S.; Mc Bride, B. J.

    1968-01-01

Computer program calculates ideal gas thermodynamic properties for any species for which molecular constant data are available. Partition functions and derivatives from formulas based on statistical mechanics are provided by the program, which is written in FORTRAN 4 and MAP.
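One of the statistical-mechanics formulas such a program evaluates, the harmonic-oscillator vibrational contribution to the heat capacity, can be sketched directly (standard textbook result, not taken from the program listing; `theta_vib` is the characteristic vibrational temperature h·nu/k):

```python
import math

def vib_c_over_R(theta_vib, T):
    """Vibrational contribution to the ideal-gas heat capacity,
    divided by R, from the harmonic-oscillator partition function:
    C_vib/R = u^2 * exp(u) / (exp(u) - 1)^2, with u = theta/T."""
    u = theta_vib / T
    return u * u * math.exp(u) / (math.exp(u) - 1.0) ** 2
```

The limits behave as expected: the contribution approaches R (the classical equipartition value) at high temperature and vanishes when the mode is frozen out.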

  3. A Hybrid Circular Queue Method for Iterative Stencil Computations on GPUs

    Institute of Scientific and Technical Information of China (English)

    Yang Yang; Hui-Min Cui; Xiao-Bing Feng; Jing-Ling Xue

    2012-01-01

In this paper, we present a hybrid circular queue method that can significantly boost the performance of stencil computations on GPU by carefully balancing the usage of registers and shared memory. Unlike earlier methods that rely on circular queues predominantly implemented using indirectly addressable shared memory, our hybrid method exploits a new reuse pattern spanning across the multiple time steps in stencil computations so that circular queues can be implemented by both shared memory and registers effectively in a balanced manner. We describe a framework that automatically finds the best placement of data in registers and shared memory in order to maximize the performance of stencil computations. Validation using four different types of stencils on three different GPU platforms shows that our hybrid method achieves speedups of up to 2.93X over methods that use circular queues implemented with shared memory only.
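The circular-queue rotation the abstract builds on can be illustrated on a 1-D three-point stencil (a toy Python sketch of the rotation pattern only; the paper's contribution concerns placing the queue slots in registers versus shared memory on real GPUs):

```python
def stencil_steps(a, steps):
    """Run a 3-point averaging stencil for several time steps using
    a two-slot circular queue of rows, so intermediate time steps
    never leave the queue (the analogue of keeping them in
    registers/shared memory instead of global memory)."""
    n = len(a)
    q = [list(a), [0.0] * n]     # circular queue: current, next row
    cur = 0
    for _ in range(steps):
        src, dst = q[cur], q[1 - cur]
        for i in range(n):
            left = src[i - 1] if i > 0 else src[i]
            right = src[i + 1] if i < n - 1 else src[i]
            dst[i] = (left + src[i] + right) / 3.0
        cur = 1 - cur            # rotate the queue instead of copying
    return q[cur]
```

Only two rows ever exist regardless of the number of time steps, which is the memory saving the circular queue provides.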

  4. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  6. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  7. A Hybrid Program for Fitting Rotationally Resolved Spectra of Floppy Molecules with One Large-Amplitude Rotatory Motion and One Large-Amplitude Oscillatory Motion

    Science.gov (United States)

    Kleiner, Isabelle; Hougen, Jon T.

    2015-01-01

    A new hybrid-model fitting program for methylamine-like molecules has been developed, based on an effective Hamiltonian in which the ammonia-like inversion motion is treated using a tunneling formalism, while the internal-rotation motion is treated using an explicit kinetic energy operator and potential energy function. The Hamiltonian in the computer program is set up as a 2×2 partitioned matrix, where each diagonal block contains a traditional torsion-rotation Hamiltonian (as in the earlier program BELGI), and the two off-diagonal blocks contain tunneling terms. This hybrid formulation permits the use of the permutation-inversion group G6 (isomorphic to C3v) for terms in the two diagonal blocks, but requires G12 for terms in the off-diagonal blocks. The first application of the new program is to 2-methylmalonaldehyde. Microwave data for this molecule were previously fit using an all-tunneling Hamiltonian formalism to treat both large-amplitude motions. For 2-methylmalonaldehyde, the hybrid program achieves the same quality of fit as was obtained with the all-tunneling program, but fits with the hybrid program eliminate a large discrepancy between internal rotation barriers in the OH and OD isotopologs of 2-methylmalonaldehyde that arose in fits with the all-tunneling program. This large isotopic shift in internal rotation barrier is thus almost certainly an artifact of the all-tunneling model. Other molecules for application of the hybrid program are mentioned. PMID:26439709

  8. SOLIB: A Social Science Program Library for Small Computers.

    Science.gov (United States)

    Halley, Fred S.

    A package of social science programs--Sociology Library (SOLIB)--for small computers provides users with a partial solution to the problems stemming from the heterogeneity of social science applications programs. SOLIB offers a uniform approach to data handling and program documentation; all its programs are written in standard FORTRAN for the IBM…

  9. Three Computer Programs for Use in Introductory Level Physics Laboratories.

    Science.gov (United States)

    Kagan, David T.

    1984-01-01

    Describes three computer programs which operate on Apple II+ microcomputers: (1) a menu-driven graph drawing program; (2) a simulation of the Millikan oil drop experiment; and (3) a program used to study the half-life of silver. (Instructions for obtaining the programs from the author are included.) (JN)

  10. Enhancing Digital Fluency through a Training Program for Creative Problem Solving Using Computer Programming

    Science.gov (United States)

    Kim, SugHee; Chung, KwangSik; Yu, HeonChang

    2013-01-01

    The purpose of this paper is to propose a training program for creative problem solving based on computer programming. The proposed program will encourage students to solve real-life problems through a creative thinking spiral related to cognitive skills with computer programming. With the goal of enhancing digital fluency through this proposed…

  11. High-fidelity quantum memory using nitrogen-vacancy center ensemble for hybrid quantum computation

    CERN Document Server

    Yang, W L; Hu, Y; Feng, M; Du, J F

    2011-01-01

    We study a hybrid quantum computing system using a nitrogen-vacancy center ensemble (NVE) as quantum memory, a current-biased Josephson junction (CBJJ) superconducting qubit fabricated in a transmission line resonator (TLR) as the quantum computing processor, and the microwave photons in the TLR as the quantum data bus. The storage process is treated rigorously by considering all relevant decoherence mechanisms. Such a hybrid quantum device can also be used to create multi-qubit W states of NVEs through a common CBJJ. The experimental feasibility and challenges are assessed using currently available technology.

  12. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...

  13. Hybrid PSO-MOBA for Profit Maximization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dr. Salu George

    2015-02-01

    Cloud service providers, infrastructure vendors, and clients (cloud users) are the main actors in any cloud enterprise, such as Amazon Web Services or Google's cloud. These enterprises take care of infrastructure deployment and cloud service management (IaaS/PaaS/SaaS). Cloud users need to specify the correct amount of services and the characteristics of their workload in order to avoid over-provisioning of resources, which is an important pricing factor. The cloud service provider needs to manage, and also optimize, the resources to maximize profit. To model the profit we consider the M/M/m queuing model, which manages the queue of jobs and provides the average execution time. Resource scheduling is one of the main concerns in profit maximization, for which we adopt hybrid PSO-MOBA, as it resolves the global convergence problem, converges faster, has fewer parameters to tune, searches very large problem spaces more easily, and locates the right resource. Hybrid PSO-MOBA combines the features of PSO and MOBA to achieve the benefits of both and greater compatibility.
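The average execution time mentioned in this record comes from standard M/M/m queueing theory (the Erlang C result). The sketch below is not from the paper; it computes the mean response time under the textbook assumptions of arrival rate λ, service rate μ, and m identical servers:

```python
from math import factorial

def erlang_c(m, a):
    """Probability that an arriving job must wait in an M/M/m queue,
    where a = lam / mu is the offered load (requires a < m)."""
    rho = a / m
    num = a**m / factorial(m) / (1 - rho)
    den = sum(a**k / factorial(k) for k in range(m)) + num
    return num / den

def mean_response_time(lam, mu, m):
    """Average time a job spends in the system (waiting + service)."""
    a = lam / mu
    if a >= m:
        raise ValueError("queue is unstable: lam/mu must be < m")
    return 1 / mu + erlang_c(m, a) / (m * mu - lam)
```

For m = 1 this reduces to the familiar M/M/1 result T = 1/(μ − λ), which is a quick sanity check on the formula.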

  14. City of Las Vegas Plug-in Hybrid Electric Vehicle Demonstration Program

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-12-31

    The City of Las Vegas was awarded Department of Energy (DOE) project funding in 2009 for the City of Las Vegas Plug-in Hybrid Electric Vehicle Demonstration Program. This project allowed the City of Las Vegas to purchase electric and plug-in hybrid electric vehicles and associated electric vehicle charging infrastructure. The City anticipated that the electric vehicles would have lower overall operating costs and emissions compared to traditional and hybrid vehicles.

  15. Variation Theory Applied to Students' Conceptions of Computer Programming

    Science.gov (United States)

    Thune, Michael; Eckerdal, Anna

    2009-01-01

    The present work has its focus on university-level engineering education students that do not intend to major in computer science but still have to take a mandatory programming course. Phenomenography and variation theory are applied to empirical data from a study of students' conceptions of computer programming. A phenomenographic outcome space…

  16. Learning Motivation in E-Learning Facilitated Computer Programming Courses

    Science.gov (United States)

    Law, Kris M. Y.; Lee, Victor C. S.; Yu, Y. T.

    2010-01-01

    Computer programming skills constitute one of the core competencies that graduates from many disciplines, such as engineering and computer science, are expected to possess. Developing good programming skills typically requires students to do a lot of practice, which cannot sustain unless they are adequately motivated. This paper reports a…

  17. Case Studies of Liberal Arts Computer Science Programs

    Science.gov (United States)

    Baldwin, D.; Brady, A.; Danyluk, A.; Adams, J.; Lawrence, A.

    2010-01-01

    Many undergraduate liberal arts institutions offer computer science majors. This article illustrates how quality computer science programs can be realized in a wide variety of liberal arts settings by describing and contrasting the actual programs at five liberal arts colleges: Williams College, Kalamazoo College, the State University of New York…

  19. Language Facilities for Programming User-Computer Dialogues.

    Science.gov (United States)

    Lafuente, J. M.; Gries, D.

    1978-01-01

    Proposes extensions to PASCAL that provide for programming man-computer dialogues. An interactive dialogue application program is viewed as a sequence of frames and separate computational steps. The PASCAL extensions allow the description of the items of information in each frame and the inclusion of behavior rules specifying the interactive dialogue.…

  20. Software survey: VOSviewer, a computer program for bibliometric mapping

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo)

    2010-01-01

    We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. Th…

  1. Soft computing applications: the advent of hybrid systems

    Science.gov (United States)

    Bonissone, Piero P.

    1998-10-01

    Soft computing is a new field of computer science that deals with the integration of problem-solving technologies such as fuzzy logic (FL), probabilistic reasoning, neural networks (NNs), and genetic algorithms (GAs). Each of these technologies provides complementary reasoning and searching methods for solving complex, real-world problems. We analyze some of the most synergistic combinations of soft computing technologies, with an emphasis on the development of smart algorithm-controllers, such as the use of FL to control GA and NN parameters. We also discuss the application of GAs to evolve NNs or tune FL controllers, and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms. We conclude with a detailed description of a GA-tuned fuzzy controller that implements train handling control.

  2. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
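The claimed relationship is a plain three-term sum; purely as an illustration (the function name and argument names below are hypothetical, not from the patent), it might be expressed as:

```python
def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
    """Future facility conditions as the sum of the three time-period-specific
    quantities described in the abstract."""
    return maintenance_cost + modernization_factor + backlog_factor
```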

  3. Attitude, Gender and Achievement in Computer Programming

    Science.gov (United States)

    Baser, Mustafa

    2013-01-01

    The aim of this research was to explore the relationship among students' attitudes toward programming, gender and academic achievement in programming. The scale used for measuring students' attitudes toward programming was developed by the researcher and consisted of 35 five-point Likert type items in four subscales. The scale was administered to…

  4. Designing Educational Games for Computer Programming: A Holistic Framework

    Science.gov (United States)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2014-01-01

    Computer science is continuously evolving during the past decades. This has also brought forth new knowledge that should be incorporated and new learning strategies must be adopted for the successful teaching of all sub-domains. For example, computer programming is a vital knowledge area within computer science with constantly changing curriculum…

  5. 76 FR 11435 - Privacy Act of 1974; Computer Matching Program

    Science.gov (United States)

    2011-03-02

    ... the Social Security Administration (SSA) (source agency). This renewal of the computer matching... Privacy Act of 1974; Computer Matching Program AGENCY: Department of Education. ACTION: Notice--Computer Matching between the U.S. Department of Education and the Social Security Administration. SUMMARY:...

  6. Seventy Years of Computing in the Nuclear Weapons Program

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Billy Joe [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-30

    Los Alamos has continuously been on the forefront of scientific computing since it helped found the field. This talk will explore the rich history of computing in the Los Alamos weapons program. The current status of computing will be discussed, as will the expectations for the near future.

  7. Earth Tide Algorithms for the OMNIS Computer Program System.

    Science.gov (United States)

    1986-04-01

    This report presents five computer algorithms that jointly specify the gravitational action by which the tidal redistributions of the Earth’s masses...routine is a simplified version of the fourth and is provided for use during computer program verification. All computer algorithms express the tidal

  8. 78 FR 45513 - Privacy Act of 1974; Computer Matching Program

    Science.gov (United States)

    2013-07-29

    .... DESCRIPTION OF COMPUTER MATCHING PROGRAM: Each participating SPAA will send ACF an electronic file of eligible public assistance client information. These files are non- Federal computer records maintained by the... on no more than 10,000,000 public assistance beneficiaries. 2. The DMDC computer database...

  9. Basic design of parallel computational program for probabilistic structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kaji, Yoshiyuki; Arai, Taketoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, as part of "Development of a damage evaluation method for structural brittle materials by microscopic fracture mechanics and probabilistic theory" (nuclear computational science cross-over research), we examine computational methods for a super-parallel computation system, coupled with material strength theory based on microscopic fracture mechanics for latent cracks and a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, the basic terms and formulas, and the parallel computation programming methods related to the principal elements in the basic design of the computational mechanics program. (author)

  10. A Best Practice Modular Design of a Hybrid Course Delivery Structure for an Executive Education Program

    Science.gov (United States)

    Klotz, Dorothy E.; Wright, Thomas A.

    2017-01-01

    This article highlights a best practice approach that showcases the highly successful deployment of a hybrid course delivery structure for an Operations core course in an Executive MBA Program. A key design element of the approach was the modular design of both the course itself and the learning materials. While other hybrid deployments may stress…

  12. Carbon nanotube reinforced hybrid composites: Computational modeling of environmental fatigue and usability for wind blades

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2015-01-01

    The potential of advanced carbon/glass hybrid reinforced composites with secondary carbon nanotube reinforcement for wind energy applications is investigated here with the use of computational experiments. The fatigue behavior of hybrid as well as glass and carbon fiber reinforced composites is simulated … automatically using Python-based code. 3D computational studies of environment and fatigue analyses of multiscale composites with secondary nano-scale reinforcement in different material phases and different CNT arrangements are carried out systematically in this paper. It was demonstrated that composites with the secondary CNT reinforcements (especially, aligned tubes) present superior fatigue performance compared to those without reinforcements, also under combined environmental and cyclic mechanical loading. This effect is stronger for carbon composites than for hybrid and glass composites.

  13. Hybrid and hierarchical nanoreinforced polymer composites: Computational modelling of structure–properties relationships

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Dai, Gaoming

    2014-01-01

    Hybrid and hierarchical polymer composites represent a promising group of materials for engineering applications. In this paper, computational studies of the strength and damage resistance of hybrid and hierarchical composites are reviewed. The reserves of composite improvement are explored by using computational micromechanical models. It is shown that while glass/carbon fiber hybrid composites clearly demonstrate higher stiffness and lower weight with increasing carbon content, they can have lower strength as compared with usual glass fiber polymer composites. Secondary nanoreinforcement can drastically increase the fatigue lifetime of composites. Especially, composites with the nanoplatelets localized in the fiber/matrix interface layer (fiber sizing) ensure much higher fatigue lifetime than those with the nanoplatelets in the matrix.

  14. Hybrid Computation Model for Intelligent System Design by Synergism of Modified EFC with Neural Network

    OpenAIRE

    2015-01-01

    In the recent past, it has been seen in many applications that a synergism of computational intelligence techniques outperforms any individual technique. This paper proposes a new hybrid computation model that is a novel synergism of modified evolutionary fuzzy clustering with associated neural networks. It consists of two modules: fuzzy distribution and a neural classifier. In the first module, mean patterns are distributed into the number of clusters based on the modified evolutionary fuzzy cluste...

  15. Digital signal processing system design: LabVIEW-based hybrid programming

    CERN Document Server

    Kehtarnavaz, Nasser; Peng, Qingzhong

    2008-01-01

    Reflecting LabVIEW's new MathScript feature, the new edition of this book combines textual and graphical programming to form a hybrid programming approach, enabling a more effective means of building and analyzing DSP systems. The hybrid programming approach allows previously developed textual programming solutions to be integrated into LabVIEW's highly interactive and visual environment, providing an easier and quicker method for building DSP systems. Features: the only DSP laboratory book that combines both textual and graphical programming; 12 lab experime

  16. Time course of programmed cell death, which included autophagic features, in hybrid tobacco cells expressing hybrid lethality.

    Science.gov (United States)

    Ueno, Naoya; Nihei, Saori; Miyakawa, Naoto; Hirasawa, Tadashi; Kanekatsu, Motoki; Marubashi, Wataru; van Doorn, Wouter G; Yamada, Tetsuya

    2016-12-01

    PCD with features of vacuolar cell death, including autophagy-related features, was detected in hybrid tobacco cells, and a detailed time course of the features of vacuolar cell death was established. A type of interspecific Nicotiana hybrid, Nicotiana suaveolens × N. tabacum, exhibits temperature-sensitive lethality. This lethality results from programmed cell death (PCD) in hybrid seedlings, but this PCD occurs only in seedlings and suspension-cultured cells grown at 28 °C, not those grown at 36 °C. Plant PCD can be classified as vacuolar cell death or necrotic cell death. Induction of autophagy, vacuolar membrane collapse and actin disorganization are each known features of vacuolar cell death, but observed cases of PCD showing all these features simultaneously are rare. In this study, these features of vacuolar cell death were evident in hybrid tobacco cells expressing hybrid lethality. Ion leakage, plasma membrane disruption, increased activity of vacuolar processing enzyme, vacuolar membrane collapse, and formation of punctate F-actin foci were each evident in these cells. Transmission electron microscopy revealed that macroautophagic structures formed and tonoplasts ruptured in these cells. The number of cells that contained monodansylcadaverine (MDC)-stained structures and the abundance of nine autophagy-related gene transcripts increased just before cell death at 28 °C; these features were not evident at 36 °C. We also assessed whether an autophagic inhibitor, wortmannin (WM), influenced the lethality in hybrid cells. After the hybrid cells began to die, WM suppressed the increases in ion leakage and cell death, and it decreased the number of cells containing MDC-stained structures. These results show that several features indicative of autophagy and vacuolar cell death are evident in hybrid tobacco cells subject to lethality, and we document a detailed time course of these vacuolar cell death features.

  17. CaKernel – A Parallel Application Programming Framework for Heterogenous Computing Architectures

    Directory of Open Access Journals (Sweden)

    Marek Blazewicz

    2011-01-01

    With the recent advent of new heterogeneous computing architectures there is still a lack of parallel problem-solving environments that help scientists use hybrid supercomputers easily and efficiently. Many scientific simulations that use structured grids to solve partial differential equations in fact rely on stencil computations. Stencil computations have become crucial to solving many challenging problems in various domains, e.g., engineering or physics. Although many parallel stencil computing approaches have been proposed, in most cases they solve only particular problems. As a result, scientists struggle when it comes to implementing a new stencil-based simulation, especially on high-performance hybrid supercomputers. In response to this need we extend our previous work on a parallel programming framework for CUDA, CaCUDA, which now supports OpenCL. We present CaKernel, a tool that simplifies the development of parallel scientific applications on hybrid systems. CaKernel is built on the highly scalable and portable Cactus framework. In the CaKernel framework, Cactus manages the inter-process communication via MPI, while CaKernel manages the code running on graphics processing units (GPUs) and the interactions between them. As a non-trivial test case we have developed a 3D CFD code to demonstrate the performance and scalability of the automatically generated code.
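As background for the stencil computations such frameworks target, here is a minimal serial sketch of one Jacobi sweep of the 5-point Laplace stencil in Python. CaKernel itself generates parallel GPU code; this toy version only illustrates what a "stencil" is:

```python
def jacobi_step(grid):
    """One Jacobi relaxation sweep of the 5-point Laplace stencil on a 2D grid
    (a list of lists of floats); boundary values are held fixed."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]  # copy so all reads see the old values
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            # each interior point becomes the average of its 4 neighbours
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new
```

A GPU version of the same kernel maps each interior (i, j) to one thread, which is exactly the pattern that stencil frameworks automate.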

  18. Employing Subgoals in Computer Programming Education

    Science.gov (United States)

    Margulieux, Lauren E.; Catrambone, Richard; Guzdial, Mark

    2016-01-01

    The rapid integration of technology into our professional and personal lives has left many education systems ill-equipped to deal with the influx of people seeking computing education. To improve computing education, we are applying techniques that have been developed for other procedural fields. The present study applied such a technique, subgoal…

  19. SED/Apple Computer, Inc., Partnership Program.

    Science.gov (United States)

    Stoll, Peter F.

    1991-01-01

    In 1990, the New York State Education Department (SED), Apple Computer, Inc., Boards of Cooperative Educational Services (BOCES), and school districts formed a partnership to explore the contribution technology can make to schools based on Apple Computer's Learning Society and SED's Long-Range Plan for Technology in Elementary and Secondary…

  20. Simulation Concept - How to Exploit Tools for Computing Hybrids

    Science.gov (United States)

    2010-06-01

    …and Technology (QuIST). The results of these programs will play a role in demonstrating how biotechnology and quantum sciences can provide new…

  1. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    Science.gov (United States)

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved, then, the learning process is finished and new classifications can be automatically performed. The method has been applied in two biomedical contexts by following the same cross-validation schemes as in the original studies. The first one refers to cancer diagnosis, leading to an accuracy of 77.35% versus 66.37%, originally obtained. The second one considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1% in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. By using a supervised framework the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified.
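The hybrid described above combines pairwise comparison, Bayesian regression and the k-nearest-neighbor technique. As a sketch of just the k-NN ingredient (not the authors' full method), a minimal majority-vote classifier looks like this:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal k-nearest-neighbour vote: train is a list of (features, label)
    pairs, query is a feature tuple; Euclidean distance, majority label wins."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

In the paper's relevance-feedback setting, expert corrections would be fed back into the training set between iterations; the voting step itself stays the same.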

  2. Evolutionary computer programming of protein folding and structure predictions.

    Science.gov (United States)

    Nölting, Bengt; Jülich, Dennis; Vonau, Winfried; Andert, Karl

    2004-07-07

    In order to understand the mechanism of protein folding and to assist the rational de novo design of fast-folding, non-aggregating and stable artificial enzymes, it is very helpful to be able to simulate protein folding reactions and to predict the structures of proteins and other biomacromolecules. Here, we use a method called "evolutionary computer programming", in which a program evolves depending on the evolutionary pressure exerted on it. In the presented application of this method to a computer program for folding simulations, the evolutionary pressure exerted was toward finding deep minima in the energy landscape of protein folding faster. Already after 20 evolution steps, the evolved program was able to find deep minima in the energy landscape more than 10 times faster than the original program prior to the evolution process.
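The paper evolves the simulation program itself. As a loose illustration of the underlying idea of evolutionary search on an energy landscape (a toy (1+1) evolution strategy on a generic energy function, not the authors' algorithm), consider:

```python
import random

def evolve_minimum(energy, x0, steps=2000, sigma=0.5, seed=42):
    """(1+1) evolution strategy: mutate the candidate with Gaussian noise and
    keep the mutant only if its energy is lower -- a toy stand-in for
    evolutionary search over a folding energy landscape."""
    rng = random.Random(seed)
    best, e_best = list(x0), energy(x0)
    for _ in range(steps):
        cand = [x + rng.gauss(0, sigma) for x in best]
        e = energy(cand)
        if e < e_best:  # selection: survival of the lower-energy candidate
            best, e_best = cand, e
    return best, e_best
```

Running it on a simple quadratic "energy" shows the candidate drifting into the minimum; the paper applies the same selection idea one level up, to the folding program's own code.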

  3. Computer Aided Design System for Developing Musical Fountain Programs

    Institute of Scientific and Technical Information of China (English)

    刘丹; 张乃尧; 朱汉城

    2003-01-01

    A computer-aided design system for developing musical fountain programs was developed with multiple functions, such as intelligent design, 3-D animation, manual modification and synchronized motion, to make the development process more efficient. The system first analyzes the music's form and sentiment, using many basic features of the music, to select a basic fountain program. This program is then simulated with 3-D animation and modified manually to achieve the desired results. Finally, the program is transformed into a computer control program that controls the musical fountain in time with the music. A prototype system for the musical fountain was also developed. It was tested with many styles of music, and users were quite satisfied with its performance. By integrating various functions, the proposed computer-aided design system greatly simplifies the design of musical fountain programs.

  4. The Swedish electric and hybrid vehicle R, D and D program. Seminar October 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    This publication presents a selection of the ongoing projects, in the form of abstracts, within the KFB RD&D program Electric and Hybrid Vehicles. These projects were presented at a project manager seminar on 20-21 October 1998.

  5. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
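A DNC read head addresses its external memory by content. A heavily simplified sketch of that mechanism (plain dot-product scores with a softmax, omitting the cosine normalization, key strength, and learned gating used in the paper) is:

```python
from math import exp

def content_read(memory, key):
    """Content-based addressing as in a DNC read head: weight each memory row
    by the softmax of its dot product with the key, then return the weighted
    sum of rows as the read vector."""
    scores = [sum(m * k for m, k in zip(row, key)) for row in memory]
    mx = max(scores)                      # subtract max for numerical stability
    w = [exp(s - mx) for s in scores]
    z = sum(w)
    w = [x / z for x in w]                # softmax read weights over rows
    return [sum(w[i] * memory[i][j] for i in range(len(memory)))
            for j in range(len(memory[0]))]
```

A key strongly aligned with one row retrieves (approximately) that row, which is the "random-access" behaviour the abstract compares to conventional computer memory.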

  6. Computer simulation program is adaptable to industrial processes

    Science.gov (United States)

    Schultz, F. E.

    1966-01-01

    The Reaction kinetics ablation program /REKAP/, developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.

  7. The Effectiveness of a Computer-Assisted Math Learning Program

    Science.gov (United States)

    De Witte, K.; Haelermans, C.; Rogge, N.

    2015-01-01

    Computer-assisted instruction (CAI) programs are considered a way to improve the learning outcomes of students. However, little is known about the schools that implement such programs or about the effectiveness of similar information and communication technology programs. We provide a literature review that pays special attention to the existing…

  8. Residue Management: A Computer Program About Conservation Tillage Decisions.

    Science.gov (United States)

    Thien, Steve J.

    1986-01-01

    Describes a computer program, Residue Management, which is designed to supplement discussions on the Universal Soil Loss Equation and the impact of tillage on soil properties for introductory soil courses. The program advances the user through three stages of residue management. Information on obtaining the program is also included. (ML)

  9. DNA sequence handling programs in BASIC for home computers.

    OpenAIRE

    Biro, P A

    1984-01-01

    This paper describes a DNA sequence handling program written entirely in BASIC and designed to be run on an Atari home computer. Many of the features common to more sophisticated programs have been included. The advantages of this program are its convenience, its transportability and its potential for user modification. The disadvantages are its lack of sophistication and speed.

  10. Stretchable living materials and devices with hydrogel-elastomer hybrids hosting programmed cells.

    Science.gov (United States)

    Liu, Xinyue; Tang, Tzu-Chieh; Tham, Eléonore; Yuk, Hyunwoo; Lin, Shaoting; Lu, Timothy K; Zhao, Xuanhe

    2017-02-28

    Living systems, such as bacteria, yeasts, and mammalian cells, can be genetically programmed with synthetic circuits that execute sensing, computing, memory, and response functions. Integrating these functional living components into materials and devices will provide powerful tools for scientific research and enable new technological applications. However, it has been a grand challenge to maintain the viability, functionality, and safety of living components in freestanding materials and devices, which frequently undergo deformations during applications. Here, we report the design of a set of living materials and devices based on stretchable, robust, and biocompatible hydrogel-elastomer hybrids that host various types of genetically engineered bacterial cells. The hydrogel provides sustainable supplies of water and nutrients, and the elastomer is air-permeable, maintaining long-term viability and functionality of the encapsulated cells. Communication between different bacterial strains and with the environment is achieved via diffusion of molecules in the hydrogel. The high stretchability and robustness of the hydrogel-elastomer hybrids prevent leakage of cells from the living materials and devices, even under large deformations. We show functions and applications of stretchable living sensors that are responsive to multiple chemicals in a variety of form factors, including skin patches and gloves-based sensors. We further develop a quantitative model that couples transportation of signaling molecules and cellular response to aid the design of future living materials and devices.

  11. Stretchable living materials and devices with hydrogel–elastomer hybrids hosting programmed cells

    Science.gov (United States)

    Liu, Xinyue; Tang, Tzu-Chieh; Tham, Eléonore; Yuk, Hyunwoo; Lin, Shaoting; Lu, Timothy K.; Zhao, Xuanhe

    2017-01-01

    Living systems, such as bacteria, yeasts, and mammalian cells, can be genetically programmed with synthetic circuits that execute sensing, computing, memory, and response functions. Integrating these functional living components into materials and devices will provide powerful tools for scientific research and enable new technological applications. However, it has been a grand challenge to maintain the viability, functionality, and safety of living components in freestanding materials and devices, which frequently undergo deformations during applications. Here, we report the design of a set of living materials and devices based on stretchable, robust, and biocompatible hydrogel–elastomer hybrids that host various types of genetically engineered bacterial cells. The hydrogel provides sustainable supplies of water and nutrients, and the elastomer is air-permeable, maintaining long-term viability and functionality of the encapsulated cells. Communication between different bacterial strains and with the environment is achieved via diffusion of molecules in the hydrogel. The high stretchability and robustness of the hydrogel–elastomer hybrids prevent leakage of cells from the living materials and devices, even under large deformations. We show functions and applications of stretchable living sensors that are responsive to multiple chemicals in a variety of form factors, including skin patches and gloves-based sensors. We further develop a quantitative model that couples transportation of signaling molecules and cellular response to aid the design of future living materials and devices. PMID:28202725

  12. Integrating Concurrency and Object-Oriented Programming: An Evaluation of Hybrid

    OpenAIRE

    Konstantas, Dimitri; Papathomas, Michael

    1990-01-01

    In this paper we address the effective use of the object-oriented programming approach for concurrent programming from a language design viewpoint. We present a set of requirements for the design of concurrent object-oriented languages. We then use a particular language, Hybrid, as a concrete example and examine to what extent its features meet these requirements. We identify the solutions offered by Hybrid and its shortcomings and we underline both the difficulties and promising directions f...

  13. The Denver universal microspectroradiometer (DUM). II. Computer configuration and modular programming for radiometry.

    Science.gov (United States)

    Galbraith, W; Geyer, S B; David, G B

    1975-12-01

    This paper describes and discusses for microscopists and spectroscopists the choice of computer equipment and the design of programs used in the Denver Universal Microspectroradiometer (DUM). This instrument is an accurate computerized photon-counting microspectrophotometer, microspectrofluorimeter and microrefractometer. The computer is used to control the operation of the system, to acquire radiometric data of various kinds, and to reduce, analyse and output the data in a readily usable form. Since the radiometer was designed to carry out many kinds of measurements in a variety of micro- and macroscopic specimens, and since different methods of microscopy or spectroscopy have to be combined in various ways for the study of any one specimen, no single master program could efficiently fulfill all foreseeable requirements. Therefore, the programming developed is interactive, modular, hierarchical and hybrid. Modular interactive programming makes it possible for almost any kind of main program, applicable to almost any kind of measurement, to be assembled quickly from a collection of hierarchical subroutines. Main programs are short and composed mainly of Fortran statements calling subroutines; subroutines, in turn, automatically call other subroutines over many levels. The subroutines are independently written and optimized for maximum operational efficiency in the computer system used, or for maximum ease of transfer to other systems. This approach to programming enables someone unfamiliar with computer languages to operate the radiometric system from the console of the CRT terminal. The writing of new main programs, by linking groups of existing subroutines, requires only a minimum acquaintance with Fortran; only the writing and revision of subroutines requires programming experience. Differences and similarities in the method of computer operation between the present system and other computerized radiometers are briefly discussed.

  14. Hybrid slime mould-based system for unconventional computing

    Science.gov (United States)

    Berzina, T.; Dimonte, A.; Cifarelli, A.; Erokhin, V.

    2015-04-01

    Physarum polycephalum is considered to be promising for the realization of unconventional computational systems. In this work, we present results of three slime mould-based systems. We have demonstrated the possibility of transporting biocompatible microparticles using attractors, repellents and a DEFLECTOR. The latter is an external tool that makes it possible to direct Physarum motion. We also present interactions between slime mould and conducting polymers, resulting in a variation of their colour and conductivity. Finally, incorporation of the Physarum into the organic memristive device resulted in a variation of its electrical characteristics due to the slime mould's internal activity.

  15. Generic Assessment Rubrics for Computer Programming Courses

    Science.gov (United States)

    Mustapha, Aida; Samsudin, Noor Azah; Arbaiy, Nurieze; Mohammed, Rozlini; Hamid, Isredza Rahmi

    2016-01-01

    In programming, one problem can usually be solved using different logics and constructs while still producing the same output. Sometimes students get marked down inappropriately if their solutions do not follow the answer scheme. In addition, lab exercises and programming assignments are not necessarily graded by the instructors but most of the time…

  16. A Hybrid Dynamic Programming for Solving Fixed Cost Transportation with Discounted Mechanism

    Directory of Open Access Journals (Sweden)

    Farhad Ghassemi Tari

    2016-01-01

    Full Text Available The problem of allocating different types of vehicles for transporting a set of products from a manufacturer to its depots/cross docks, in an existing transportation network, to minimize the total transportation costs, is considered. The distribution network involves a heterogeneous fleet of vehicles, with a variable transportation cost and a fixed cost, in which a discount mechanism is applied to the fixed part of the transportation costs. It is assumed that the number of available vehicles is limited for some types. A mathematical programming model in the form of a discrete nonlinear optimization model is proposed. A hybrid dynamic programming algorithm is developed for finding the optimal solution. To increase the computational efficiency of the solution algorithm, several concepts and routines, such as the imbedded-state routine, the surrogate constraint concept, and bounding schemes, are incorporated into the dynamic programming algorithm. A real-world case problem is selected and solved by the proposed solution algorithm, and the optimal solution is obtained.
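The core idea of such a dynamic programming approach (one stage per vehicle type, with a discount applied to the fixed cost once enough vehicles of a type are used) can be sketched as follows. The instance data, the 50% discount factor, and the two-vehicle threshold are hypothetical illustrations, not values from the paper, and the imbedded-state and surrogate-constraint refinements are omitted for brevity:

```python
import math
from functools import lru_cache

# Each vehicle type: (capacity, variable cost per unit, fixed cost per vehicle,
#                     vehicles available, discount threshold, discount factor).
# All numbers are made up for illustration.
TYPES = [
    (10, 1.0, 5.0, 3, 2, 0.5),  # type A: fixed cost halved when >= 2 vehicles used
    (4, 2.0, 2.0, 5, 99, 0.5),  # type B: threshold never reached, no discount
]

def min_transport_cost(demand):
    """Minimum cost to ship `demand` units, choosing per-type shipment amounts."""
    @lru_cache(maxsize=None)
    def solve(t, remaining):
        if remaining == 0:
            return 0.0
        if t == len(TYPES):
            return math.inf  # demand left but no vehicle types left
        cap, var, fixed, avail, thresh, disc = TYPES[t]
        best = math.inf
        # s = units assigned to this vehicle type; the DP state is the
        # remaining demand handed to the next stage (next vehicle type).
        for s in range(0, min(remaining, cap * avail) + 1):
            vehicles = math.ceil(s / cap) if s else 0
            per_vehicle = fixed * disc if vehicles >= thresh else fixed
            cost = vehicles * per_vehicle + var * s + solve(t + 1, remaining - s)
            best = min(best, cost)
        return best
    return solve(0, demand)

print(min_transport_cost(12))  # two type-A vehicles, discounted fixed cost
```

With these numbers, shipping all 12 units on two type-A vehicles triggers the discount and beats any mixed allocation, which is exactly the kind of trade-off the discount mechanism introduces.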

  17. All-optical quantum computing with a hybrid solid-state processing unit

    CERN Document Server

    Pei, Pei; Li, Chong

    2011-01-01

    We develop an architecture of a hybrid solid-state quantum processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have the prominent advantage of insensitivity to dissipation processes, owing to the virtual excitation of subsystems. Moreover, QND measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in a broader sense, in that different solid-state systems can merge and be integrated into one quantum processor afterwards.

  18. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced due to boundary heating.

  19. Application of Computational Intelligence in Order to Develop Hybrid Orbit Propagation Methods

    Directory of Open Access Journals (Sweden)

    Iván Pérez

    2013-01-01

    We present a new approach in the fields of astrodynamics and celestial mechanics, called hybrid perturbation theory. A hybrid perturbation theory combines an integrating technique (general perturbation theory, special perturbation theory, or a semianalytical method) with a forecasting technique (a statistical time series model or a computational intelligence method). This combination permits an increase in the accuracy of the integrating technique, through the modeling of higher-order terms and other external forces not considered in the integrating technique. In this paper, neural networks have been used as time series forecasters in order to help two computationally economical general perturbation theories describe the motion of an orbiter perturbed only by the Earth's oblateness.

  20. Computer Program Predicts Turbine-Stage Performance

    Science.gov (United States)

    Boyle, Robert J.; Haas, Jeffrey E.; Katsanis, Theodore

    1988-01-01

    MTSBL is an updated version of the flow-analysis programs MERIDL and TSONIC coupled to the boundary-layer program BLAYER. The method uses a quasi-three-dimensional, inviscid, stream-function flow analysis iteratively coupled to calculated losses, so that changes in losses result in changes in flow distribution. In this manner both the effect of configuration on flow distribution and the effect of flow distribution on losses are taken into account in predicting stage performance. Written in FORTRAN IV.

  1. Structure and Interpretation of Computer Programs

    OpenAIRE

    Narayan, Ganesh M.; Gopinath, K.; R. SRIDHAR

    2008-01-01

    Call graphs depict the static, caller-callee relation between "functions" in a program. With most source/target languages supporting functions as the primitive unit of composition, call graphs naturally form the fundamental control flow representation available to understand/develop software. They are also the substrate on which various interprocedural analyses are performed and are an integral part of program comprehension/testing. Given their universality and usefulness, it is imperative to as...

  2. Advanced wellbore thermal simulator GEOTEMP2. Appendix. Computer program listing

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R.F.

    1982-02-01

    This appendix gives the program listing of GEOTEMP2 with comments and discussion to make the program organization more understandable. This appendix is divided into an introduction and four main blocks of code: main program, program initiation, wellbore flow, and wellbore heat transfer. The purpose and use of each subprogram is discussed and the program listing is given. Flowcharts are included to clarify code organization where needed. GEOTEMP2 was written in FORTRAN IV. Efforts have been made to keep the programming as conventional as possible so that GEOTEMP2 will run without modification on most computers.

  3. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of VLSI circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  4. Secure Data Sharing in Cloud Computing using Hybrid cloud

    Directory of Open Access Journals (Sweden)

    Er. Inderdeep Singh

    2015-06-01

    Cloud computing is a fast-growing technology that enables users to store and access their data remotely. Using cloud services, users can enjoy the benefits of on-demand cloud applications and data with limited local infrastructure. While accessing data from the cloud, different users may have relationships among them depending on some attributes, and thus sharing of data along with user privacy and data security becomes important for effective results. Most research has been done to secure data authentication so that users do not lose their private data stored on the public cloud. But secure data sharing remains a significant hurdle for researchers to overcome. Research is ongoing to provide secure data sharing with enhanced user privacy and data access security. In this paper, various research efforts and challenges in this area are discussed in detail. This will help cloud users to understand the topic and researchers to develop methods to overcome these challenges.

  5. A Pascal computer program for digitizing lateral cephalometric radiographs.

    Science.gov (United States)

    Konchak, P A; Koehler, J A

    1985-03-01

    The authors describe a new program for cephalometric analysis which uses a commonly available microprocessor (computer) and digitizing pad to register fifteen commonly identified cephalometric landmarks to produce a meaningful analysis which is printed out for permanent or hard-copy record. Conventional and digitizing errors of cephalometric measurement are reviewed, with a discussion of the advantages of computer-assisted programs. The authors describe a program that uses vectors and vector algebra and the capabilities of the Pascal computer language to determine angular measurements and distances. It is suggested that computer-assisted cephalometric programs will likely be widely used in the near future, providing the orthodontist with a superior method of cephalometric analysis with respect to accuracy and speed of completion.
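The vector-algebra approach the authors describe, computing angular measurements and distances from digitized landmark coordinates, can be sketched briefly; the landmark names and coordinates below are hypothetical illustrations, and the original program was written in Pascal rather than Python:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle (in degrees) at `vertex` formed by rays to landmarks p1 and p2,
    computed from the dot product of the two vectors."""
    u = (p1[0] - vertex[0], p1[1] - vertex[1])
    v = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = u[0] * v[0] + u[1] * v[1]
    norm = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(dot / norm))

def distance(p1, p2):
    """Linear distance between two digitized landmarks."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Hypothetical digitizer coordinates (mm) for three cephalometric landmarks.
sella, nasion, a_point = (0.0, 100.0), (80.0, 100.0), (85.0, 40.0)
print(angle_at(nasion, sella, a_point))  # an SNA-style angle at nasion
print(distance(sella, nasion))           # linear S-N distance
```

Working in vectors this way means every angle or distance in an analysis reduces to the same two primitives applied to digitized points, which is what makes the digitizing-pad workflow fast.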

  6. MINEXP, A Computer-Simulated Mineral Exploration Program

    Science.gov (United States)

    Smith, Michael J.; And Others

    1978-01-01

    This computer simulation is designed to put students into a realistic decision making situation in mineral exploration. This program can be used with different exploration situations such as ore deposits, petroleum, ground water, etc. (MR)

  7. Intelligent physical blocks for introducing computer programming in developing countries

    CSIR Research Space (South Africa)

    Smith, Adrew C

    2007-05-01

    This paper reports on the evaluation of a novel affordable system that incorporates intelligent physical blocks to introduce illiterate children in developing countries to the logical thinking process required in computer programming. Both...

  8. Hydropower Computation Using Visual Basic for Application Programming

    Science.gov (United States)

    Yan, Wang; Hongliang, Hu

    Hydropower computation is essential for determining the operating conditions of a hydroelectric station. Among the existing methods for hydropower computation, equal monthly hydropower output and dynamic programming are the most commonly used, but both are computationally complex and hard to carry out manually. By taking advantage of the data-processing ability of Microsoft Excel and its built-in Visual Basic for Applications (VBA) language, this complex hydropower computation can be achieved easily. An example was analyzed with both methods, each implemented in VBA. VBA demonstrates its power in solving computationally complex problems, in visualization, and in secondary data processing. The results show that the dynamic programming method performed better than the other.
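The basic quantity behind any such hydropower computation is the output power P = rho * g * Q * H * eta. A minimal sketch, with made-up flow and head figures rather than values from the article, and in Python rather than VBA:

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3s, head_m, efficiency):
    """Hydropower output in MW: P = rho * g * Q * H * eta."""
    return RHO * G * flow_m3s * head_m * efficiency / 1e6

def monthly_energy_gwh(flow_m3s, head_m, efficiency, hours=730.0):
    """Energy generated over one average month (about 730 h), in GWh."""
    return hydro_power_mw(flow_m3s, head_m, efficiency) * hours / 1000.0

# Hypothetical station: 100 m^3/s flow, 50 m head, 90% overall efficiency.
print(hydro_power_mw(100.0, 50.0, 0.9))      # about 44.1 MW
print(monthly_energy_gwh(100.0, 50.0, 0.9))  # about 32.2 GWh per month
```

The "equal monthly output" method would repeat this calculation per month against the flow record, adjusting releases so every month's output matches; dynamic programming instead searches over monthly release decisions for the schedule with the best total value.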

  9. Decoding of four movement directions using hybrid NIRS-EEG brain-computer interface

    Directory of Open Access Journals (Sweden)

    M. Jawad Khan

    2014-04-01

    The multimodal technology of the hybrid brain-computer interface (BCI) enables precise brain-signal classification that can be used in the formulation of control commands. In the present study, an experimental hybrid near-infrared spectroscopy-electroencephalography (NIRS-EEG) technique was used to extract and decode four different types of brain signals. The NIRS setup was positioned over the prefrontal brain region, and the EEG over the left and right motor cortex regions. Twelve subjects participating in the experiment were shown four direction symbols, namely, forward, backward, left and right. The control commands for forward and backward movement were estimated by performing arithmetic mental tasks related to oxy-hemoglobin (HbO) changes. The left and right direction commands were associated with right- and left-hand tapping, respectively. The high classification accuracies achieved show that the four different control signals can be accurately estimated using the hybrid NIRS-EEG technology.

  10. Carbon nanotube reinforced hybrid composites: Computational modeling of environmental fatigue and usability for wind blades

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2015-01-01

    The potential of advanced carbon/glass hybrid reinforced composites with secondary carbon nanotube reinforcement for wind energy applications is investigated here with the use of computational experiments. Fatigue behavior of hybrid as well as glass and carbon fiber reinforced composites with and without secondary CNT reinforcement is simulated using multiscale 3D unit cells. The materials' behavior under both mechanical cyclic loading and combined mechanical and environmental loading (with phase properties degraded due to moisture effects) is studied. The multiscale unit cells are generated… Composites with the secondary CNT reinforcements (especially aligned tubes) present superior fatigue performance compared to those without reinforcements, also under combined environmental and cyclic mechanical loading. This effect is stronger for carbon composites than for hybrid and glass composites.

  11. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    Science.gov (United States)

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  12. Newnes circuit calculations pocket book with computer programs

    CERN Document Server

    Davies, Thomas J

    2013-01-01

    Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.

  13. CICT Computing, Information, and Communications Technology Program

    Science.gov (United States)

    Laufenberg, Lawrence; Tu, Eugene (Technical Monitor)

    2002-01-01

    The CICT Program is part of the NASA Aerospace Technology Enterprise's fundamental technology thrust to develop tools, processes, and technologies that enable new aerospace system capabilities and missions. The CICT Program's four key objectives are to: provide seamless access to NASA resources (including ground-, air-, and space-based distributed information technology resources) so that NASA scientists and engineers can more easily control missions, make new scientific discoveries, and design next-generation space vehicles; provide high-rate data delivery from these assets directly to users for missions; develop goal-oriented, human-centered systems; and research, develop and evaluate revolutionary technology.

  14. Epigenetic Programming: The Challenge to Species Hybridization

    Institute of Scientific and Technical Information of China (English)

    Ryo Ishikawa; Tetsu Kinoshita

    2009-01-01

    In many organisms, the genomes of individual species are isolated by a range of reproductive barriers that act before or after fertilization. Successful mating between species results in the presence of different genomes within a cell (hybridization), which can lead to incompatibility in cellular events due to adverse genetic interactions. In addition to such genetic interactions, recent studies have shown that the epigenetic control of the genome, silencing of transposons, control of non-additive gene expression and genomic imprinting might also contribute to reproductive barriers in plant and animal species. These genetic and epigenetic mechanisms play a significant role in the prevention of gene flow between species. In this review, we focus on aspects of epigenetic control related to hybrid incompatibility during species hybridization, and also consider key mechanism(s) in the interaction between different genomes.

  15. The engineering design integration (EDIN) system. [digital computer program complex

    Science.gov (United States)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  16. Applications integration in a hybrid cloud computing environment: modelling and platform

    Science.gov (United States)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds and their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and intra-enterprise ISs. A run-time platform is developed and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  17. A Research Program in Computer Technology

    Science.gov (United States)

    1990-12-31

    systems. The initial program applications included an autonomous land vehicle, a pilot's associate, and a carrier battle group battle management system...and Internet connectivity (Telnet). The environment provided by these nodes and servers conceals the fine-grain detail from outside users; users

  18. Energy management programs - computer technology, a tool

    Energy Technology Data Exchange (ETDEWEB)

    Perron, G

    1996-08-01

    Energy management systems were defined and reviewed, focusing on how the development in computer technology has impacted on the development of energy management systems. It was shown that the rise of micro-computer systems made it possible to create a tool that is well adapted to the urgent need for optimizing electromechanical systems to meet energy reduction criteria while still maintaining occupant comfort. Two case studies were cited to show the kind of savings realized by the different energy management systems installed. Besides managing energy, energy management systems can also help in detecting certain operating failures or irregularities in equipment configurations, monitoring and measuring energy consumption, as well as performing such peripherally related functions as gathering data about operating and space temperatures.

  19. Gender Digital Divide and Challenges in Undergraduate Computer Science Programs

    Science.gov (United States)

    Stoilescu, Dorian; McDougall, Douglas

    2011-01-01

    Previous research revealed a reduced number of female students registered in computer science studies. In addition, the female students feel isolated, have reduced confidence, and underperform. This article explores differences between female and male students in undergraduate computer science programs in a mid-size university in Ontario. Based on…

  20. Computing, Information, and Communications Technology (CICT) Program Overview

    Science.gov (United States)

    VanDalsem, William R.

    2003-01-01

    The Computing, Information and Communications Technology (CICT) Program's goal is to enable NASA's Scientific Research, Space Exploration, and Aerospace Technology Missions with greater mission assurance, for less cost, with increased science return through the development and use of advanced computing, information and communication technologies.

  1. The Westinghouse Hanford Company Unclassified Computer Security Program

    Energy Technology Data Exchange (ETDEWEB)

    Gurth, R.J.

    1994-02-01

    This paper describes the evolution of the Westinghouse Hanford Company (WHC) Unclassified Computer Security (UCS) Program over the past seven years. The intent has been to satisfy the requirements included in the DOE Order 1360.2B (DOE 1992) for Unclassified Computer Security in the most efficient and cost-effective manner.

  2. Computational Journalism. When journalism meets programming

    OpenAIRE

    Stavelin, Eirik

    2014-01-01

    Digital data sources and platforms allow journalists to produce news in new and different ways. The shift from an analog to digital workflow introduces computation as a central component of news production. This enables variability for end users, automation of tedious tasks for newsrooms, and allows journalists to tackle analysis of the increasingly large sets of data relevant to citizens. To journalism, computerization is a promising path for news production, particularly for ...

  3. Debugging Geographers: Teaching Programming to Non-Computer Scientists

    Science.gov (United States)

    Muller, Catherine L.; Kidd, Chris

    2014-01-01

    The steep learning curve associated with computer programming can be a daunting prospect, particularly for those not well aligned with this way of logical thinking. However, programming is a skill that is becoming increasingly important. Geography graduates entering careers in atmospheric science are one example of a particularly diverse group who…

  4. Computer program for high pressure real gas effects

    Science.gov (United States)

    Johnson, R. C.

    1969-01-01

    Computer program obtains the real-gas isentropic flow functions and thermodynamic properties of gases for which the equation of state is known. The program uses FORTRAN 4 subroutines which were designed for calculations of nitrogen and helium. These subroutines are easily modified for calculations of other gases.
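
    The real-gas subroutines themselves are not reproduced in the record, but the ideal-gas limit of the isentropic flow functions such a program computes can be sketched in a few lines (a hypothetical illustration, not the FORTRAN 4 code itself):

```python
def isentropic_ratios(mach, gamma=1.4):
    """Ideal-gas isentropic flow ratios at a given Mach number.

    Returns (T/T0, p/p0, rho/rho0) relative to stagnation conditions.
    A real-gas code like the one described replaces these closed forms
    with integrations of the actual equation of state.
    """
    t_ratio = 1.0 / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))
    return t_ratio, p_ratio, rho_ratio

# Sonic conditions in air: T/T0 = 0.8333, p/p0 = 0.5283, rho/rho0 = 0.6339
t, p, rho = isentropic_ratios(1.0)
```

    Swapping in nitrogen or helium, as the original subroutines did, amounts to changing `gamma` here; the real-gas corrections are the part that required the full program.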

  5. Investigating Difficulties of Learning Computer Programming in Saudi Arabia

    Science.gov (United States)

    Alakeel, Ali M.

    2015-01-01

    Learning computer programming is one of the main requirements of many educational study plans in higher education. Research has shown that many students face difficulties acquiring reasonable programming skills during their first year of college. In Saudi Arabia, there are twenty-three state-owned universities scattered around the country that…

  6. The Hyper Apuntes Interactive Learning Environment for Computer Programming Teaching.

    Science.gov (United States)

    Sommaruga, Lorenzo; Catenazzi, Nadia

    1998-01-01

    Describes the "Hyper Apuntes" interactive learning environment, used as a didactic support to a computer programming course taught at the University Carlos III of Madrid, Spain. The system allows students to study the material and see examples, edit, compile and run programs, and evaluate their learning degree. It is installed on a Web server,…

  7. Computer program simplifies selection of structural steel columns

    Science.gov (United States)

    Vissing, G. S.

    1966-01-01

    Computer program rapidly selects appropriate size steel columns and base plates for construction of multistory structures. The program produces a printed record containing the size of a section required at a particular elevation, the stress produced by the loads, and the allowable stresses for that section.

  8. Computer Technology and Its Impact on Recreation and Sport Programs.

    Science.gov (United States)

    Ross, Craig M.

    This paper describes several types of computer programs that can be useful to sports and recreation programs. Computerized tournament scheduling software is helpful to recreation and parks staff working with tournaments of 50 teams/individuals or more. Important features include team capacity, league formation, scheduling conflicts, scheduling…

  9. 40 CFR Appendix C to Part 66 - Computer Program

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part...

  10. A Research Program in Computer Technology

    Science.gov (United States)

    1979-01-01

    Recoverable fragments from the report's front matter: Donzeau-Gouge, V., G. Kahn, and B. Lang, A Complete Machine-Checked Definition of a Simple Programming Language Using Denotational Semantics, IRIA Laboria, Technical Report 330, October 1978; Donzeau-Gouge, V., G. Kahn, and B. Lang, Formal Definition of Ada, Honeywell; ARPANET TENEX Service, technical staff: Marion McKinley, Jr., William H. Moore, Robert Hines, Serge Poievitzky, Edward…

  11. Portable computer system architecture for the Space Station Freedom program

    Science.gov (United States)

    Alena, Richard; Liu, Yuan-Kwei; Fernquist, Alan R.

    1993-01-01

    This paper outlines various mission requirements and technical approaches that support the potential use of portable computers in several defined activities within the Space Station Freedom (SSF) program. Specifically, the use of portable computers as consoles for both spacecraft control and payload applications is presented. Various issues and proposed solutions regarding the incorporation of portable computers within the program are presented. The primary issues presented regard architecture (standard interface for expansion, advanced processors and displays), integration (methods of high-speed data communication, peripheral interfaces, and interconnectivity within various support networks), and evolution (wireless communications and multimedia data interface methods).

  12. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    OpenAIRE

    Lukas Falat; Dusan Marcek; Maria Durisova

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial domain, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the sug...

  13. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2014-03-01

    Full Text Available This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system) where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is supposed to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated on two selected problems: SAT and image retrieval.
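
    The record gives no algorithmic detail, but the SAT instances such a model targets are easy to state classically. A minimal brute-force checker (illustrative only; the hybrid membrane/quantum model explores assignments in parallel rather than sequentially, and the clause encoding below is an assumption):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Check satisfiability of a CNF formula by exhaustive enumeration.

    `clauses` is a list of clauses; each clause is a list of nonzero ints,
    where k means variable k and -k means its negation (DIMACS-style).
    """
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits          # satisfying assignment found
    return None                  # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
model = brute_force_sat(cnf, 3)
```

    The 2^n enumeration is exactly the search space a biomolecular or quantum device is meant to attack without the sequential loop.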

  14. Programs=data=first-class citizens in a computational world.

    Science.gov (United States)

    Jones, Neil D; Simonsen, Jakob Grue

    2012-07-28

    From a programming perspective, Alan Turing's epochal 1936 paper on computable functions introduced several new concepts, including what is today known as self-interpreters and programs as data, and invented a great many now-common programming techniques. We begin by reviewing Turing's contribution from a programming perspective; and then systematize and mention some of the many ways that later developments in models of computation (MOCs) have interacted with computability theory and programming language research. Next, we describe the 'blob' MOC: a recent stored-program computational model without pointers. In the blob model, programs are truly first-class citizens, capable of being automatically compiled, or interpreted, or executed directly. Further, the blob model appears closer to being physically realizable than earlier computation models. In part, this is due to strong finiteness owing to early binding in the program; and a strong adjacency property: the active instruction is always adjacent to the piece of data on which it operates. The model is Turing complete in a strong sense: a universal interpretation algorithm exists that is able to run any program in a natural way and without arcane data encodings. Next, some of the best known among the numerous existing MOCs are described, and we develop a list of traits an 'ideal' MOC should possess from our perspective. We make no attempt to consider all models put forth since Turing's 1936 paper, and the selection of models covered concerns only models with discrete, atomic computation steps. The next step is to classify the selected models by qualitative rather than quantitative features. Finally, we describe how the blob model differs from an 'ideal' MOC, and identify some natural next steps to achieve such a model.

  15. Computer program for automatic generation of BWR control rod patterns

    Energy Technology Data Exchange (ETDEWEB)

    Taner, M.S.; Levine, S.H.; Hsia, M.Y. (Pennsylvania State Univ., University Park (United States))

    1990-01-01

    A computer program named OCTOPUS has been developed to automatically determine a control rod pattern that approximates some desired target power distribution as closely as possible without violating any thermal safety or reactor criticality constraints. The program OCTOPUS performs a semi-optimization task based on the method of approximation programming (MAP) to develop control rod patterns. The SIMULATE-E code is used to determine the nucleonic characteristics of the reactor core state.

  16. Preliminary design data package. Appendices C1 and C3. [HYBRID 2; VSYS; and CRASH

    Energy Technology Data Exchange (ETDEWEB)

    1979-07-25

    The computer programs, including HYBRID, VSYS, VEHIC and CRASH, used to compute the energy and fuel consumption, life-cycle costs and performance characteristics of a hybrid electric-powered vehicle are described and their use documented. (LCL)

  17. DNA computation model to solve 0-1 programming problem.

    Science.gov (United States)

    Zhang, Fengyue; Yin, Zhixiang; Liu, Bo; Xu, Jin

    2004-01-01

    The 0-1 programming problem is an important problem in operations research with very widespread applications. In this paper, a new DNA computation model utilizing solution-based and surface-based methods is presented to solve the 0-1 programming problem. This model combines the major benefits of both solution-based and surface-based methods, including vast parallelism, extraordinary information density, and ease of operation. The result, verified by biological experimentation, revealed the potential of DNA computation in solving complex programming problems.
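
    For reference, the problem being solved is: maximize c·x subject to A·x ≤ b with each x_i in {0, 1}. A classical brute-force sketch (illustrative; the DNA model encodes the same search space in strands rather than enumerating it, and the instance data below is made up):

```python
from itertools import product

def solve_01_program(c, A, b):
    """Brute-force solver for: maximize c.x subject to A.x <= b, x in {0,1}^n."""
    n = len(c)
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=n):
        # Keep only assignments satisfying every linear constraint
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= rhs
               for row, rhs in zip(A, b)):
            val = sum(c_i * x_i for c_i, x_i in zip(c, x))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Small knapsack-style instance (hypothetical data)
c = [6, 10, 12]          # objective coefficients
A = [[1, 2, 3]]          # single weight constraint
b = [5]                  # capacity
best_x, best_val = solve_01_program(c, A, b)
```

    The 2^n enumeration is what makes the problem attractive for massively parallel DNA encodings.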

  18. Programs=data=first-class citizens in a computational world

    DEFF Research Database (Denmark)

    Jones, Neil; Simonsen, Jakob Grue

    2012-01-01

    From a programming perspective, Alan Turing's epochal 1936 paper on computable functions introduced several new concepts, including what is today known as self-interpreters and programs as data, and invented a great many now-common programming techniques. We begin by reviewing Turing's contribution...... concerns only models with discrete, atomic computation steps. The next step is to classify the selected models by qualitative rather than quantitative features. Finally, we describe how the blob model differs from an ‘ideal’ MOC, and identify some natural next steps to achieve such a model....

  19. On Computational Power of Quantum Read-Once Branching Programs

    Directory of Open Access Journals (Sweden)

    Farid Ablayev

    2011-03-01

    Full Text Available In this paper we review our current results concerning the computational power of quantum read-once branching programs. First of all, based on the circuit presentation of quantum branching programs and our variant of the quantum fingerprinting technique, we show that any Boolean function with a linear polynomial presentation can be computed by a quantum read-once branching program using a relatively small (usually logarithmic in the size of the input) number of qubits. Then we show that the described class of Boolean functions is closed under polynomial projections.

  20. A Tangible Programming Tool for Children to Cultivate Computational Thinking

    Science.gov (United States)

    Wang, Danli; Liu, Zhen

    2014-01-01

    Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5–9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity. PMID:24719575

  1. A Tangible Programming Tool for Children to Cultivate Computational Thinking

    Directory of Open Access Journals (Sweden)

    Danli Wang

    2014-01-01

    Full Text Available Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5–9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity.

  2. A tangible programming tool for children to cultivate computational thinking.

    Science.gov (United States)

    Wang, Danli; Wang, Tingting; Liu, Zhen

    2014-01-01

    Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity.

  3. Computationally efficient double hybrid density functional theory using dual basis methods

    CERN Document Server

    Byrd, Jason N

    2015-01-01

    We examine the application of the recently developed dual basis methods of Head-Gordon and co-workers to double hybrid density functional computations. Using the B2-PLYP, B2GP-PLYP, DSD-BLYP and DSD-PBEP86 density functionals, we assess the performance of dual basis methods for the calculation of conformational energy changes in C4-C7 alkanes and for the S22 set of noncovalent interaction energies. The dual basis methods, combined with resolution-of-the-identity second-order Møller-Plesset theory, are shown to give results in excellent agreement with conventional methods at a much reduced computational cost.

  4. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
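
    BITFLIPS itself operates on a live process through Valgrind, but the core fault-injection step — flipping a random bit under a given per-byte fault probability — can be sketched on an explicit buffer (a hypothetical illustration, not BITFLIPS code):

```python
import random

def inject_seus(buf, fault_prob, seed=0):
    """Flip one random bit in each byte, independently, with probability
    fault_prob (a per-byte SEU probability). Returns the number of flips."""
    rng = random.Random(seed)  # deterministic, so fault campaigns are repeatable
    flips = 0
    for i in range(len(buf)):
        if rng.random() < fault_prob:
            buf[i] ^= 1 << rng.randrange(8)  # XOR flips exactly one bit
            flips += 1
    return flips

memory = bytearray(1024)  # a zeroed 1 KiB "memory image"
n_flips = inject_seus(memory, fault_prob=0.01)
```

    A fault rate in SEUs per byte per second, as BITFLIPS accepts, reduces to such a per-byte probability once an exposure time is fixed.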

  5. The Use of a Computer for Programmed Instruction Presentation of a Pre-School Classification Program.

    Science.gov (United States)

    Holland, James G.

    Certain tasks in programmed instruction can be performed only by computer. One such area is the arrangement of differential reinforcement for sophisticated reinforcement contingencies. That is, the capacity of the computer is required to determine whether the student has met the criterion for reinforcement. With this in mind, a computer-controlled…

  6. Hybrid propulsion technology program. Volume 1: Conceptional design package

    Science.gov (United States)

    Jensen, Gordon E.; Holzman, Allen L.; Leisch, Steven O.; Keilbach, Joseph; Parsley, Randy; Humphrey, John

    1989-01-01

    A concept design study was performed to configure two sizes of hybrid boosters; one which duplicates the advanced shuttle rocket motor vacuum thrust time curve and a smaller, quarter thrust level booster. Two sizes of hybrid boosters were configured for either pump-fed or pressure-fed oxygen feed systems. Performance analyses show improved payload capability relative to a solid propellant booster. Size optimization and fuel safety considerations resulted in a 4.57 m (180 inch) diameter large booster with an inert hydrocarbon fuel. The preferred diameter for the quarter thrust level booster is 2.53 m (96 inches). As part of the design study critical technology issues were identified and a technology acquisition and demonstration plan was formulated.

  7. Multithreaded transactions in scientific computing. The Growth06_v2 program

    Science.gov (United States)

    Daniluk, Andrzej

    2009-07-01

    efficient than the previous ones [3]. Summary of revisions: The design pattern (See Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. A graphical user interface (GUI) for the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: The figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers versions 6 or later (including Borland Developer Studio 2006 and Code Gear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned Static classes model for Transaction design pattern. A model of a window that shows how onscreen objects connect to use cases. Running time: The typical running time is machine and user-parameters dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.

  8. Adaptive, Active and Multifunctional Composite and Hybrid Materials Program: Composite and Hybrid Materials ERA

    Science.gov (United States)

    2014-04-01

    Recoverable fragments of the report's table of contents and text: 4.2.4.3 Fabrication and Modeling of Rubber Muscle Actuators; 4.2.4.4 Modeling of Power Response of SMP/SMA...; Processing of BMI/Preceramic Polymer Blends; Task 9.0 Hybrid Material Processing and Fabrication; "...electrical stimulus, similar in action to the natural response of the conformation of a bird wing during flight vs. takeoff or landing, a muscle pair..."

  9. Application of modern computer technology to EPRI (Electric Power Research Institute) nuclear computer programs: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Feinauer, L.R.

    1989-08-01

    Many of the nuclear analysis programs in use today were designed and developed well over a decade ago. Within this time frame, tremendous changes in hardware and software technologies have made it necessary to revise and/or restructure most of the analysis programs to take advantage of these changes. As computer programs mature from the development phase to being production programs, program maintenance and portability become very important issues. The maintenance costs associated with a particular computer program can generally be expected to exceed the total development costs by as much as a factor of two. Many of the problems associated with high maintenance costs can be traced back to either poorly designed coding structure, or ''quick fix'' modifications which do not preserve the original coding structure. The lack of standardization between hardware designs presents an obstacle to the software designer in providing 100% portable coding; however, conformance to certain guidelines can ensure portability between a wide variety of machines and operating systems. This report presents guidelines for upgrading EPRI nuclear computer programs to conform to current programming standards while maintaining flexibility for accommodating future hardware and software design trends. Guidelines for development of new computer programs are also presented. 22 refs., 10 figs.

  10. 16th International Conference on Hybrid Intelligent Systems and the 8th World Congress on Nature and Biologically Inspired Computing

    CERN Document Server

    Haqiq, Abdelkrim; Alimi, Adel; Mezzour, Ghita; Rokbani, Nizar; Muda, Azah

    2017-01-01

    This book presents the latest research in hybrid intelligent systems. It includes 57 carefully selected papers from the 16th International Conference on Hybrid Intelligent Systems (HIS 2016) and the 8th World Congress on Nature and Biologically Inspired Computing (NaBIC 2016), held on November 21–23, 2016 in Marrakech, Morocco. HIS - NaBIC 2016 was jointly organized by the Machine Intelligence Research Labs (MIR Labs), USA; Hassan 1st University, Settat, Morocco and University of Sfax, Tunisia. Hybridization of intelligent systems is a promising research field in modern artificial/computational intelligence and is concerned with the development of the next generation of intelligent systems. The conference’s main aim is to inspire further exploration of the intriguing potential of hybrid intelligent systems and bio-inspired computing. As such, the book is a valuable resource for practicing engineers /scientists and researchers working in the field of computational intelligence and artificial intelligence.

  11. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    Science.gov (United States)

    2013-09-01

    Recoverable fragments of the report's text: "Clustering using MapReduce," Workshop on Trends in High-Performance Distributed Computing, Vrije Universiteit, Amsterdam, NL (invited talk); "...and middleware packages for polarizable force fields on multi-core and GPU systems, supported by the MapReduce paradigm," NSF MRI #0922657, $451,051; "High-throughput Molecular Datasets for Scalable Clustering using MapReduce," Workshop on Trends in High-Performance Distributed Computing, Vrije

  12. Experiences With Efficient Methodologies for Teaching Computer Programming to Geoscientists

    Science.gov (United States)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-08-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students with little or no computing background, is well-known to be a difficult task. However, there is also a wealth of evidence-based teaching practices for teaching programming skills which can be applied to greatly improve learning outcomes and the student experience. Adopting these practices naturally gives rise to greater learning efficiency - this is critical if programming is to be integrated into an already busy geoscience curriculum. This paper considers an undergraduate computer programming course, run during the last 5 years in the Department of Earth Science and Engineering at Imperial College London. The teaching methodologies that were used each year are discussed alongside the challenges that were encountered, and how the methodologies affected student performance. Anonymised student marks and feedback are used to highlight this, and also how the adjustments made to the course eventually resulted in a highly effective learning environment.

  13. A Methodology for Teaching Computer Programming: first year students’ perspective

    Directory of Open Access Journals (Sweden)

    Bassey Isong

    2014-09-01

    Full Text Available The teaching of computer programming is one of the greatest challenges that has persisted for years in Computer Science Education. A particular case is the computer programming course for beginners. While traditional objectivist lecture-based approaches do not actively engage students in achieving their learning outcomes, we believe that integrating cutting-edge processes and practices like agile methods into the teaching approach will provide leverage. Agile software development has gained widespread popularity and acceptance in the software industry, and integrating its ideas into teaching will be constructive. In the educational system, while the positive impact of agile principles has been felt on students' projects, none has been experienced on the teaching side. Therefore, this paper proposes the use of an agile process in the teaching of first-year programming courses. The goal is to help beginners develop their programming skills, offer a teaching approach that maximizes students' chances of engagement, and improve teaching as teachers reflect on what they are teaching and what the students are learning. Additionally, beginners will be able to operate the computer, program, and improve their programming skills through active team collaboration, while teachers manage large classes effectively.

  14. Automatic Generation of Very Efficient Programs by Generalized Partial Computation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Generalized Partial Computation (GPC) is a program transformation method utilizing partial information about input data, properties of auxiliary functions and the logical structure of a source program. GPC uses both an inference engine such as a theorem prover and a classical partial evaluator to optimize programs. Therefore, GPC is more powerful than classical partial evaluators but harder to implement and control. We have implemented an experimental GPC system called WSDFU (Waseda Simplify-Distribute-Fold-Unfold). This paper discusses the power of the program transformation system, its theorem prover and future work.

  15. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    Science.gov (United States)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resources to be synchronized and burst between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares for the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents, or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.

  16. Portability and Reusability, Standardized Programming for Present and Future Computers

    Science.gov (United States)

    Dumont, Jean-Jacques; Tomassini, Marco

    Unstructured sequential programming in Fortran, together with a top-down approach to problem analysis, has always been and still is the typical physicist's favorite method as far as computing is concerned. This unfortunate fact of life causes a tremendous loss of efficiency in code development and maintenance, which could easily be avoided by evolving toward a more modern, bottom-up programming style based on the new emerging standards (system interfaces, communication between computational nodes, object-oriented C extensions, graphical user interfaces, data structures, etc.). We are reaching the historical point where this evolution becomes mandatory if one wants to program properly, in a reasonably efficient way, the highly parallel machines now appearing on the market, to the delight of the numerous scientists who are badly in need of more computational power.

  17. Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    OpenAIRE

    Fedak, Gilles

    2015-01-01

    Since the mid-1990s, Desktop Grid Computing - i.e., the idea of using a large number of remote PCs distributed across the Internet to execute large parallel applications - has proved to be an efficient paradigm for providing large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broadening the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has ...

  18. Numerical approach for solving kinetic equations in two-dimensional case on hybrid computational clusters

    Science.gov (United States)

    Malkov, Ewgenij A.; Poleshkin, Sergey O.; Kudryavtsev, Alexey N.; Shershnev, Anton A.

    2016-10-01

The paper presents the software implementation of a Boltzmann equation solver based on a deterministic finite-difference method. The solver allows one to carry out parallel computations of rarefied flows on a hybrid computational cluster with an arbitrary number of central processing units (CPUs) and graphics processing units (GPUs). Employment of GPUs leads to a significant acceleration of the computations, which enables us to simulate two-dimensional flows with high resolution in a reasonable time. The developed numerical code was validated by comparing the obtained solutions with Direct Simulation Monte Carlo (DSMC) data. For this purpose the supersonic flow past a flat plate at zero angle of attack is used as a test case.
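The deterministic finite-difference approach mentioned in this record can be illustrated with a much simpler stand-in (our own sketch, not the authors' solver): a first-order upwind step for the advection part of a one-dimensional kinetic equation.

```python
def upwind_step(f, v, dx, dt):
    """One explicit first-order upwind step for df/dt + v df/dx = 0, v > 0.

    Periodic in x: f[i - 1] wraps to f[-1] at the left boundary.
    """
    c = v * dt / dx                      # CFL number; stable for 0 <= c <= 1
    return [f[i] - c * (f[i] - f[i - 1]) for i in range(len(f))]

# Usage: advect a square pulse to the right on a periodic grid.
f = [1.0 if 4 <= i < 8 else 0.0 for i in range(32)]
for _ in range(10):
    f = upwind_step(f, v=1.0, dx=1.0, dt=0.5)
```

The same stencil, evaluated independently at every grid point, is what makes such schemes easy to distribute over many CPU and GPU cores.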

  19. Hybrid annealing using a quantum simulator coupled to a classical computer

    CERN Document Server

    Graß, Tobias

    2016-01-01

Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Simulated annealing is a computational technique which explores the configuration space by mimicking thermal noise. By slow cooling, it freezes the system in a low-energy configuration, but the algorithm often gets stuck in local minima. In quantum annealing, the thermal noise is replaced by controllable quantum fluctuations, and the technique can be implemented in modern quantum simulators. However, quantum-adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here we propose a strategy which combines ideas from simulated annealing and quantum annealing. In this hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configurati...
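The division of labor described in this record can be caricatured in a few lines. This is a purely classical toy stand-in, not the paper's algorithm: the "sampler" below uses ordinary pseudorandom flips in place of quantum fluctuations and projective measurements, while the classical loop evaluates energies and keeps the best configuration.

```python
import random

def energy(spins):
    """Toy rugged cost: antiferromagnetic ring (ground-state energy is -len(spins))."""
    return sum(spins[i] * spins[(i + 1) % len(spins)] for i in range(len(spins)))

def sample_batch(seed_cfg, n_samples, n_flips):
    """Stand-in for repeated projective measurements: random flips around seed_cfg."""
    batch = []
    for _ in range(n_samples):
        cfg = list(seed_cfg)
        for _ in range(n_flips):
            i = random.randrange(len(cfg))
            cfg[i] = -cfg[i]
        batch.append(cfg)
    return batch

def hybrid_anneal(n=16, rounds=40):
    cfg = [random.choice([-1, 1]) for _ in range(n)]
    for r in range(rounds):
        n_flips = max(1, n // (r + 1))        # "cooling": shrink the fluctuation strength
        batch = sample_batch(cfg, 20, n_flips)
        cfg = min(batch + [cfg], key=energy)  # classical device keeps the best sample
    return cfg
```

The essential structure matches the abstract: a stochastic sampler proposes configurations, and a deterministic classical post-processing step selects among them while the fluctuation strength is gradually reduced.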

  20. Step Response Enhancement of Hybrid Stepper Motors Using Soft Computing Techniques

    Directory of Open Access Journals (Sweden)

    Amged S. El-Wakeel

    2014-05-01

Full Text Available This paper presents the use of different soft computing techniques for step-response enhancement of hybrid stepper motors. The basic differential equations of the hybrid stepper motor are used to build a model in the MATLAB software package. Fuzzy Logic (FL) and Proportional-Integral-Derivative (PID) controllers are implemented to improve the motor performance. Numerical simulations with a PC-based controller show that the PID controller tuned by a Genetic Algorithm (GA) produces better performance than the one tuned by a Fuzzy controller, and that the Fuzzy PID-like controller produces better performance than the other linear Fuzzy controllers. Finally, a comparison between the GA-tuned PID controller and the Fuzzy PID-like controller shows that the Fuzzy PID-like controller produces the better performance.
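The GA-tuned PID idea can be sketched in miniature. This is our own illustration, not the paper's MATLAB model: a tiny genetic algorithm tunes (kp, ki, kd) on a first-order toy plant dy/dt = (u - y)/tau, scoring each gain set by the integral of absolute error for a unit step command.

```python
import random

def step_cost(kp, ki, kd, tau=0.5, dt=0.01, t_end=3.0):
    """Integral of absolute error for a unit step on a toy first-order plant."""
    y = err_int = prev_err = cost = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        err_int += err * dt
        u = kp * err + ki * err_int + kd * (err - prev_err) / dt
        prev_err = err
        y += (u - y) / tau * dt          # plant: dy/dt = (u - y) / tau
        if abs(y) > 1e6:                 # unstable gain set: reject with a big cost
            return 1e9
        cost += abs(err) * dt
    return cost

def ga_tune(pop_size=20, gens=15):
    """Tiny GA: elitism plus Gaussian mutation over (kp, ki, kd)."""
    pop = [[random.uniform(0, 5), random.uniform(0, 5), random.uniform(0, 0.3)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: step_cost(*g))
        elite = pop[:pop_size // 4]
        pop = elite + [[max(0.0, gi + random.gauss(0, 0.3))
                        for gi in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda g: step_cost(*g))
```

A real study would use the motor's full differential equations and a richer GA (crossover, tournament selection); the point here is only the loop structure of fitness-driven controller tuning.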

  1. Four-Cylinder Stirling-Engine Computer Program

    Science.gov (United States)

    Daniele, C. J.; Lorenzo, C. F.

    1986-01-01

Computer program developed for simulating steady-state and transient performance of a four-cylinder Stirling engine. In the model, the four cylinders are interconnected by four working spaces. Each working space contains seven volumes: one each for the expansion space, heater, cooler, and compression space, and three for the regenerator. A thermal time constant for the regenerator mass is associated with each regenerator gas volume. The former code generates results very quickly, since it has only 14 state variables and no energy equation. The current code is then used to study various aspects of the Stirling engine in much more detail. Program written in FORTRAN IV for use on an IBM 370 computer.

  2. WHIPICE. [Computer Program for Analysis of Aircraft Deicing

    Science.gov (United States)

    1992-01-01

This video documents efforts by NASA Lewis Research Center researchers to improve ice protection for aircraft. A new system of deicing aircraft, by allowing a thin sheet of ice to develop and then breaking it into particles, is being examined, particularly to determine the extent of resulting shed-ice ingestion by jet engines. The process is documented by a high-speed imaging system that scans the breakup and flow of the ice particles at 1000 frames per second. These data are then digitized and analyzed using a computer program called WHIPICE, which analyzes grey-scale images of the ice particles. A detailed description of the operation of this computer program is provided.

  3. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Maynard, Matthew R; Geyer, John W; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Aris, John P [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shifrin, Roger Y, E-mail: wbolch@ufl.edu [Department of Radiology, University of Florida, Gainesville, FL (United States)

    2011-08-07

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations

  4. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    Science.gov (United States)

    Maynard, Matthew R.; Geyer, John W.; Aris, John P.; Shifrin, Roger Y.; Bolch, Wesley

    2011-08-01

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR™ and then imported to the 3D modeling software package Rhinoceros™ for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations in

  5. A hybrid model for the computationally-efficient simulation of the cerebellar granular layer

    Directory of Open Access Journals (Sweden)

    Anna eCattani

    2016-04-01

Full Text Available The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is taken to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.
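The discrete/continuous coupling can be caricatured in a toy form (our own sketch, far simpler than the cerebellar model): a few discretely simulated integrate-and-fire cells drive, and are driven by, a single continuous population-rate variable standing in for a dense species.

```python
def simulate(steps=2000, dt=0.1, n_cells=5, drive=1.5):
    """A few discrete integrate-and-fire cells coupled to one continuous rate variable."""
    v = [0.0] * n_cells      # membrane potentials of the discretely modeled cells
    rate = 0.0               # continuous population activity (the "PDE side" stand-in)
    total_spikes = 0
    for _ in range(steps):
        fired = 0
        for i in range(n_cells):
            v[i] += dt * (-v[i] + drive + 0.5 * rate)   # leak + drive + population feedback
            if v[i] >= 1.0:                             # threshold crossing: spike and reset
                v[i] = 0.0
                fired += 1
        rate += dt * (-rate + fired / (n_cells * dt))   # rate relaxes toward firing activity
        total_spikes += fired
    return total_spikes, rate
```

Because the identical cells receive identical input, they fire in lockstep, a degenerate analogue of the microcircuit synchronization the abstract mentions; the computational saving comes from replacing a dense population with the single variable `rate`.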

  6. Computer programs: Information retrieval and data analysis, a compilation

    Science.gov (United States)

    1972-01-01

The items presented in this compilation are divided into two sections. Section one treats computer usage devoted to the retrieval of information, affording the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  7. FLUENT and FLUENT/BFC CFD computer programs

    Science.gov (United States)

    Patel, Bharatan R.

In the scope of the 1990-04 lecture series on computational fluid dynamics, two computer programs are described. The FLUENT and FLUENT/BFC codes are well suited for simulating turbulent flows encountered in industrial applications; their numerical schemes are first- and/or second-order accurate, and they offer a large number of physical models for simulating a variety of flows. The NEKTON code, well suited for laminar and transitional flow computations, is also outlined; it is a finite element code that uses pseudo-spectral formulations.

  8. An Analysis on Distance Education Computer Programming Students' Attitudes Regarding Programming and Their Self-Efficacy for Programming

    Science.gov (United States)

    Ozyurt, Ozcan

    2015-01-01

This study aims to analyze the attitudes of students studying computer programming through distance education regarding programming, their self-efficacy for programming, and the relation between these two factors. The study is conducted with 104 students being taught through distance education at a university in the northern region of Turkey in…

  9. Computational and experimental determinations of the UV adsorption of polyvinylsilsesquioxane-silica and titanium dioxide hybrids.

    Science.gov (United States)

    Wang, Haiyan; Lin, Derong; Wang, Di; Hu, Lijiang; Huang, Yudong; Liu, Li; Loy, Douglas A

    2014-01-01

Sunscreens that absorb UV light without photodegradation could reduce skin cancer. Polyvinyl silsesquioxanes are known to have greater thermal and photochemical stability than organic compounds such as those in sunscreens. This paper evaluates, experimentally and computationally, the UV transparency of vinyl silsesquioxanes (VS) and their hybrids with SiO2 (VSTE) and TiO2 (VSTT). Films of VS were prepared by sol-gel polymerization of vinyltrimethoxysilane (VMS), using benzoyl peroxide as an initiator, with the formulated oligomer cured thermally. Similarly, VSTE films were prepared from VMS and 5-25 wt-% tetraethoxysilane (TEOS), and VSTT films from VMS and 5-25 wt-% titanium tetrabutoxide (TTB). Experimental average transparencies of the modified films were about 9-14% between 280-320 nm, 67-73% between 320-350 nm, and 86-89% between 350-400 nm. Computed band-gap absorption edges for the hybrids were in excellent agreement with the experimental data. VS, VSTE and VSTT showed good absorption in the UV-C and UV-B range, but absorbed virtually no UV-A. Addition of SiO2 or TiO2 does not improve UV-B absorption but, on the contrary, increases the transparency of thin films to UV; this increase was validated with molecular simulations. The results show that computational design can predict better sunscreens and reduce the effort of creating sunscreens capable of absorbing more UV-B and UV-A.

  10. The near-term hybrid vehicle program, phase 1

    Science.gov (United States)

    1979-01-01

Performance specifications were determined for a hybrid vehicle designed to achieve the greatest reduction in fuel consumption. Based on the results of systems-level studies, a baseline vehicle was constructed with the following basic parameters: a heat engine power peak of 53 kW (VW gasoline engine); a traction motor power peak of 30 kW (Siemens 1GV1, separately excited); a heat engine fraction of 0.64; a vehicle curb weight of 2080 kg; a lead-acid battery (35 kg weight); and a battery weight fraction of 0.17. The heat engine and the traction motor are coupled together, with their combined output driving a 3-speed automatic transmission with a lockup torque converter. The heat engine is equipped with a clutch which allows it to be decoupled from the system.

  11. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  12. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  13. Higher Order Modeling in Hybrid Approaches to the Computation of Electromagnetic Fields

    Science.gov (United States)

    Wilton, Donald R.; Fink, Patrick W.; Graglia, Roberto D.

    2000-01-01

    Higher order geometry representations and interpolatory basis functions for computational electromagnetics are reviewed. Two types of vector-valued basis functions are described: curl-conforming bases, used primarily in finite element solutions, and divergence-conforming bases used primarily in integral equation formulations. Both sets satisfy Nedelec constraints, which optimally reduce the number of degrees of freedom required for a given order. Results are presented illustrating the improved accuracy and convergence properties of higher order representations for hybrid integral equation and finite element methods.

  14. Public vs Private vs Hybrid vs Community - Cloud Computing: A Critical Review

    Directory of Open Access Journals (Sweden)

    Sumit Goyal

    2014-02-01

Full Text Available These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model to consider for your business? There are four types of cloud models available in the market: Public, Private, Hybrid and Community. This review paper answers the question of which model would be most beneficial for your business. All four models are defined, discussed and compared with their benefits and pitfalls, giving you a clear idea of which model to adopt for your organization.

  15. Quantum computation in a quantum-dot-Majorana-fermion hybrid system

    CERN Document Server

    Xue, Zheng-Yuan

    2012-01-01

We propose a scheme to implement universal quantum computation in a quantum-dot-Majorana-fermion hybrid system. Quantum information is encoded on pairs of Majorana fermions, which live at the interface between topologically trivial and nontrivial sections of a quantum nanowire deposited on an s-wave superconductor. Universal single-qubit gates on the topological qubit can be achieved. A measurement-based two-qubit controlled-NOT gate is produced with the help of parity measurements assisted by the quantum dot and followed by prescribed single-qubit gates. The parity measurement, on the quantum dot and a topological qubit, is achieved by the Aharonov-Casher effect.

  16. Hybrid EEG-EOG brain-computer interface system for practical machine control.

    Science.gov (United States)

    Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid

    2010-01-01

Practical issues such as accuracy across subjects, number of sensors, and training time are important problems of existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no action, and complete stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.
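The division of control described here (EOG for steering, EEG for go/stop) can be expressed as a simple per-frame fusion rule. This is our own illustration of the command mapping, not the authors' signal-processing pipeline, and the class labels are hypothetical stand-ins for classifier outputs.

```python
def fuse_commands(eog_class, eeg_class):
    """Combine per-frame classifier outputs into one machine command.

    eog_class: 'left' | 'right' | 'neutral'   (eye-movement channel)
    eeg_class: 'forward' | 'none' | 'stop'    (EEG channel)
    """
    if eeg_class == "stop":
        return "stop"                          # safety: stop overrides steering
    if eog_class in ("left", "right"):
        return eog_class                       # steering takes priority over cruising
    return "forward" if eeg_class == "forward" else "idle"
```

Giving the stop command absolute priority is a natural design choice for machine control, since a missed stop is more costly than a missed turn.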

  17. Treatment of early and late reflections in a hybrid computer model for room acoustics

    DEFF Research Database (Denmark)

    Naylor, Graham

    1992-01-01

The ODEON computer model for acoustics in large rooms is intended for use both in design (by predicting room acoustical indices quickly and easily) and in research (by forming the basis of an auralization system and allowing study of various room acoustical phenomena). These conflicting demands preclude the use of both "pure" image source and "pure" particle tracing methods. A hybrid model has been developed, in which rays discover potential image sources up to a specified order. Thereafter, the same ray tracing process is used in a different way to rapidly generate a dense reverberant decay...
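The image-source half of such a hybrid can be sketched in a few lines (our own illustration, not ODEON's implementation): mirroring the source across each wall of a rectangular room yields a virtual source whose straight line to the receiver reproduces the specular reflection path.

```python
def image_sources(src, room):
    """First-order image sources of src = (x, y, z) in a box [0,Lx]x[0,Ly]x[0,Lz]."""
    images = []
    for axis, length in enumerate(room):
        for wall in (0.0, length):
            img = list(src)
            img[axis] = 2.0 * wall - img[axis]   # mirror the source across the wall plane
            images.append(tuple(img))
    return images

def path_length(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

# Usage: arrival delays of the six first-order reflections (speed of sound 343 m/s).
room = (8.0, 6.0, 3.0)
delays = sorted(path_length(img, (5.0, 2.0, 1.5)) / 343.0
                for img in image_sources((2.0, 3.0, 1.2), room))
```

Higher-order images are obtained by mirroring recursively, which is why a pure image-source method explodes combinatorially and why a hybrid with ray tracing, as in the record above, becomes attractive for the late decay.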

  18. Assessment of asthmatic inflammation using hybrid fluorescence molecular tomography-x-ray computed tomography

    Science.gov (United States)

    Ma, Xiaopeng; Prakash, Jaya; Ruscitti, Francesca; Glasl, Sarah; Stellari, Fabio Franco; Villetti, Gino; Ntziachristos, Vasilis

    2016-01-01

    Nuclear imaging plays a critical role in asthma research but is limited in its readings of biology due to the short-lived signals of radio-isotopes. We employed hybrid fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) for the assessment of asthmatic inflammation based on resolving cathepsin activity and matrix metalloproteinase activity in dust mite, ragweed, and Aspergillus species-challenged mice. The reconstructed multimodal fluorescence distribution showed good correspondence with ex vivo cryosection images and histological images, confirming FMT-XCT as an interesting alternative for asthma research.

  19. Mixed model approaches for the identification of QTLs within a maize hybrid breeding program.

    NARCIS (Netherlands)

    Eeuwijk, van F.A.; Boer, M.; Totir, L.; Bink, M.C.A.M.; Wright, D.; Winkler, C.; Podlich, D.; Boldman, K.; Baumgarten, R.; Smalley, M.; Arbelbide, M.; Braak, ter C.J.F.; Cooper, M.

    2010-01-01

    Two outlines for mixed model based approaches to quantitative trait locus (QTL) mapping in existing maize hybrid selection programs are presented: a restricted maximum likelihood (REML) and a Bayesian Markov Chain Monte Carlo (MCMC) approach. The methods use the in-silico-mapping procedure developed

  20. Hybrid Food Preservation Program Improves Food Preservation and Food Safety Knowledge

    Science.gov (United States)

    Francis, Sarah L.

    2014-01-01

    The growing trend in home food preservation raises concerns about whether the resulting food products will be safe to eat. The increased public demand for food preservation information led to the development of the comprehensive food preservation program, Preserve the Taste of Summer (PTTS). PTTS is a comprehensive hybrid food preservation program…

  1. The Swedish electric and hybrid vehicle R, D and D program. Seminar no. 2, June 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-09-01

This publication presents a selection of the ongoing and finalised projects, in the form of abstracts, within the KFB R, D and D program on Electric and Hybrid Vehicles. These projects were presented at the second project manager seminar, 14-15 June 1999. The first project manager seminar was held 20-21 October 1998.

  3. Teaching Perspectives among Introductory Computer Programming Faculty in Higher Education

    Science.gov (United States)

    Mainier, Michael J.

    2011-01-01

    This study identified the teaching beliefs, intentions, and actions of 80 introductory computer programming (CS1) faculty members from institutions of higher education in the United States using the Teacher Perspectives Inventory. Instruction method used inside the classroom, categorized by ACM CS1 curriculum guidelines, was also captured along…

  4. Computer program calculates velocities and streamlines in turbomachines

    Science.gov (United States)

    Katsanis, T.

    1968-01-01

Computer program calculates the velocity distribution and streamlines over widely separated blades of turbomachines. It gives the solution of a two-dimensional, subsonic, compressible, nonviscous flow problem for a rotating or stationary circular cascade of blades on a blade-to-blade surface of revolution.

  5. GenOVa: a computer program to generate orientational variants

    OpenAIRE

    Cayron, Cyril

    2007-01-01

    A computer program called GenOVa, written in Python, calculates the orientational variants, the operators (special types of misorientations between variants) and the composition table associated with a groupoid structure. The variants can be represented by three-dimensional shapes or by pole figures.

  6. A Domain-Specific Programming Language for Secure Multiparty Computation

    DEFF Research Database (Denmark)

    Nielsen, Janus Dam; Schwartzbach, Michael Ignatieff

    2007-01-01

    We present a domain-specific programming language for Secure Multiparty Computation (SMC). Information is a resource of vital importance and considerable economic value to individuals, public administration, and private companies. This means that the confidentiality of information is crucial...... application development. The language is implemented in a prototype compiler that generates Java code exploiting a distributed cryptographic runtime....

  7. Computer Programming with Early Elementary Students with Down Syndrome

    Science.gov (United States)

    Taylor, Matthew S.; Vasquez, Eleazar; Donehower, Claire

    2017-01-01

    Students of all ages and abilities must be given the opportunity to learn academic skills that can shape future opportunities and careers. Researchers in the mid-1970s and 1980s began teaching young students the processes of computer programming using basic coding skills and limited technology. As technology became more personalized and easily…

  8. Individual Differences in Learning Computer Programming: A Social Cognitive Approach

    Science.gov (United States)

    Akar, Sacide Guzin Mazman; Altun, Arif

    2017-01-01

    The purpose of this study is to investigate and conceptualize the ranks of importance of social cognitive variables on university students' computer programming performances. Spatial ability, working memory, self-efficacy, gender, prior knowledge and the universities students attend were taken as variables to be analyzed. The study has been…

  9. Learning Computer Programming: Implementing a Fractal in a Turing Machine

    Science.gov (United States)

    Pereira, Hernane B. de B.; Zebende, Gilney F.; Moret, Marcelo A.

    2010-01-01

It is common to start a course on computer programming logic by teaching the algorithm concept from the point of view of natural languages, but in a schematic way. In this sense we note that students have difficulties in understanding and implementing the problems proposed by the teacher. The main idea of this paper is to show that the…

  10. P-Lingua: A Programming Language for Membrane Computing

    OpenAIRE

    Díaz Pernil, Daniel; Pérez Hurtado de Mendoza, Ignacio; Pérez Jiménez, Mario de Jesús; Riscos Núñez, Agustín

    2008-01-01

    Software development for cellular computing has already been addressed, yielding a first generation of applications. In this paper, we develop a new programming language: P-Lingua. Furthermore, we present a simulator for the class of recognizing P systems with active membranes. We illustrate it by giving a solution to the SAT problem as an example.

  11. Computer program aids dual reflector antenna system design

    Science.gov (United States)

    Firnett, P.; Gerritsen, R.; Jarvie, P.; Ludwig, A.

    1968-01-01

Computer program aids in the design of maximum-efficiency dual-reflector antenna systems. It designs a shaped Cassegrainian antenna with nearly 100 percent efficiency, accepting input parameters that specify an existing conventional antenna and producing as output the modifications necessary to conform to a shaped design.

  12. Qualification plan for the Genmod-PC computer program

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.; Wright, G.M.; Dunford, D.W.; Linauskas, S.H

    2002-07-01

Genmod-PC is an internal dosimetry code that runs under the Microsoft Windows operating system and currently calculates radionuclide doses and intakes for an adult male. This report provides a plan specifying quality assurance measures that conform to the recommendations of the Canadian Standards Association, as well as AECL procedural requirements for a legacy computer program developed at AECL. (author)

  13. BASIC Computer Scoring Program for the Leadership Scale for Sports.

    Science.gov (United States)

    Garland, Daniel J.

    This paper describes a computer scoring program, written in Commodore BASIC, that offers an efficient approach to the scoring of the Leadership Scale for Sports (LSS). The LSS measures: (1) the preferences of athletes for specific leader behaviors from the coach; (2) the perception of athletes regarding the actual leader behavior of their coach;…

  14. Computer program for calculating the daylight level in a room

    NARCIS (Netherlands)

    Jordaans, A.A.

    1984-01-01

    A computer program has been developed that calculates the total quantity of daylight provided to an arbitrary place in a room by direct incident daylight, by reflected daylight from opposite buildings and ground, and by interreflected daylight from walls, ceilings and floors. Input data include the

  15. A hybrid nonlinear programming method for design optimization

    Science.gov (United States)

    Rajan, S. D.

    1986-01-01

    Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
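The switching idea can be sketched with a toy rule (our own illustration, not the paper's rule set): run steepest descent on a numerical gradient, and switch to a derivative-free pattern search when descent stalls, mimicking a system that picks the technique suited to the current state of the problem.

```python
def num_grad(f, x, h=1e-6):
    """Central-difference gradient of f at x (x is a plain list of floats)."""
    g = []
    for i, xi in enumerate(x):
        xp = x[:i] + [xi + h] + x[i + 1:]
        xm = x[:i] + [xi - h] + x[i + 1:]
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hybrid_minimize(f, x, iters=200):
    """Steepest descent until it stalls, then a derivative-free pattern search."""
    step, method = 0.1, "descent"
    fx = f(x)
    for _ in range(iters):
        if method == "descent":
            cand = [xi - step * gi for xi, gi in zip(x, num_grad(f, x))]
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
            else:
                method = "pattern"       # rule: descent stalled, switch technique
        else:
            improved = False
            for i in range(len(x)):
                for s in (step, -step):
                    cand = x[:i] + [x[i] + s] + x[i + 1:]
                    fc = f(cand)
                    if fc < fx:
                        x, fx, improved = cand, fc, True
            if not improved:
                step *= 0.5              # shrink the pattern instead of giving up
    return x, fx

# Usage on a smooth test function with minimum at (3, -1):
x_best, f_best = hybrid_minimize(lambda v: (v[0] - 3) ** 2 + 2 * (v[1] + 1) ** 2,
                                 [0.0, 0.0])
```

A production NLP system would of course use far more sophisticated techniques and diagnostics; the sketch only shows the detect-and-switch control structure the abstract describes.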

  16. Introductory Computer Programming Course Teaching Improvement Using Immersion Language, Extreme Programming, and Education Theories

    Science.gov (United States)

    Velez-Rubio, Miguel

    2013-01-01

    Teaching computer programming to freshman students in Computer Science and other Information Technology areas has been identified as a complex activity. Different approaches have been studied in search of the one that could best help to improve this teaching process. A proposed approach was implemented which is based on language immersion…

  17. A computer program for simulating geohydrologic systems in three dimensions

    Science.gov (United States)

    Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.

    1980-01-01

    This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well-defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of a simulation, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memory with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volume of printout, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, a summary of the FLECS structured FORTRAN programming language, listings of the FLECS and

  18. Hybrid simulation of scatter intensity in industrial cone-beam computed tomography

    Science.gov (United States)

    Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.

    2009-01-01

    A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to address the three-dimensional imaging of automotive parts in short acquisition times. Because the probability of detecting scattered photons is high given the energy range and the detection area, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattered intensity distribution. The full acquisition chain, from the generation of a polyenergetic photon beam through its interaction with the scanned object to the energy deposit in the detector, is simulated. Object phantoms can be described spatially in the form of voxels, mathematical primitives or CAD models. Uncollided radiation is treated with a ray-tracing method, and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated with a forced detection method. The residual noisy signal is subsequently deconvolved with the iterative Richardson-Lucy method. Finally, the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms of varying size and object-to-detector distance, and found to be in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over standard MC on a single projection is approximately three orders of magnitude.
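
The Richardson-Lucy step mentioned in the abstract can be sketched in one dimension. This toy version (the signal, PSF, and iteration count are made up, not the paper's detector kernels) shows the multiplicative update:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """1D Richardson-Lucy deconvolution: multiplicative update
    u <- u * correlate(observed / convolve(u, psf), psf)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]  # correlation = convolution with the mirrored PSF
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Made-up two-spike signal blurred by a normalized 3-tap kernel
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

After a few dozen iterations the estimate re-concentrates the blurred counts near the original spike locations while approximately conserving total intensity.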

  19. Hybrid Numerical Solvers for Massively Parallel Eigenvalue Computation and Their Benchmark with Electronic Structure Calculations

    CERN Document Server

    Imachi, Hiroto

    2015-01-01

    Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong-scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 and up to 10^5 cores. The procedure of the GEP is decomposed into two subprocedures: the reducer to the standard eigenvalue problem (SEP) and the solver of the SEP. A hybrid solver is constructed when a routine is chosen for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. A detailed analysis of the results implies that the reducer can become a bottleneck on next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
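
The reducer/solver split described above can be illustrated with a serial sketch: the generalized problem A x = λ B x is reduced to a standard symmetric problem via a Cholesky factorization of B, then solved with a dense symmetric eigensolver. NumPy here merely stands in for ScaLAPACK/ELPA/EigenExa, and the random test matrices are illustrative only:

```python
import numpy as np

def generalized_eigh(A, B):
    """Solve A x = lambda B x for symmetric A and s.p.d. B.
    Reducer step: B = L L^T, C = L^{-1} A L^{-T}  (standard SEP, C y = lambda y)
    Solver step:  dense symmetric eigensolver on C
    Back-transform: x = L^{-T} y."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)         # fine for a sketch; triangular solves in practice
    C = Linv @ A @ Linv.T
    w, Y = np.linalg.eigh(C)
    X = Linv.T @ Y
    return w, X

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M + M.T                          # symmetric
N = rng.standard_normal((6, 6))
B = N @ N.T + 6.0 * np.eye(6)        # symmetric positive definite
w, X = generalized_eigh(A, B)
```

In the paper's setting each of the two steps is a separately chosen parallel routine, which is exactly what makes the reducer's scaling visible as a distinct bottleneck.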

  20. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    Science.gov (United States)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  1. Near-term hybrid vehicle program, phase 1. Appendix D: Sensitivity analysis report

    Science.gov (United States)

    1979-01-01

    Parametric analyses, using a hybrid vehicle synthesis and economics program (HYVELD) are described investigating the sensitivity of hybrid vehicle cost, fuel usage, utility, and marketability to changes in travel statistics, energy costs, vehicle lifetime and maintenance, owner use patterns, internal combustion engine (ICE) reference vehicle fuel economy, and drive-line component costs and type. The lowest initial cost of the hybrid vehicle would be $1200 to $1500 higher than that of the conventional vehicle. For nominal energy costs ($1.00/gal for gasoline and 4.2 cents/kWh for electricity), the ownership cost of the hybrid vehicle is projected to be 0.5 to 1.0 cents/mi less than the conventional ICE vehicle. To attain this ownership cost differential, the lifetime of the hybrid vehicle must be extended to 12 years and its maintenance cost reduced by 25 percent compared with the conventional vehicle. The ownership cost advantage of the hybrid vehicle increases rapidly as the price of fuel increases from $1 to $2/gal.

  2. Approximate dynamic programming recurrence relations for a hybrid optimal control problem

    Science.gov (United States)

    Lu, W.; Ferrari, S.; Fierro, R.; Wettergren, T. A.

    2012-06-01

    This paper presents a hybrid approximate dynamic programming (ADP) method for a hybrid dynamic system (HDS) optimal control problem, as arises in many complex unmanned systems that are implemented via a hybrid architecture to handle discrete robot modes and a complex environment. The HDS considered in this paper is characterized by a well-known three-layer hybrid framework, which includes a discrete event controller layer, a discrete-continuous interface layer, and a continuous state layer. The hybrid optimal control problem (HOCP) is to find the optimal discrete event decisions and the optimal continuous controls subject to a deterministic minimization of a scalar function of the system state and control over time. Due to the uncertainty of the environment and the complexity of the HOCP, the cost-to-go cannot be evaluated before the HDS explores the entire system state space; as a result, neither the continuous nor the discrete optimal control is available ahead of time. Therefore, ADP is adopted to learn the optimal control while the HDS explores the environment, exploiting the online nature of the ADP method. Furthermore, ADP can mitigate the curse of dimensionality that other optimization methods, such as dynamic programming (DP) and Markov decision processes (MDP), face due to the high dimensionality of the HOCP.
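
The cost-to-go recurrence that ADP approximates can be illustrated, for a purely discrete toy problem (costs and transitions invented for illustration; the paper's hybrid, online setting is far richer), by classical value iteration on V(s) = min_a [c(s,a) + V(next(s,a))]:

```python
# Deterministic chain: states 0..4, state 4 is the goal (cost-to-go 0).
# Actions move right by 1 (cost 1.0) or by 2 (cost 1.8); values are made up.
N_STATES = 5
ACTIONS = (1, 2)
COST = {1: 1.0, 2: 1.8}

def value_iteration(n_sweeps=50):
    """Sweep the Bellman recurrence until the cost-to-go table is fixed."""
    V = [0.0] * N_STATES
    for _ in range(n_sweeps):
        for s in range(N_STATES - 1):
            V[s] = min(COST[a] + V[min(s + a, N_STATES - 1)] for a in ACTIONS)
    return V

V = value_iteration()
```

ADP replaces the exact table V with a learned approximation updated from observed transitions, which is what makes the online, model-poor setting tractable.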

  3. Computer program to generate attitude error equations for a gimballed platform

    Science.gov (United States)

    Hall, W. A., Jr.; Morris, T. D.; Rone, K. Y.

    1972-01-01

    A computer program for solving the attitude error equations related to a gimballed platform is described. The program generates the matrix elements of the attitude error equations when the initial matrices and trigonometric identities have been defined. The program is written for the IBM 360 computer.

  4. Solving Segment Routing Problems with Hybrid Constraint Programming Techniques

    OpenAIRE

    Hartert, Renaud; Schaus, Pierre; Vissicchio, Stefano; Bonaventure, Olivier; International Conference on Principles and Practice of Constraint Programming (CP2014)

    2015-01-01

    Segment routing is an emerging network technology that exploits the existence of several paths between a source and a destination to spread the traffic in a simple and elegant way. The major commercial network vendors already support segment routing, and several Internet actors are ready to use segment routing in their networks. Unfortunately, by changing the way paths are computed, segment routing poses new optimization problems which cannot be addressed with previous research contributions...

  5. Exploring the Effects of Gender and Learning Styles on Computer Programming Performance: Implications for Programming Pedagogy

    Science.gov (United States)

    Lau, Wilfred W. F.; Yuen, Allan H. K.

    2009-01-01

    Computer programming has been taught in secondary schools for more than two decades. However, little is known about how students learn to program. From the curriculum implementation perspectives, learning style helps address the issue of learner differences, resulting in a shift from a teacher-centred approach to a learner-focused approach. This…

  6. PWR hybrid computer model for assessing the safety implications of control systems

    Energy Technology Data Exchange (ETDEWEB)

    Smith, O L; Renier, J P; Difilippo, F C; Clapp, N E; Sozer, A; Booth, R S; Craddick, W G; Morris, D G

    1986-03-01

    The ORNL study of safety-related aspects of nuclear power plant control systems consists of two interrelated tasks: (1) failure mode and effects analysis (FMEA) that identified single and multiple component failures that might lead to significant plant upsets and (2) computer models that used these failures as initial conditions and traced the dynamic impact on the control system and remainder of the plant. This report describes the simulation of Oconee Unit 1, the first plant analyzed. A first-principles, best-estimate model was developed and implemented on a hybrid computer consisting of AD-4 analog and PDP-10 digital machines. Controls were placed primarily on the analog to use its interactive capability to simulate operator action. 48 refs., 138 figs., 15 tabs.

  7. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
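
As a rough serial illustration of the Jacobi-rotation kernel mentioned above (the MNDO-family pseudodiagonalization differs in detail, and the GPU offload is not reproduced here), a classical cyclic Jacobi sweep zeroes off-diagonal pairs with 2x2 rotations:

```python
import numpy as np

def jacobi_eigen(A, sweeps=15):
    """Cyclic Jacobi diagonalization of a symmetric matrix: for each pair
    (p, q), rotate by theta with tan(2*theta) = 2*A[p,q] / (A[q,q] - A[p,p])
    so that the (p, q) element becomes zero. Dense and serial; a sketch of
    the rotation kernel only."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J      # similarity transform preserves eigenvalues
                V = V @ J            # accumulate eigenvectors
    return np.diag(A), V

rng = np.random.default_rng(42)
S = rng.standard_normal((5, 5))
S = S + S.T
evals, evecs = jacobi_eigen(S)
```

Each rotation touches only two rows and two columns, which is why sequences of such transformations map well onto GPU kernels.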

  8. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    Science.gov (United States)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  9. The UF family of reference hybrid phantoms for computational radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Choonsik [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, MD 20852 (United States); Lodwick, Daniel; Hurtado, Jorge; Pafundi, Deanna [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Williams, Jonathan L [Department of Radiology, University of Florida, Gainesville, FL 32611 (United States); Bolch, Wesley E [Departments of Nuclear and Radiological and Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)], E-mail: wbolch@ufl.edu

    2010-01-21

    Computational human phantoms are computer models used to obtain dose distributions within the human body exposed to internal or external radiation sources. In addition, they are increasingly used to develop detector efficiencies for in vivo whole-body counters. Two classes of computational human phantoms have been widely utilized for dosimetry calculation: stylized and voxel phantoms, which describe human anatomy through mathematical surface equations and 3D voxel matrices, respectively. Stylized phantoms are flexible in that changes to organ position and shape are possible given avoidance of region overlap, while voxel phantoms are typically fixed to a given patient anatomy, yet can be proportionally scaled to match individuals of larger or smaller stature, but of equivalent organ anatomy. Voxel phantoms provide much better anatomical realism as compared to stylized phantoms, which are intrinsically limited by mathematical surface equations. To address the drawbacks of these phantoms, hybrid phantoms based on non-uniform rational B-spline (NURBS) surfaces have been introduced wherein anthropomorphic flexibility and anatomic realism are both preserved. Researchers at the University of Florida have introduced a series of hybrid phantoms representing the ICRP Publication 89 reference newborn, 15 year, and adult male and female. In this study, six additional phantoms are added to the UF family of hybrid phantoms: those of the reference 1 year, 5 year and 10 year child. Head and torso CT images of patients whose ages were close to the targeted ages were obtained under approved protocols. Major organs and tissues were segmented from these images using the image processing software 3D-DOCTOR(TM). NURBS and polygon mesh surfaces were then used to model individual organs and tissues after importing the segmented organ models into the 3D NURBS modeling software Rhinoceros(TM). The phantoms were matched to four reference datasets: (1) standard anthropometric data, (2) reference

  10. BF-PSO-TS: Hybrid Heuristic Algorithms for Optimizing Task Schedulingon Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Hussin M. Alkhashai

    2016-06-01

    Full Text Available Task scheduling is a major problem in Cloud computing because the cloud provider has to serve many users, and a good scheduling algorithm helps in the proper and efficient utilization of the resources. Task scheduling is therefore considered one of the major issues in Cloud computing systems. The objective of this paper is to assign tasks to multiple computing resources so that the total cost of execution is minimized and load is shared between these computing resources. Two hybrid algorithms based on Particle Swarm Optimization (PSO) have been introduced to schedule the tasks: Best-Fit-PSO (BFPSO) and PSO-Tabu Search (PSOTS). In the BFPSO algorithm, the Best-Fit (BF) algorithm is merged into PSO to improve performance; the main principle of the modified BFPSO algorithm is that BF generates the initial population of the standard PSO algorithm instead of it being initiated randomly. In the proposed PSOTS algorithm, Tabu Search (TS) is used to improve the local search by avoiding the trap of local optimality that can occur with the standard PSO algorithm. The two proposed algorithms (i.e., BFPSO and PSOTS) have been implemented using CloudSim and evaluated against the standard PSO algorithm on five problems with different numbers of independent tasks and resources. The performance parameters considered are execution time (makespan), cost, and resource utilization. The implementation results prove that the proposed hybrid algorithms (i.e., BFPSO and PSOTS) outperform the standard PSO algorithm.
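
The BFPSO idea, seeding a PSO swarm with a Best-Fit solution, might be sketched as follows. The task lengths, PSO coefficients, and the continuous-to-assignment decoding are illustrative assumptions, not the paper's exact formulation:

```python
import random

def makespan(assign, task_len, n_machines):
    loads = [0.0] * n_machines
    for t, m in enumerate(assign):
        loads[m] += task_len[t]
    return max(loads)

def best_fit(task_len, n_machines):
    """Greedy seed: each task goes to the currently least-loaded machine."""
    loads, assign = [0.0] * n_machines, []
    for ln in task_len:
        m = loads.index(min(loads))
        assign.append(m)
        loads[m] += ln
    return assign

def bfpso(task_len, n_machines, n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    dim = len(task_len)
    decode = lambda pos: [min(n_machines - 1, max(0, int(x))) for x in pos]
    fit = lambda pos: makespan(decode(pos), task_len, n_machines)
    # One particle seeded by Best-Fit, the rest random (the BFPSO idea)
    swarm = [[m + 0.5 for m in best_fit(task_len, n_machines)]]
    swarm += [[rng.uniform(0, n_machines) for _ in range(dim)]
              for _ in range(n_particles - 1)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=fit)
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fit(p) < fit(pbest[i]):
                pbest[i] = p[:]
                if fit(p) < fit(gbest):
                    gbest = p[:]
    return decode(gbest), fit(gbest)

tasks = [4.0, 3.0, 3.0, 2.0, 2.0, 2.0]
assignment, ms = bfpso(tasks, n_machines=2)
```

Because the global best never worsens, seeding guarantees the swarm starts at least as good as the greedy schedule.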

  11. A simplified computational fluid-dynamic approach to the oxidizer injector design in hybrid rockets

    Science.gov (United States)

    Di Martino, Giuseppe D.; Malgieri, Paolo; Carmicino, Carmine; Savino, Raffaele

    2016-12-01

    Fuel regression rate in hybrid rockets is non-negligibly affected by the oxidizer injection pattern. In this paper a simplified computational approach, developed in an attempt to optimize the oxidizer injector design, is discussed. Numerical simulations of the thermo-fluid-dynamic field in a hybrid rocket are carried out with a commercial solver to investigate several injection configurations, with the aim of increasing the fuel regression rate and minimizing the consumption unevenness, while still favoring the establishment of flow recirculation at the motor head end; such recirculation is generated with an axial nozzle injector and has been demonstrated to promote combustion stability as well as higher efficiency and regression rate. All the computations have been performed on the configuration of a lab-scale hybrid rocket motor available at the propulsion laboratory of the University of Naples with typical operating conditions. After a preliminary comparison between the two baseline limiting cases of an axial subsonic nozzle injector and uniform injection through the prechamber, a parametric analysis has been carried out by varying the oxidizer jet flow divergence angle, as well as the grain port diameter and the oxidizer mass flux, to study the effect of the flow divergence on the heat transfer distribution over the fuel surface. Some experimental firing test data are presented, and, under the hypothesis that fuel regression rate and surface heat flux are proportional, the measured fuel consumption axial profiles are compared with the predicted surface heat flux, showing fairly good agreement, which allowed validation of the employed design approach. Finally an optimized injector design is proposed.

  12. Environmental Assessment of the US Department of Energy Electric and Hybrid Vehicle Program

    Energy Technology Data Exchange (ETDEWEB)

    Singh, M.K.; Bernard, M.J. III; Walsh, R.F

    1980-11-01

    This environmental assessment (EA) focuses on the long-term (1985-2000) impacts of the US Department of Energy (DOE) electric and hybrid vehicle (EHV) program. This program has been designed to accelerate the development of EHVs and to demonstrate their commercial feasibility as required by the Electric and Hybrid Vehicle Research, Development and Demonstration Act of 1976 (P.L. 94-413), as amended (P.L. 95-238). The overall goal of the program is the commercialization of: (1) electric vehicles (EVs) acceptable to broad segments of the personal and commercial vehicle markets, (2) hybrid vehicles (HVs) with range capabilities comparable to those of conventional vehicles (CVs), and (3) advanced EHVs completely competitive with CVs with respect to both cost and performance. Five major EHV projects have been established by DOE: market demonstration, vehicle evaluation and improvement, electric vehicle commercialization, hybrid vehicle commercialization, and advanced vehicle development. Conclusions are made as to the effects of EV and HV commercialization on the: consumption and importation of raw materials; petroleum and total energy consumption; ecosystems impact from the time of obtaining raw material through vehicle use and materials recycling; environmental impacts on air and water quality, land use, and noise; health and safety aspects; and socio-economic factors. (LCL)

  13. Computer Programming Games and Gender Oriented Cultural Forms

    Science.gov (United States)

    AlSulaiman, Sarah Abdulmalik

    I present the design and evaluation of two games designed to help elementary and middle school students learn computer programming concepts. The first game was designed to be "gender neutral", aligning with what might be described as a consensus opinion on best practices for computational learning environments. The second game, based on the cultural form of dress-up dolls, was deliberately designed to appeal to females. I recruited 70 participants in an international two-phase study to investigate the relationship between games, gender, attitudes towards computer programming, and learning. My findings suggest that while the two games were equally effective in terms of learning outcomes, I saw differences in motivation between players of the two games. Specifically, participants who reported a preference for female-oriented games were more motivated to learn about computer programming when they played a game that they perceived as designed for females. In addition, I describe how the two games seemed to encourage different types of social activity between players in a classroom setting. Based on these results, I reflect on the strategy of exclusively designing games and activities as "gender neutral", and suggest that employing cultural forms, including gendered ones, may help create a more productive experience for learners.

  14. Solutions manual and computer programs for physical and computational aspects of convective heat transfer

    CERN Document Server

    Cebeci, Tuncer

    1989-01-01

    This book is designed to accompany Physical and Computational Aspects of Convective Heat Transfer by T. Cebeci and P. Bradshaw and contains solutions to the exercises and computer programs for the numerical methods contained in that book. Physical and Computational Aspects of Convective Heat Transfer begins with a thorough discussion of the physical aspects of convective heat transfer and presents in some detail the partial differential equations governing the transport of thermal energy in various types of flows. The book is intended for senior undergraduate and graduate students of aeronautical, chemical, civil and mechanical engineering. It can also serve as a reference for the practitioner.

  15. The study of hybrid model identification, computation analysis and fault location for nonlinear dynamic circuits and systems

    Institute of Scientific and Technical Information of China (English)

    XIE Hong; HE Yi-gang; ZENG Guan-da

    2006-01-01

    This paper presents hybrid model identification for a class of nonlinear circuits and systems via a combination of the block-pulse function transform with the Volterra series. After discussing the method to establish the hybrid model and introducing hybrid model identification, a set of formulas is derived for calculating the hybrid model and computing the Volterra series solution of nonlinear dynamic circuits and systems. In order to significantly reduce the computation cost of fault location, the paper presents a new fault diagnosis method based on multiple preset models that can be realized online. An example of identification simulation and fault diagnosis is given. Results show that the method has high accuracy and efficiency for fault location of nonlinear dynamic circuits and systems.

  16. Solving a multi-objective location routing problem for infectious waste disposal using hybrid goal programming and hybrid genetic algorithm

    Directory of Open Access Journals (Sweden)

    Narong Wichapa

    2018-01-01

    Full Text Available Infectious waste disposal remains one of the most serious problems in the medical, social and environmental domains of almost every country. Selecting suitable new locations and finding the optimal set of transport routes for a fleet of vehicles to transport infectious waste material (the location routing problem for infectious waste disposal) is one of the major problems in hazardous waste management. Determining locations for infectious waste disposal is a difficult and complex process, because it requires combining both intangible and tangible factors and depends on several criteria and various regulations. This facility location problem is complicated and cannot be addressed using any stand-alone technique. Based on a case study of 107 hospitals and 6 candidate municipalities in upper-northeastern Thailand, we considered infrastructure, geological, and social and environmental criteria, evaluating global priority weights using the fuzzy analytic hierarchy process (fuzzy AHP). A new multi-objective facility location model which hybridizes fuzzy AHP and goal programming (GP), namely the HGP model, was then tested. Finally, the vehicle routing problem (VRP) for the case study was formulated and tested using a hybrid genetic algorithm (HGA) which hybridizes the push-forward insertion heuristic (PFIH), a genetic algorithm (GA) and three local searches: 2-opt, insertion-move and interexchange-move. The results show that HGP and HGA can select suitable new locations and find the optimal set of transport routes for vehicles delivering infectious waste material. The novelty of the proposed methodologies is that HGP simultaneously combines cost factors with relevant factors that are difficult to interpret in order to determine suitable new locations, and HGA can be applied to determine transport routes that require a minimum number of vehicles
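
One of the local searches combined in the HGA above, 2-opt, is simple to sketch: reverse a segment of the route whenever doing so shortens it. The four-city distance matrix below is a made-up example, not data from the case study:

```python
import math

def route_length(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def two_opt(route, dist):
    """Repeatedly reverse the segment between positions i and j whenever
    that strictly shortens the route; stop at a local optimum."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, dist) < route_length(best, dist) - 1e-12:
                    best, improved = cand, True
    return best

# Toy usage: untangle a crossed tour of a unit square
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in pts] for (px, py) in pts]
improved_route = two_opt([0, 2, 1, 3, 0], dist)
```

The crossed tour 0-2-1-3-0 has length 2 + 2*sqrt(2); one segment reversal yields the perimeter tour of length 4.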

  17. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  18. Electric/Hybrid Vehicle Simulation

    Science.gov (United States)

    Slusser, R. A.; Chapman, C. P.; Brennand, J. P.

    1985-01-01

    The ELVEC computer program provides the vehicle designer with a simulation tool for detailed studies of electric and hybrid vehicle performance and cost. ELVEC simulates the performance of a user-specified electric or hybrid vehicle under a user-specified driving schedule profile or operating schedule. ELVEC performs vehicle design and life-cycle cost analysis.

  19. A hybrid multi-scale computational scheme for advection-diffusion-reaction equation

    Science.gov (United States)

    Karimi, S.; Nakshatrala, K. B.

    2016-12-01

    Simulation of transport and reaction processes in porous media and subsurface science has become more vital than ever. Over the past few decades, a variety of mathematical models and numerical methodologies for porous media simulations have been developed. As the demand for higher accuracy and validity of the models grows, the issue of disparate temporal and spatial scales becomes more problematic. The variety of reaction processes and the complexity of pore geometry pose a huge computational burden in real-world or reservoir-scale simulation, while methods based on averaging or upscaling techniques do not provide reliable estimates of pore-scale processes. To overcome this problem, the development of hybrid, multi-scale computational techniques is considered a promising approach. In these methods, pore-scale and continuum-scale models are combined; hence, a more reliable estimate of pore-scale processes is obtained without the tremendous computational overhead of pore-scale methods. In this presentation, we propose a computational framework that couples the lattice Boltzmann method (for pore-scale simulation) with the finite element method (for continuum-scale simulation) for advection-diffusion-reaction equations. To capture events that are disparate in time and length scales, non-matching grids and time steps are allowed. Apart from the application of this method to benchmark problems, multi-scale simulation of chemical reactions in porous media is also showcased.
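
A continuum-scale piece of such a framework, an explicit finite-difference step for a 1D advection-diffusion-reaction equation, can be sketched as follows. Periodic boundaries and all parameter values are illustrative assumptions, and the lattice Boltzmann coupling is not shown:

```python
import numpy as np

def adr_step(c, v, D, k, dx, dt):
    """One explicit step of dc/dt = -v dc/dx + D d2c/dx2 - k c on a periodic
    1D grid: first-order upwind advection (v > 0), central diffusion,
    linear decay."""
    adv = -v * (c - np.roll(c, 1)) / dx
    diff = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + diff - k * c)

# Illustrative parameters chosen to satisfy the explicit stability limits
# (v*dt/dx = 0.05, D*dt/dx^2 = 0.05, k*dt = 0.001)
nx, dx, dt = 100, 1.0, 0.1
c = np.zeros(nx)
c[10] = 1.0                      # initial solute pulse
for _ in range(200):
    c = adr_step(c, v=0.5, D=0.5, k=0.01, dx=dx, dt=dt)
```

Advection and diffusion are exactly conservative on the periodic grid, so total mass decays only through the reaction term, by the factor (1 - k*dt) per step.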

  20. Modeling the complete Otto cycle: Preliminary version. [computer programming

    Science.gov (United States)

    Zeleznik, F. J.; Mcbride, B. J.

    1977-01-01

    A description is given of the equations and the computer program being developed to model the complete Otto cycle. The program incorporates such important features as: (1) heat transfer, (2) finite combustion rates, (3) complete chemical kinetics in the burned gas, (4) exhaust gas recirculation, and (5) manifold vacuum or supercharging. Changes in thermodynamic, kinetic and transport data as well as model parameters can be made without reprogramming. Preliminary calculations indicate that: (1) chemistry and heat transfer significantly affect composition and performance, (2) there seems to be a strong interaction among model parameters, and (3) a number of cycles must be calculated in order to obtain steady-state conditions.

  1. A computer program for the estimation of time of death

    DEFF Research Database (Denmark)

    Lynnerup, N

    1993-01-01

    and that the temperature at death is known. Also, Marshall and Hoare's formula expresses the temperature as a function of time, and not vice versa, the latter being the problem most often encountered by forensic scientists. A simple BASIC program that enables solving of Marshall and Hoare's equation for the postmortem...... cooling of bodies is presented. It is proposed that by having a computer program that solves the equation, giving the length of the cooling period in response to a certain rectal temperature, and which allows easy comparison of multiple solutions, the uncertainties related to ambience temperature...
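    The inversion problem described above — a model giving temperature as a function of time, solved for time — can be sketched in a few lines. The version below uses Henssge's common parameterization of the Marshall and Hoare double-exponential model (ambient temperatures up to about 23 °C) and bisection, since the cooling curve is monotone; the constants are the published nomogram values and the sketch is for illustration only, not forensic use.

```python
import math

def time_since_death(t_rectal, t_ambient, body_mass_kg, t0=37.2):
    """Solve the Marshall-Hoare cooling model for time by bisection.
    Q(t) = (Tr - Ta)/(T0 - Ta) = 1.25*exp(B*t) - 0.25*exp(5*B*t),
    with Henssge's B = -1.2815*m**-0.625 + 0.0284 (ambient <= 23 C).
    Illustrative sketch; not the BASIC program described in the record."""
    b = -1.2815 * body_mass_kg ** -0.625 + 0.0284
    q_target = (t_rectal - t_ambient) / (t0 - t_ambient)
    q = lambda t: 1.25 * math.exp(b * t) - 0.25 * math.exp(5.0 * b * t)
    lo, hi = 0.0, 200.0  # hours; Q(t) decreases monotonically from 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q(mid) > q_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the solver is just a root-finder on the cooling curve, comparing multiple solutions under different ambient-temperature assumptions — the point made in the abstract — amounts to calling it repeatedly with varied inputs.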

  2. Computer program for equilibrium calculation and diffusion simulation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A computer program called TKCALC (thermodynamic and kinetic calculation) has been successfully developed for the purpose of phase equilibrium calculation and diffusion simulation in ternary substitutional alloy systems. The program was subsequently applied to calculate the isothermal sections of the Fe-Cr-Ni system and predict the concentration profiles of two γ/γ single-phase diffusion couples in the Ni-Cr-Al system. The results are in excellent agreement with the THERMO-CALC and DICTRA software packages. Detailed mathematical derivation of some important formulae involved is also elaborated.

  3. An Analysis on Distance Education Computer Programming Students’ Attitudes Regarding Programming and Their Self-Efficacy For Programming

    Directory of Open Access Journals (Sweden)

    Ozcan OZYURT

    2015-04-01

    Full Text Available This study aims to analyze the attitudes of students studying computer programming through distance education regarding programming, their self-efficacy for programming, and the relation between these two factors. The study was conducted with 104 students taught through distance education at a university in the northern region of Turkey in the spring semester of the 2013-2014 academic year. The Attitude Scale toward Computer Programming (AStCP) and the Computer Programming Self-Efficacy Inventory (CPSEI) were used as data collection tools. The study was conducted within the descriptive survey model. The data collected during the study were analyzed with the Mann-Whitney U test, independent t-test, and Pearson correlation coefficient to answer the research questions. According to the results of the study, the attitudes of the students regarding programming are generally positive and their self-efficacy for programming is at a high level. There is a statistically significant difference in the attitudes of students regarding programming according to their gender and grade level. Likewise, their self-efficacy differs significantly by these two variables. Finally, it is concluded that there is a moderate positive relation between the attitudes of the students regarding programming and their self-efficacy for programming.

  4. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  5. Development of a Comprehensive Computer Program for Predicting Farm Energy

    Directory of Open Access Journals (Sweden)

    S. A. Al-Hamed

    2010-01-01

    Full Text Available Problem statement: The agricultural industry is a major user of energy. Energy is used directly for operating agricultural machinery and equipment on the farm and indirectly in the manufacturing of fertilizers and pesticides and the processing of agricultural products off the farm. In order to reduce the cost of agricultural production, energy uses on the farm must be identified and optimized using modern tools. Approach: A comprehensive and easy-to-use computer program was developed for the purpose of determining farm energy requirements with the aim of reducing costs and maximizing profit. The program includes a main database composed of nine sub-databases: tractors, agricultural machinery, pumps, stationary engines, planting dates, soil, operating variables of farm operations, draft and power equations, and water requirements. The program was designed in the Visual C++ language. Results: The program was tested by comparing its results with manually calculated results. The results showed that the program worked properly. The developed program was also illustrated using an example farm to show the different stages of determining the required farm energy. Conclusion: The program can be used to determine farm energy requirements, to assess the current status of farms in terms of energy use efficiency, for future planning of modern farms, and as an educational tool. It has many advantages, including ease of use when dealing with input through interactive windows; ease of addition, deletion, or updating of sub-databases; ease of exploring the program windows; and the potential for further development of any part of the program. The program is unique as it includes all the information in a database and has multidimensional uses, including evaluation of an existing system and selecting new machinery based on an optimum

  6. Finite State Tables for general computer programming applications

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, M.

    1988-01-01

    The Finite State Table is a computer programming technique which offers a faster and more compact alternative to traditional logical control structures such as the IF-THEN-ELSE statement. A basic description of this technique is presented. The application example is the creation of plot output from engineering analysis and design models generated by I-DEAS, a commercial software package used for solid modeling, finite element analysis, design and drafting.
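    The technique can be sketched in a few lines: control flow is reduced to a lookup in a (state, input-class) table, with one table entry replacing each branch of an IF-THEN-ELSE chain. The tiny token scanner below is an illustrative example, not code from the report.

```python
# Finite State Table sketch: a small integer/identifier scanner driven by a
# transition table rather than nested IF-THEN-ELSE chains. The states and
# character classes are illustrative assumptions.

START, NUM, IDENT, DONE = "start", "num", "ident", "done"

def char_class(c):
    if c.isdigit():
        return "digit"
    if c.isalpha():
        return "alpha"
    return "other"

# (current state, input class) -> next state
TABLE = {
    (START, "digit"): NUM,   (START, "alpha"): IDENT, (START, "other"): DONE,
    (NUM,   "digit"): NUM,   (NUM,   "alpha"): DONE,  (NUM,   "other"): DONE,
    (IDENT, "digit"): IDENT, (IDENT, "alpha"): IDENT, (IDENT, "other"): DONE,
}

def classify(token):
    """Return 'num', 'ident', or 'done' (error) for a whole token."""
    state = START
    for c in token:
        state = TABLE[(state, char_class(c))]
        if state == DONE:
            break
    return state
```

Adding a new token type means adding table rows rather than restructuring branch logic, which is the compactness argument the report makes.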

  7. Computationally intensive econometrics using a distributed matrix-programming language.

    Science.gov (United States)

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.
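    The paper's emphasis on explicit parallelization with deterministic results can be illustrated outside Ox. In the hedged Python sketch below, each task owns a fixed-seed random stream, so a Monte Carlo estimate is reproducible regardless of how the scheduler interleaves tasks; the task decomposition is a generic stand-in for the Ox-level constructs.

```python
import concurrent.futures
import random

def parallel_mc_pi(n_tasks=8, samples_per_task=10000, base_seed=42):
    """Explicitly parallel Monte Carlo estimate of pi. Each task seeds its
    own generator from (base_seed + task index), so the result is
    deterministic under any scheduling order -- the 'deterministic
    computing' requirement discussed in the paper. Illustrative sketch."""
    def task(i):
        rng = random.Random(base_seed + i)   # per-task stream, no shared state
        return sum(1 for _ in range(samples_per_task)
                   if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        total_hits = sum(pool.map(task, range(n_tasks)))
    return 4.0 * total_hits / (n_tasks * samples_per_task)
```

Running the function twice with the same arguments returns bit-identical results, which is exactly what scheduler-dependent shared-stream sampling would not guarantee.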

  8. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    OpenAIRE

    Williams, Samuel; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Irvine, CA

    2009-01-01

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to con...

  9. My program is ok - am I? Computing freshmen's experiences of doing programming assignments

    Science.gov (United States)

    Kinnunen, Päivi; Simon, Beth

    2012-03-01

    This article provides insight into how computing majors experience the process of doing programming assignments in their first programming course. This grounded theory study sheds light on the various processes and contexts through which students constantly assess their self-efficacy as a programmer. The data consists of a series of four interviews conducted with a purposeful sample of nine computer science majors in a research intensive state university in the United States. Use of the constant comparative method elicited two forms of results. First, we identified six stages of doing a programming assignment. Analysis captures the dimensional variation in students' experiences with programming assignments on a detailed level. We identified a core category resulting from students' reflected emotions in conjunction with self-efficacy assessment. We provide a descriptive model of how computer science majors build their self-efficacy perceptions, reported via four narratives. Our key findings are that some students reflect negative views of their efficacy, even after having a positive programming experience and that in other situations, students having negative programming experiences still have a positive outlook on their efficacy. We consider these findings in light of possible languages and support structures for introductory programming courses.

  10. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

    In this paper, a characteristic of the parallel algorithm is presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, i.e., the speedup ratio. In addition, the time-consuming pattern of major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of matrix shape is evaluated, as is the effect of the preconditioner for the matrix solver. Finally, the hybrid (OpenMP+MPI) parallel algorithm is introduced and discussed in detail for the pressure solver. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate information between neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effects of matrix size and preconditioning were also investigated. The fine-mesh calculation shows better scalability than the coarse mesh, because the coarse mesh does not contain enough cells to justify decomposing the computational domain further. The fine mesh can present good scalability when the geometry is divided with the ratio between computation and communication time taken into account. For a given mesh, single-phase flow
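    The CSR (compressed sparse row) storage mentioned above keeps only the nonzero values, their column indices, and one row-pointer array per matrix. A minimal, illustrative sketch (not CUPID code):

```python
def to_csr(dense):
    """Compress a dense matrix into CSR: (values, column indices, row
    pointers). Row r's nonzeros occupy vals[ptrs[r]:ptrs[r+1]]."""
    vals, cols, ptrs = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                vals.append(v)
                cols.append(j)
        ptrs.append(len(vals))
    return vals, cols, ptrs

def csr_matvec(vals, cols, ptrs, x):
    """y = A @ x for a CSR matrix; the kernel at the heart of the
    Krylov-type pressure solves discussed in the abstract."""
    y = []
    for r in range(len(ptrs) - 1):
        acc = 0.0
        for k in range(ptrs[r], ptrs[r + 1]):
            acc += vals[k] * x[cols[k]]
        y.append(acc)
    return y
```

CSR stores the asymmetric two-phase pressure matrix just as easily as the symmetric single-phase one, since no symmetry is assumed by the format.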

  11. Detecting awareness in patients with disorders of consciousness using a hybrid brain-computer interface

    Science.gov (United States)

    Pan, Jiahui; Xie, Qiuyou; He, Yanbin; Wang, Fei; Di, Haibo; Laureys, Steven; Yu, Ronghao; Li, Yuanqing

    2014-10-01

    Objective. The bedside detection of potential awareness in patients with disorders of consciousness (DOC) currently relies only on behavioral observations and tests; however, the misdiagnosis rates in this patient group are historically relatively high. In this study, we proposed a visual hybrid brain-computer interface (BCI) combining P300 and steady-state visual evoked potential (SSVEP) responses to detect awareness in severely brain-injured patients. Approach. Four healthy subjects, seven DOC patients who were in a vegetative state (VS, n = 4) or minimally conscious state (MCS, n = 3), and one locked-in syndrome (LIS) patient attempted a command-following experiment. In each experimental trial, two photos were presented to each patient; one was the patient's own photo, and the other was unfamiliar. The patients were instructed to focus on their own or the unfamiliar photo. The BCI system determined which photo the patient focused on using both P300 and SSVEP detection. Main results. The four healthy subjects, one of the 4 VS patients, one of the 3 MCS patients, and the LIS patient were able to selectively attend to their own or the unfamiliar photo (classification accuracy, 66-100%). Two additional patients (one VS and one MCS) failed to attend to the unfamiliar photo (50-52%) but achieved significant accuracies for their own photo (64-68%). All other patients failed to show any significant response to commands (46-55%). Significance. Through the hybrid BCI system, command following was detected in four healthy subjects, two of the 7 DOC patients, and one LIS patient. We suggest that the hybrid BCI system could be used as a supportive bedside tool to detect awareness in patients with DOC.

  12. Comparative Study of Dynamic Programming and Pontryagin’s Minimum Principle on Energy Management for a Parallel Hybrid Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Huei Peng

    2013-04-01

    Full Text Available This paper compares two optimal energy management methods for parallel hybrid electric vehicles using an Automatic Manual Transmission (AMT). A control-oriented model of the powertrain and vehicle dynamics is built first. The energy management is formulated as a typical optimal control problem to trade off fuel consumption and gear shifting frequency under admissible constraints. Dynamic Programming (DP) and Pontryagin's Minimum Principle (PMP) are applied to obtain the optimal solutions. With appropriately tuned co-states, the PMP solution is found to be very close to that from DP. The solution for the gear shifting in PMP has an algebraic expression associated with the vehicular velocity and can be implemented more efficiently in the control algorithm. The computation time of PMP is significantly less than that of DP.
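    The DP side of such a comparison can be sketched as a backward recursion over a discretized battery state of charge. The toy model below (quadratic fuel cost, unit-step SOC grid, charge-sustaining terminal constraint) is an illustrative assumption, not the paper's powertrain model.

```python
def dp_power_split(demand, soc_levels, soc0, fuel=lambda p: p * p):
    """Minimum total fuel to meet each step's power demand with an engine
    (cost fuel(p)) plus a battery whose state of charge moves on
    `soc_levels`, ending back at soc0 (charge-sustaining). Backward DP."""
    inf = float("inf")
    # terminal cost-to-go: feasible only at the initial state of charge
    cost = [0.0 if s == soc0 else inf for s in soc_levels]
    for t in reversed(range(len(demand))):
        new_cost = []
        for s in soc_levels:
            best = inf
            for j, s_next in enumerate(soc_levels):
                engine = demand[t] - (s - s_next)   # engine covers the rest
                if engine >= 0.0 and cost[j] < inf:
                    best = min(best, fuel(engine) + cost[j])
            new_cost.append(best)
        cost = new_cost
    return cost[soc_levels.index(soc0)]
```

With a convex fuel cost the optimum levels the engine load across steps, which is the same behavior the tuned co-states reproduce in the PMP solution.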

  13. A Rural South African Experience of an ESL Computer Program

    Directory of Open Access Journals (Sweden)

    Marius Dieperink

    2008-12-01

    Full Text Available This article reports on a case study that explored the effect of an English-as-a-Second-Language (ESL) computer program at Tshwane University of Technology (TUT), South Africa. The case study explored participants' perceptions, attitudes, and beliefs regarding the ESL reading enhancement program Reading Excellence™. The study found that participants experienced the program in a positive light: they experienced improved ESL reading as well as listening and writing proficiency. In addition, they experienced improved affective well-being in the sense that they generally felt more comfortable using ESL, including feeling more self-confident in their experience of their academic environment. Interviews as well as document review revealed dissonance, however: the data pointed towards poor class attendance as well as a perturbing lack of progress in terms of reading comprehension and speed.

  14. Towards a Serious Game to Help Students Learn Computer Programming

    Directory of Open Access Journals (Sweden)

    Mathieu Muratet

    2009-01-01

    Full Text Available Video games are part of our culture like TV, movies, and books. We believe that this kind of software can be used to increase students' interest in computer science. Video games with goals other than entertainment, called serious games, are present today in several fields such as education, government, health, defence, industry, civil security, and science. This paper presents a study of a serious game dedicated to strengthening programming skills. Real-Time Strategy, a popular game genre, seems to be the most suitable kind of game to support such a serious game. Starting from the features of programming teaching and the characteristics of video games, we define a teaching organisation to test whether a serious game can be adapted to learning programming.

  15. Optimization of Turning Operations by Using a Hybrid Genetic Algorithm with Sequential Quadratic Programming

    Directory of Open Access Journals (Sweden)

    A. Belloufi*

    2013-01-01

    Full Text Available The determination of optimal cutting parameters is one of the most important elements in any process planning of metal parts. In this paper, a new hybrid genetic algorithm using sequential quadratic programming is applied to the optimization of cutting conditions. It is used for the resolution of a multipass turning optimization case by minimizing the production cost under a set of machining constraints. The genetic algorithm (GA) is the main optimizer of this algorithm, whereas SQP is used to fine-tune the results obtained from the GA. Furthermore, the convergence characteristics and robustness of the proposed method have been explored through comparisons with results reported in the literature. The obtained results indicate that the proposed hybrid genetic algorithm using sequential quadratic programming is effective compared with other techniques developed by different researchers.
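    The two-stage structure (global genetic search, then local refinement) can be sketched with standard-library Python. Below, a plain GA explores the search space and a shrinking compass (pattern) search polishes the best individual; the compass search is a derivative-free stand-in for the SQP stage, and all operators and parameters are illustrative, not those of the paper.

```python
import random

def hybrid_ga_local(f, lo, hi, pop_size=30, gens=40, seed=1):
    """Stage 1: genetic algorithm (tournament selection, blend crossover,
    Gaussian mutation) on [lo, hi]. Stage 2: shrinking compass search as a
    derivative-free stand-in for the SQP local refinement."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=f)      # tournament winner
            b = min(rng.sample(pop, 3), key=f)      # second tournament winner
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.05 * (hi - lo))
            nxt.append(min(max(child, lo), hi))     # clamp to the bounds
        pop = nxt
    best = min(pop, key=f)
    step = 0.25 * (hi - lo)
    while step > 1e-9:                               # local polish
        moved = True
        while moved:
            moved = False
            for cand in (best - step, best + step):
                if lo <= cand <= hi and f(cand) < f(best):
                    best, moved = cand, True
        step *= 0.5
    return best
```

The division of labor mirrors the paper's: the GA supplies a good basin, and the local stage supplies the precision the GA alone lacks.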

  16. PROGTEST: A Computer System for the Analysis of Computational Computer Programs.

    Science.gov (United States)

    1980-04-01


  17. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    Science.gov (United States)

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

    Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.

  18. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    Full Text Available This paper deals with the application of quantitative soft computing prediction models to the financial domain, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making bad decisions in the decision-making process.
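    The moving-average enhancement described above can be reduced to a small sketch: the hybrid's output is the raw model prediction plus a moving average of the model's recent residuals. The window length and the assumption that each residual becomes available one step later are illustrative choices, not the paper's tuned values.

```python
def ma_corrected(predictions, actuals, window=3):
    """Hybrid output: raw model prediction plus a moving average of the
    model's recent residuals (actual - predicted), the 'error part'
    correction idea described in the abstract. Illustrative sketch."""
    corrected, residuals = [], []
    for p, a in zip(predictions, actuals):
        if residuals:
            recent = residuals[-window:]
            correction = sum(recent) / len(recent)
        else:
            correction = 0.0                 # no history yet: pass through
        corrected.append(p + correction)
        residuals.append(a - p)              # residual observable next step
    return corrected
```

A systematic bias in the underlying model shows up as a persistent residual, which the moving average then removes from subsequent forecasts.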

  19. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models to the financial domain, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making bad decisions in the decision-making process.

  20. Optimization of a Continuous Hybrid Impeller Mixer via Computational Fluid Dynamics

    Directory of Open Access Journals (Sweden)

    N. Othman

    2014-01-01

    Full Text Available This paper presents the preliminary steps required for conducting experiments to obtain the optimal operating conditions of a hybrid impeller mixer and to determine the residence time distribution (RTD using computational fluid dynamics (CFD. In this paper, impeller speed and clearance parameters are examined. The hybrid impeller mixer consists of a single Rushton turbine mounted above a single pitched blade turbine (PBT. Four impeller speeds, 50, 100, 150, and 200 rpm, and four impeller clearances, 25, 50, 75, and 100 mm, were the operation variables used in this study. CFD was utilized to initially screen the parameter ranges to reduce the number of actual experiments needed. Afterward, the residence time distribution (RTD was determined using the respective parameters. Finally, the Fluent-predicted RTD and the experimentally measured RTD were compared. The CFD investigations revealed that an impeller speed of 50 rpm and an impeller clearance of 25 mm were not viable for experimental investigations and were thus eliminated from further analyses. The determination of RTD using a k-ε turbulence model was performed using CFD techniques. The multiple reference frame (MRF was implemented and a steady state was initially achieved followed by a transient condition for RTD determination.
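    The RTD comparison above ultimately reduces to moments of the measured tracer curve; for example, the mean residence time is the first moment of C(t) normalized by its area. A minimal trapezoidal-rule sketch (illustrative, not the paper's post-processing code):

```python
def mean_residence_time(times, conc):
    """Mean residence time t_m = (integral of t*C dt) / (integral of C dt),
    evaluated with the trapezoidal rule on a measured tracer curve C(t)."""
    def trapz(ys):
        return sum(0.5 * (ys[i] + ys[i + 1]) * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    area = trapz(conc)
    first_moment = trapz([t * c for t, c in zip(times, conc)])
    return first_moment / area
```

The same moments computed from the Fluent-predicted and experimentally measured curves give a compact way to quantify the agreement the abstract describes.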

  1. SWNT-DNA and SWNT-polyC hybrids: AFM study and computer modeling.

    Science.gov (United States)

    Karachevtsev, M V; Lytvyn, O S; Stepanian, S G; Leontiev, V S; Adamowicz, L; Karachevtsev, V A

    2008-03-01

    Hybrids of single-walled carbon nanotubes (SWNT) with fragmented single- or double-stranded DNA (fss- or fds-DNA) or polyC were studied by Atomic Force Microscopy (AFM) and computer modeling. It was found that fragments of the polymer wrap in several layers around the nanotube, forming a strand-like spindle. In contrast to fss-DNA, fds-DNA also forms compact structures near the tube surface due to the formation of self-assembled structures consisting of a few DNA fragments. The hybrids of SWNT with wrapped single, double, or triple strands of the biopolymer were simulated, and it was shown that such structures are stable. To explain the multi-layer polymeric coating of the nanotube surface, the energy of the intermolecular interactions between different components of polyC was calculated at the MP2/6-31++G** level, as was the interaction energy in the SWNT-cytosine complex.

  2. Feasibility of a Hybrid Brain-Computer Interface for Advanced Functional Electrical Therapy

    Directory of Open Access Journals (Sweden)

    Andrej M. Savić

    2014-01-01

    Full Text Available We present a feasibility study of a novel hybrid brain-computer interface (BCI system for advanced functional electrical therapy (FET of grasp. FET procedure is improved with both automated stimulation pattern selection and stimulation triggering. The proposed hybrid BCI comprises the two BCI control signals: steady-state visual evoked potentials (SSVEP and event-related desynchronization (ERD. The sequence of the two stages, SSVEP-BCI and ERD-BCI, runs in a closed-loop architecture. The first stage, SSVEP-BCI, acts as a selector of electrical stimulation pattern that corresponds to one of the three basic types of grasp: palmar, lateral, or precision. In the second stage, ERD-BCI operates as a brain switch which activates the stimulation pattern selected in the previous stage. The system was tested in 6 healthy subjects who were all able to control the device with accuracy in a range of 0.64–0.96. The results provided the reference data needed for the planned clinical study. This novel BCI may promote further restoration of the impaired motor function by closing the loop between the “will to move” and contingent temporally synchronized sensory feedback.

  3. Clinical application of a second generation electrocardiographic computer program.

    Science.gov (United States)

    Pipberger, H V; McCaughan, D; Littmann, D; Pipberger, H A; Cornfield, J; Dunn, R A; Batchlor, C D; Berson, A S

    1975-05-01

    An electrocardiographic computer program based on multivariate analysis of orthogonal leads (Frank) was applied to records transmitted daily by telephone from the Veterans Administration Hospital, West Roxbury, Mass., to the Veterans Administration Hospital, Washington, D. C. A Bayesian classification procedure was used to compute probabilities for all diagnostic categories that might be encountered in a given record. Computer results were compared with interpretations of conventional 12 lead tracings. Of 1,663 records transmitted, 1,192 were selected for the study because the clinical diagnosis in these cases could be firmly established on the basis of independent, nonelectrocardiographic information. Twenty-one percent of the records were obtained from patients without evidence of cardiac disease and 79 percent from patients with various cardiovascular illnesses. Diagnostic electrocardiographic classifications were considered correct when in agreement with documented clinical diagnoses. Of the total sample of 1,192 recordings, 86 percent were classified correctly by computer as compared with 68 percent by conventional 12 lead electrocardiographic analysis. Improvement in diagnostic recognition by computer was most striking in patients with hypertensive cardiovascular disease or chronic obstructive lung disease. The multivariate classification scheme functioned most efficiently when a problem-oriented approach to diagnosis was simulated. This was accomplished by a simple method of adjusting prior probabilities according to the diagnostic problem under consideration.
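    The Bayesian classification procedure, including the problem-oriented adjustment of prior probabilities, can be sketched in a few lines. The categories and numbers below are illustrative, not the paper's measurement model.

```python
def posterior(likelihoods, priors):
    """Bayes rule over diagnostic categories: P(c | record) is proportional
    to P(record | c) * P(c). Re-weighting `priors` mimics the paper's
    problem-oriented adjustment of prior probabilities."""
    joint = {c: likelihoods[c] * priors[c] for c in likelihoods}
    z = sum(joint.values())                  # normalizing constant
    return {c: j / z for c, j in joint.items()}
```

With equal priors, a 3:1 likelihood ratio yields posteriors of 0.75 and 0.25; shifting the priors toward a clinically suspected category moves the decision boundary accordingly, which is how the problem-oriented mode described above operates.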

  4. Hybrid constraint programming and metaheuristic methods for large scale optimization problems

    OpenAIRE

    2011-01-01

    This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatori...

  5. EMI Measurement and Mitigation Testing for the ARPA Hybrid Electric Vehicle Program

    Science.gov (United States)

    1996-08-27

    (Fragmentary OCR text.) Open-field measurement is argued to be a more realistic approach than screen-room testing for evaluating the EMI radiated from electric vehicles. The sources of radiation considered include motor controllers, dc-to-dc converters, power steering motors, brake vacuum pumps, and distribution ... (Authors: Anthony B. Bruno, Engineering and Technical Services Department, and Oscar R. Zelaya.)

  6. HAL/SM language specification. [programming languages and computer programming for space shuttles

    Science.gov (United States)

    Williams, G. P. W., Jr.; Ross, C.

    1975-01-01

    A programming language is presented for the flight software of the NASA Space Shuttle program. It is intended to satisfy virtually all of the flight software requirements of the space shuttle. To achieve this, it incorporates a wide range of features, including applications-oriented data types and organizations, real time control mechanisms, and constructs for systems programming tasks. It is a higher order language designed to allow programmers, analysts, and engineers to communicate with the computer in a form approximating natural mathematical expression. Parts of the English language are combined with standard notation to provide a tool that readily encourages programming without demanding computer hardware expertise. Block diagrams and flow charts are included. The semantics of the language is discussed.

  7. Single-Board-Computer-Based Traffic Generator for a Heterogeneous and Hybrid Smart Grid Communication Network

    Directory of Open Access Journals (Sweden)

    Do Nguyet Quang

    2014-02-01

    Full Text Available In smart grid communication implementation, the network traffic pattern is one of the main factors that affect system performance. Examining different traffic patterns in the smart grid is therefore crucial when analyzing network performance. Due to the heterogeneous and hybrid nature of the smart grid, the type of traffic distribution in the network is still unknown. The traffic models popularly used for simulation and analysis no longer reflect the real traffic in a multi-technology, bi-directional communication system. Hence, in this study, a single-board computer is implemented as a traffic generator that can produce network traffic similar to that generated by various applications in a fully operational smart grid. Placed at strategic and appropriate positions, a collection of such traffic generators allows network administrators to investigate and test the effect of heavy traffic on the performance of the smart grid communication system.
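
One way such a generator might schedule traffic is to draw Poisson arrivals per application class. A minimal sketch follows; the class names, rates, and payload sizes are hypothetical, and packets are only scheduled, not actually sent.

```python
import random

# Hypothetical smart-grid application classes: name -> (mean packets/s, payload bytes)
CLASSES = {
    "meter_reading": (5.0, 128),
    "scada_poll":    (2.0, 64),
    "alarm":         (0.2, 256),
}

def schedule(duration_s, seed=7):
    """Draw exponential inter-arrival times per class (Poisson arrivals)
    and merge all classes into one time-ordered event list."""
    rng = random.Random(seed)
    events = []
    for name, (rate, size) in CLASSES.items():
        t = rng.expovariate(rate)
        while t < duration_s:
            events.append((t, name, size))
            t += rng.expovariate(rate)
    return sorted(events)

ev = schedule(10.0)
print(len(ev) > 0 and ev == sorted(ev))  # True: a time-ordered event list
```

A real generator would replay such a list by sleeping until each timestamp and emitting a packet of the given size toward the device under test.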

  8. Phase I of the Near-Term Hybrid Passenger-Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    Heat engine/electric hybrid vehicles offer the potential of greatly reduced petroleum consumption, compared to conventional vehicles, without the disadvantages of limited performance and operating range associated with purely electric vehicles. This report documents a hybrid-vehicle design approach which is aimed at the development of the technology required to achieve this potential - in such a way that it is transferable to the auto industry in the near term. The development of this design approach constituted Phase I of the Near-Term Hybrid-Vehicle Program. The major tasks in this program were: (1) Mission Analysis and Performance Specification Studies; (2) Design Tradeoff Studies; and (3) Preliminary Design. Detailed reports covering each of these tasks are included as appendices to this report and issued under separate cover; a fourth task, Sensitivity Studies, is also included in the report on the Design Tradeoff Studies. Because of the detail with which these appendices cover methodology and both interim and final results, the body of this report was prepared as a brief executive summary of the program activities and results, with appropriate references to the detailed material in the appendices.

  9. Phase I of the Near-Term Hybrid Vehicle Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-10

    Heat engine/electric hybrid vehicles offer the potential of greatly reduced petroleum consumption, compared to conventional vehicles, without the disadvantages of limited performance and operating range associated with pure electric vehicles. This report documents a hybrid vehicle design approach which is aimed at the development of the technology required to achieve this potential, in such a way that it is transferable to the auto industry in the near term. The development of this design approach constituted Phase I of the Near-Term Hybrid Vehicle Program. The major tasks in this program were: mission analysis and performance specification studies; design tradeoff studies; and preliminary design. Detailed reports covering each of these tasks are included as appendices to this report. A fourth task, sensitivity studies, is also included in the report on the design tradeoff studies. Because of the detail with which these appendices cover methodology and results, the body of this report has been prepared as a brief executive summary of the program activities and results, with appropriate references to the detailed material in the appendices.

  10. A new finite element and finite difference hybrid method for computing electrostatics of ionic solvated biomolecule

    Science.gov (United States)

    Ying, Jinyong; Xie, Dexuan

    2015-10-01

    The Poisson-Boltzmann equation (PBE) is a widely used implicit solvent continuum model for calculating the electrostatics of an ionic solvated biomolecule. In this paper, a new finite element and finite difference hybrid method is presented to solve the PBE efficiently, based on a special partition of seven overlapped boxes, with one central box containing the solute region surrounded by six neighboring boxes. In particular, an efficient finite element solver is applied to the central box, while a fast preconditioned conjugate gradient method using multigrid V-cycle preconditioning is constructed for solving a system of finite difference equations defined on a uniform mesh of each neighboring box. Moreover, the PBE domain, the box partition, and an interface-fitted tetrahedral mesh of the central box can be generated adaptively for a given PQR file of a biomolecule. This new hybrid PBE solver is programmed in C, Fortran, and Python as a software tool for predicting the electrostatics of a biomolecule in a symmetric 1:1 ionic solvent. Numerical results on two test models with analytical solutions and on 12 proteins validate the new software tool and demonstrate its high performance in terms of CPU time and memory usage.
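
For reference, the dimensionless form of the nonlinear PBE for a symmetric 1:1 ionic solvent that solvers of this kind discretize (standard notation, not taken from this record):

```latex
-\nabla \cdot \bigl(\epsilon(\mathbf{r})\,\nabla u(\mathbf{r})\bigr)
  + \bar{\kappa}^{2}(\mathbf{r})\,\sinh u(\mathbf{r})
  = \alpha \sum_{j=1}^{N} z_{j}\,\delta(\mathbf{r}-\mathbf{r}_{j}),
  \qquad \mathbf{r} \in \Omega,
```

where u is the dimensionless electrostatic potential, ε(r) the dielectric coefficient, κ̄ the modified Debye-Hückel parameter (zero inside the solute region), α a unit-scaling constant, and the right-hand side carries the N fixed partial charges z_j of the biomolecule.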

  11. Electric and Hybrid Vehicles Program. Sixteenth annual report to Congress for fiscal year 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    This report describes the progress achieved in developing electric and hybrid vehicle technologies, beginning with highlights of recent accomplishments in FY 1992. Detailed descriptions are provided of program activities during FY 1992 in the areas of battery, fuel cell, and propulsion system development, and testing and evaluation of new technology in fleet site operations and in laboratories. This Annual Report also contains a status report on incentives and use of foreign components, as well as a list of publications resulting from the DOE program.

  12. Electric and Hybrid Vehicles Program. Seventeenth annual report to Congress for Fiscal Year 1993

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    This program, in cooperation with industry, is conducting research, development, testing, and evaluation activities to develop the technologies that would lead to production and introduction of low- and zero-emission electric and hybrid vehicles into the Nation's transportation fleet. This annual report describes program activities in the areas of advanced battery, fuel cell, and propulsion systems development. Testing and evaluation of new technology in fleet site operations and laboratories are also described. Also presented are the status of incentives (CAFE, the 1992 Energy Policy Act) and the use of foreign components, along with a listing of publications by DOE, national laboratories, and contractors.

  13. An Efficient Framework for EEG Analysis with Application to Hybrid Brain Computer Interfaces Based on Motor Imagery and P300

    Directory of Open Access Journals (Sweden)

    Jinyi Long

    2017-01-01

    Full Text Available The hybrid brain computer interface (BCI) based on motor imagery (MI) and P300 has been a preferred strategy aiming to improve detection performance by combining the features of each. However, current methods used for combining these two modalities optimize them separately, which does not result in optimal performance. Here, we present an efficient framework to optimize them together by concatenating the features of MI and P300 in a block diagonal form. Then a linear classifier under a dual spectral norm regularizer is applied to the combined features. Under this framework, the hybrid features of MI and P300 can be learned, selected, and combined together directly. Experimental results on the data set of hybrid BCI based on MI and P300 are provided to illustrate the competitive performance of the proposed method against other conventional methods. This provides evidence that the method used here contributes to the discrimination performance of the brain state in hybrid BCI.
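
The block-diagonal concatenation described above can be sketched as follows; the feature dimensions and values are hypothetical, and the sketch covers only the arrangement of the two feature blocks, not the regularized classifier.

```python
import numpy as np

def block_diag_features(mi_feat, p300_feat):
    """Place MI and P300 feature matrices on the diagonal of one
    combined matrix, zero-padding the off-diagonal blocks."""
    r1, c1 = mi_feat.shape
    r2, c2 = p300_feat.shape
    combined = np.zeros((r1 + r2, c1 + c2))
    combined[:r1, :c1] = mi_feat
    combined[r1:, c1:] = p300_feat
    return combined

# Hypothetical blocks: 2x3 MI features, 2x4 P300 features
mi = np.ones((2, 3))
p300 = 2 * np.ones((2, 4))
X = block_diag_features(mi, p300)
print(X.shape)  # (4, 7)
```

A classifier applied to X then sees both modalities at once, so feature selection and weighting happen jointly rather than per modality.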

  14. An Efficient Framework for EEG Analysis with Application to Hybrid Brain Computer Interfaces Based on Motor Imagery and P300

    Science.gov (United States)

    Wang, Jue; Yu, Tianyou

    2017-01-01

    The hybrid brain computer interface (BCI) based on motor imagery (MI) and P300 has been a preferred strategy aiming to improve detection performance by combining the features of each. However, current methods used for combining these two modalities optimize them separately, which does not result in optimal performance. Here, we present an efficient framework to optimize them together by concatenating the features of MI and P300 in a block diagonal form. Then a linear classifier under a dual spectral norm regularizer is applied to the combined features. Under this framework, the hybrid features of MI and P300 can be learned, selected, and combined together directly. Experimental results on the data set of hybrid BCI based on MI and P300 are provided to illustrate the competitive performance of the proposed method against other conventional methods. This provides evidence that the method used here contributes to the discrimination performance of the brain state in hybrid BCI. PMID:28316617

  15. A hybrid method for the computation of quasi-3D seismograms.

    Science.gov (United States)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed coupling SEM simulations with normal mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution; they provide us with images of the crust and the upper part of the mantle. In an attempt to teleport such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method in which SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  16. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    Science.gov (United States)

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  17. Study of operational parameters impacting helicopter fuel consumption. [using computer techniques (computer programs)

    Science.gov (United States)

    Cross, J. L.; Stevens, D. D.

    1976-01-01

    A computerized study of operational parameters affecting helicopter fuel consumption was conducted as an integral part of the NASA Civil Helicopter Technology Program. The study utilized the Helicopter Sizing and Performance Computer Program (HESCOMP), developed by the Boeing-Vertol Company and NASA Ames Research Center. An introduction to HESCOMP is incorporated in this report. The results presented were calculated using the NASA CH-53 civil helicopter research aircraft specifications. Plots are presented for this aircraft from which optimum flight conditions for minimum fuel use can be determined. The results of the study are considered to be generally indicative of trends for all helicopters.

  18. Propeller aircraft interior noise model: User's manual for computer program

    Science.gov (United States)

    Wilby, E. G.; Pope, L. D.

    1985-01-01

    A computer program entitled PAIN (Propeller Aircraft Interior Noise) has been developed to permit calculation of the sound levels in the cabin of a propeller-driven airplane. The fuselage is modeled as a cylinder with a structurally integral floor, the cabin sidewall and floor being stiffened by ring frames, stringers and floor beams of arbitrary configurations. The cabin interior is covered with acoustic treatment and trim. The propeller noise consists of a series of tones at harmonics of the blade passage frequency. Input data required by the program include the mechanical and acoustical properties of the fuselage structure and sidewall trim. Also, the precise propeller noise signature must be defined on a grid that lies in the fuselage skin. The propeller data are generated with a propeller noise prediction program such as the NASA Langley ANOPP program. The program PAIN permits the calculation of the space-average interior sound levels for the first ten harmonics of a propeller rotating alongside the fuselage. User instructions for PAIN are given in the report. Development of the analytical model is presented in NASA CR 3813.

  19. Trace contaminant control simulation computer program, version 8.1

    Science.gov (United States)

    Perry, J. L.

    1994-01-01

    The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various process technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. Included in the simulation are chemical and physical adsorption by activated charcoal, chemical adsorption by lithium hydroxide, absorption by humidity condensate, and low- and high-temperature catalytic oxidation. Means are provided for simulating regenerable as well as nonregenerable systems. The program provides an overall mass balance of chemical contaminants in a spacecraft cabin given specified generation rates. Removal rates are based on device flow rates specified by the user and calculated removal efficiencies based on cabin concentration and removal technology experimental data. Versions 1.0 through 8.0 are documented in NASA TM-108409. TM-108409 also contains a source file listing for version 8.0. Changes to version 8.0 are documented in this technical memorandum and a source file listing for the modified version, version 8.1, is provided. Detailed descriptions for the computer program subprograms are extracted from TM-108409 and modified as necessary to reflect version 8.1. Version 8.1 supersedes version 8.0. Information on a separate user's guide is available from the author.
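
The overall mass balance the program maintains can be illustrated with a single well-mixed cabin and one removal device; the equation dC/dt = G/V - (Q·η/V)·C is the generic form of such a balance, and the variable names and values below are illustrative, not TCCS's.

```python
# Sketch of a single-contaminant cabin mass balance, assuming a
# well-mixed cabin and one removal device (illustrative, not TCCS code).
def simulate(G, V, Q, eta, hours, dt=0.01, C0=0.0):
    """Integrate dC/dt = G/V - (Q*eta/V)*C with forward Euler.
    G: generation rate (mg/h), V: cabin volume (m^3),
    Q: device flow rate (m^3/h), eta: removal efficiency."""
    C = C0
    for _ in range(int(hours / dt)):
        C += dt * (G / V - (Q * eta / V) * C)
    return C

# Long runs approach the steady-state concentration G/(Q*eta)
C_end = simulate(G=10.0, V=100.0, Q=20.0, eta=0.9, hours=200.0)
print(round(C_end, 3))  # ~ 10/(20*0.9) = 0.556
```

The steady-state value G/(Q·η) is the balance point at which the device removes contaminant exactly as fast as it is generated.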

  20. A Computer Program for Assessing Nuclear Safety Culture Impact

    Energy Technology Data Exchange (ETDEWEB)

    Han, Kiyoon; Jae, Moosung [Hanyang Univ., Seoul (Korea, Republic of)

    2014-10-15

    Through several NPP accidents, including Fukushima Daiichi in 2011 and Chernobyl in 1986, a lack of safety culture was identified as one of the root causes. Due to its latent influence on safety performance, safety culture has become an important issue in safety research. Most studies describe how to evaluate the state of an organization's safety culture; however, they do not address the possibility that an accident occurs due to a lack of safety culture. A methodology for evaluating the impact of safety culture on NPP safety is therefore required. In this study, a methodology for assessing safety culture impact is suggested and a computer program is developed for its application. The SCII model is a new methodology for assessing safety culture impact quantitatively by using a PSA model. The computer program developed for its application visualizes the SCIs and the SCIIs. It might contribute to comparing the level of safety culture among NPPs as well as improving the safety management of NPPs.

  1. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Science.gov (United States)

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid rate of migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete; hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment with a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, thereby adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), reducing the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
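
The SA refinement step that such a hybrid layers onto the population search can be sketched on a toy task-to-VM schedule; the makespan model, parameters, and data below are illustrative, not taken from SASOS.

```python
import math
import random

def makespan(assign, task_len, vm_speed):
    """Finish time of the busiest VM for a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def sa_refine(assign, task_len, vm_speed, T0=10.0, cool=0.95, iters=500, seed=1):
    """Refine a candidate schedule with the Metropolis acceptance rule:
    always keep improvements, sometimes keep uphill moves while T is high."""
    rng = random.Random(seed)
    cur, cur_cost = list(assign), makespan(assign, task_len, vm_speed)
    best, best_cost = list(cur), cur_cost
    T = T0
    for _ in range(iters):
        cand = list(cur)
        cand[rng.randrange(len(cand))] = rng.randrange(len(vm_speed))  # move one task
        cost = makespan(cand, task_len, vm_speed)
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / T):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        T *= cool  # cooling schedule
    return best, best_cost

tasks = [4.0, 2.0, 7.0, 3.0, 5.0, 1.0]
speeds = [1.0, 2.0]
start = [0] * len(tasks)                       # everything on VM 0
_, cost = sa_refine(start, tasks, speeds)
print(cost <= makespan(start, tasks, speeds))  # True: never worse than the seed
```

In the full algorithm the seed schedule would come from an SOS organism rather than a trivial assignment, and the fitness would also penalize VM load imbalance.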

  2. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Directory of Open Access Journals (Sweden)

    Mohammed Abdullahi

    Full Text Available Cloud computing has attracted significant attention from the research community because of the rapid rate of migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete; hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment with a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, thereby adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), reducing the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.

  3. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    Science.gov (United States)

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid rate of migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete; hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment with a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, thereby adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), reducing the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127

  4. Investigating Value Creation in a Community of Practice with Social Network Analysis in a Hybrid Online Graduate Education Program

    Science.gov (United States)

    Cowan, John E.; Menchaca, Michael P.

    2014-01-01

    This study reports an analysis of 10 years in the life of the Internet-based Master in Educational Technology program (iMET) at Sacramento State University. iMET is a hybrid educational technology master's program delivered 20% face to face and 80% online. The program has achieved a high degree of success, with a course completion rate of 93% and…

  5. Hybrid MPI/OpenMP Implementation of the ORAC Molecular Dynamics Program for Generalized Ensemble and Fast Switching Alchemical Simulations.

    Science.gov (United States)

    Procacci, Piero

    2016-06-27

    We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with hybrid OpenMP/MPI (open multiprocessing/message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology, aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm project the code as a possible effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.

  6. Hybrid heuristic and mathematical programming in oil pipelines networks: Use of immigrants

    Institute of Scientific and Technical Information of China (English)

    De la Cruz, J.M.; Herrán-González, A.; Risco-Martín, J.L.; Andrés-Toro, B.

    2005-01-01

    We solve the problem of petroleum product distribution through oil pipeline networks. This problem is modelled and solved using two techniques: a heuristic method, namely a multiobjective evolutionary algorithm, and Mathematical Programming. In the multiobjective evolutionary algorithm, several objective functions are defined to express the goals of the solutions as well as the preferences among them. Some constraints are included as hard objective functions and some are evaluated through a repairing function to avoid infeasible solutions. In the Mathematical Programming approach, the multiobjective optimization is solved using the Constraint Method in Mixed Integer Linear Programming. Some constraints of the mathematical model are nonlinear, so they are linearized. The results obtained with both methods for one concrete network are presented. They are compared with a hybrid solution, in which we use the results obtained by Mathematical Programming as the seed of the evolutionary algorithm.

  7. User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge

    Science.gov (United States)

    Koltun, G.F.; Gray, John R.; McElhone, T.J.

    1994-01-01

    Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from (1) digitized suspended-sediment-concentration traces, (2) linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and (3) nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment-discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
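
Option 2 above, linear interpolation between log-transformed concentrations, can be sketched as follows, together with the standard USGS unit conversion Qs = 0.0027·Q·C (Qs in tons/day, Q in ft³/s, C in mg/L). The times and concentrations are illustrative, not from SEDCALC.

```python
import math

def log_interp(t, t0, c0, t1, c1):
    """Interpolate concentration at time t linearly in log space
    between two instantaneous samples (t0, c0) and (t1, c1)."""
    f = (t - t0) / (t1 - t0)
    return math.exp(math.log(c0) + f * (math.log(c1) - math.log(c0)))

def sediment_discharge(q_cfs, c_mgl):
    """Suspended-sediment discharge in tons/day from streamflow (ft^3/s)
    and concentration (mg/L), via the standard 0.0027 conversion factor."""
    return 0.0027 * q_cfs * c_mgl

c = log_interp(1.5, 1.0, 100.0, 2.0, 400.0)
print(round(c))  # 200: the geometric mean of 100 and 400
print(round(sediment_discharge(500.0, c), 2))  # 270.0 tons/day
```

Interpolating in log space rather than linearly keeps intermediate concentrations positive and better matches the roughly log-linear behavior of concentration between samples.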

  8. Programming Pluralism: Using Learning Analytics to Detect Patterns in the Learning of Computer Programming

    Science.gov (United States)

    Blikstein, Paulo; Worsley, Marcelo; Piech, Chris; Sahami, Mehran; Cooper, Steven; Koller, Daphne

    2014-01-01

    New high-frequency, automated data collection and analysis algorithms could offer new insights into complex learning processes, especially for tasks in which students have opportunities to generate unique open-ended artifacts such as computer programs. These approaches should be particularly useful because the need for scalable project-based and…

  9. COED Transactions, Vol. IX, No. 3, March 1977. Evaluation of a Complex Variable Using Analog/Hybrid Computation Techniques.

    Science.gov (United States)

    Marcovitz, Alan B., Ed.

    Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…

  10. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  11. A Hybrid Approach Using an Artificial Bee Algorithm with Mixed Integer Programming Applied to a Large-Scale Capacitated Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Guillermo Cabrera G.

    2012-01-01

    Full Text Available We present a hybridization of two different approaches applied to the well-known Capacitated Facility Location Problem (CFLP). The Artificial Bee algorithm (BA) is used to select a promising subset of locations (warehouses), which alone are included in the Mixed Integer Programming (MIP) model. Next, the algorithm solves the subproblem by considering the entire set of customers. The hybrid implementation allows us to bypass certain inherited weaknesses of each algorithm, so that we are able to find an optimal solution in an acceptable computational time. In this paper we demonstrate that BA can be significantly improved by use of the MIP algorithm. At the same time, our hybrid implementation allows the MIP algorithm to reach the optimal solution in considerably less time than is needed to solve the model using the entire dataset directly. Our hybrid approach outperforms the results obtained by each technique separately: it is able to find the optimal solution in a shorter time than each technique on its own, and the results are highly competitive with the state of the art in large-scale optimization. Furthermore, according to our results, combining the BA with a mathematical programming approach appears to be an interesting research area in combinatorial optimization.
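
The restrict-then-optimize idea can be sketched as follows, with a simple score-based shortlist standing in for the bee algorithm, brute-force subset search standing in for the MIP step, and capacities ignored for brevity; all data are hypothetical.

```python
from itertools import chain, combinations

# Hypothetical instance: fixed opening costs and per-customer shipping costs
fixed = {"A": 10.0, "B": 12.0, "C": 30.0, "D": 11.0}
ship = {  # ship[w][customer]
    "A": [2, 9, 8], "B": [8, 3, 7], "C": [1, 1, 1], "D": [7, 8, 2],
}

def shortlist(k):
    """Heuristic stand-in for BA: keep the k warehouses with the
    lowest fixed cost plus total shipping cost."""
    score = lambda w: fixed[w] + sum(ship[w])
    return sorted(fixed, key=score)[:k]

def solve_exact(pool):
    """Exact stand-in for MIP: search all nonempty subsets of the
    shortlisted pool, assigning each customer to its cheapest open warehouse."""
    n_customers = len(next(iter(ship.values())))
    def cost(open_ws):
        return sum(fixed[w] for w in open_ws) + sum(
            min(ship[w][c] for w in open_ws) for c in range(n_customers))
    subsets = chain.from_iterable(
        combinations(pool, r) for r in range(1, len(pool) + 1))
    return min(subsets, key=cost)

pool = shortlist(3)            # restrict: shortlist 3 of the 4 warehouses
print(sorted(solve_exact(pool)))
```

Because the exact step only sees the shortlisted warehouses, its search space is far smaller than the full model, which is the source of the running-time gains the abstract describes.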

  12. Structural mode significance using INCA. [Interactive Controls Analysis computer program

    Science.gov (United States)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1990-01-01

    Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.

  14. SMART - a computer program for modelling stellar atmospheres

    CERN Document Server

    Aret, Anna; Poolamäe, Raivo; Sapar, Lili

    2013-01-01

    Program SMART (Spectra and Model Atmospheres by Radiative Transfer) has been composed for modelling atmospheres and spectra of hot stars (O, B and A spectral classes) and for studying different physical processes in them (Sapar & Poolamäe 2003, Sapar et al. 2007). Line-blanketed models are computed assuming a plane-parallel, static and horizontally homogeneous atmosphere in radiative, hydrostatic and local thermodynamic equilibrium. The main advantages of SMART are its compactness, simplicity, user friendliness and flexibility for studying different physical processes. SMART runs successfully on PCs under both Windows and Linux.
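As a concrete illustration of the plane-parallel LTE assumption underlying such model atmospheres, the classical grey-atmosphere temperature stratification, T⁴(τ) = (3/4) T_eff⁴ (τ + 2/3), can be computed directly. This is standard textbook material, not SMART's actual numerical scheme:

```python
def grey_temperature(tau, t_eff):
    """Grey, plane-parallel, LTE atmosphere (Eddington approximation):
    T^4(tau) = (3/4) * T_eff^4 * (tau + 2/3)."""
    return (0.75 * t_eff ** 4 * (tau + 2.0 / 3.0)) ** 0.25

# Temperature stratification for a hot star with T_eff = 15000 K (B class)
for tau in (0.0, 2.0 / 3.0, 1.0, 2.0):
    print(f"tau = {tau:5.3f}   T = {grey_temperature(tau, 15000.0):7.1f} K")
```

Note that T(τ = 2/3) equals T_eff exactly, the usual definition of the photosphere in this approximation.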

  15. Building Computer-Based Experiments in Psychology without Programming Skills.

    Science.gov (United States)

    Ruisoto, Pablo; Bellido, Alberto; Ruiz, Javier; Juanes, Juan A

    2016-06-01

    Research in Psychology usually requires building and running experiments. Although this task has traditionally required scripting skills, recent computer tools based on graphical interfaces offer new opportunities in this field for researchers without programming skills. The purpose of this study is to illustrate and provide a comparative overview of two of the main free open source "point and click" software packages for building and running experiments in Psychology: PsychoPy and OpenSesame. Recommendations for their potential use are further discussed.

  16. Computer simulation modeling of abnormal behavior: a program approach.

    Science.gov (United States)

    Reilly, K D; Freese, M R; Rowe, P B

    1984-07-01

    A need exists for modeling abnormal behavior on a comprehensive, systematic basis. Computer modeling and simulation tools offer especially good opportunities to establish such a program of studies. The issues concern which modeling tools to use, how to relate models to behavioral data, what level of modeling to employ, and how to articulate theory to facilitate such modeling. Four levels or types of modeling, two qualitative and two quantitative, are identified. Their properties are examined and interrelated, including illustrative applications to the study of abnormal behavior, with an emphasis on schizophrenia.

  17. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-01-01

    In the fog computing environment, the encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server does. Since the fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource-constrained end users. Compared to existing schemes supporting either index encryption with search ability or data encryption with fine-grained access control ability, the proposed hybrid scheme supports both abilities simultaneously. The index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, resource-constrained end devices can rapidly assemble ciphertexts online and securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies on many fog nodes. The security and the performance analysis show that our scheme is suitable for a fog computing environment. PMID:28629131

  18. Programming a massively parallel, computation universal system: static behavior

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1986-01-01

    In previous work by the authors, the ''optimum finding'' properties of Hopfield neural nets were applied to the nets themselves to create a ''neural compiler.'' This was done in such a way that the problem of programming the attractors of one neural net (called the Slave net) was expressed as an optimization problem that was in turn solved by a second neural net (the Master net). In this series of papers that approach is extended to programming nets that contain interneurons (sometimes called ''hidden neurons''), and thus deals with nets capable of universal computation. 22 refs.

  19. Dynamic analysis of spur gears using computer program DANST

    Science.gov (United States)

    Oswald, Fred B.; Lin, Hsiang H.; Liou, Chuen-Huei; Valco, Mark J.

    1993-06-01

    DANST is a computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the effect on dynamic load and tooth bending stress of spur gears due to operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratio ranging from one to three. It was designed to be easy to use, and it is extensively documented by comments in the source code. This report describes the installation and use of DANST. It covers input data requirements and presents examples. The report also compares DANST predictions for gear tooth loads and bending stress to experimental and finite element results.

  20. Spur-gear optimization using SPUROPT computer program

    Science.gov (United States)

    Coe, Harold H.

    1991-01-01

    A computer program developed for optimizing spur gear designs, SPUROPT, was updated by installing a new subroutine that uses AGMA 908-B89 standards to calculate the J-factor for determining tooth-bending stress. The updated SPUROPT program was then used to optimize a spur gear set for maximum fatigue life, minimum dynamic load, or minimum weight. All calculations were made with constraints on as many as 13 parameters by using three design variables: the number of teeth, diametral pitch, and tooth-face width. Results depended largely on the constraint values. When the limiting bending stress was set at a high value, the optimal solution was the highest allowable number of teeth. When the allowable bending stress was lowered, the optimal solution moved toward the smallest number of teeth permitted. Final results were also affected by the amount of transmission error. A lower error permitted a higher number of teeth.
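The three-variable constrained search described above can be sketched with a brute-force stand-in. The load, J-factor, stress limit, units, and variable ranges below are invented for illustration, and a constant J replaces the AGMA 908-B89 calculation; this is not SPUROPT's actual algorithm:

```python
def bending_stress(w_t, p_d, face, j_factor):
    """Simplified AGMA-style tooth bending stress: s_t = W_t * P_d / (F * J).
    (SPUROPT computes J per AGMA 908-B89; a constant stands in here.)"""
    return w_t * p_d / (face * j_factor)

def optimize(w_t=500.0, j_factor=0.45, s_allow=30000.0):
    """Brute-force search over the three design variables: tooth count,
    diametral pitch, and face width (hypothetical ranges and units),
    minimizing a crude weight proxy under a bending-stress constraint."""
    best = None
    for teeth in range(18, 41):
        for p_d in (6, 8, 10, 12):        # diametral pitch, 1/in
            for face in (1.0, 1.5, 2.0):  # face width, in
                if bending_stress(w_t, p_d, face, j_factor) > s_allow:
                    continue              # violates the stress constraint
                weight = teeth / p_d * face  # crude proxy for gear weight
                if best is None or weight < best[0]:
                    best = (weight, teeth, p_d, face)
    return best

print(optimize())
```

As in the abstract, tightening the allowable stress prunes the fine-pitch, narrow-face corners of the design space and shifts the optimum.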

  1. A Computer Program for a Canonical Problem in Underwater Shock

    Directory of Open Access Journals (Sweden)

    Thomas L. Geers

    1994-01-01

    Full Text Available Finite-element/boundary-element codes are widely used to analyze the response of marine structures to underwater explosions. An important step in verifying the correctness and accuracy of such codes is the comparison of code-generated results for canonical problems with corresponding analytical or semianalytical results. At the present time, such comparisons rely on hardcopy results presented in technical journals and reports. This article describes a computer program available from SAVIAC that produces user-selected numerical results for a step-wave-excited spherical shell submerged in and (optionally) filled with an acoustic fluid. The method of solution employed in the program is based on classical expansion of the field quantities in generalized Fourier series in the meridional coordinate. Convergence of the series is enhanced by judicious application of modified Cesàro summation and partial closed-form solution.

  2. Identification of Cognitive Processes of Effective and Ineffective Students during Computer Programming

    Science.gov (United States)

    Renumol, V. G.; Janakiram, Dharanipragada; Jayaprakash, S.

    2010-01-01

    Identifying the set of cognitive processes (CPs) a student can go through during computer programming is an interesting research problem. It can provide a better understanding of the human aspects in computer programming process and can also contribute to the computer programming education in general. The study identified the presence of a set of…

  3. Easy-to-use application programs for decay heat and delayed neutron calculations on personal computers

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)

    1998-03-01

    Application programs for personal computers were developed to calculate the decay heat power and delayed neutron activity from fission products. The main programs can be used on any computer, from personal computers to mainframes, because their sources are written in Fortran. These programs have user-friendly interfaces so that they can be used easily, not only for research activities but also for educational purposes. (author)
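The underlying calculation is a sum of exponential decay terms over the fission-product inventory, P(t) = Σᵢ λᵢ Nᵢ Eᵢ e^(−λᵢ t). A minimal sketch with an invented three-nuclide inventory (not data from the programs described above):

```python
import math

def decay_heat(t_s, nuclides):
    """Total decay heat (W) at time t: sum of lambda * N0 * E * exp(-lambda*t),
    with E the mean decay energy per disintegration in joules."""
    total = 0.0
    for half_life_s, n0, energy_j in nuclides:
        lam = math.log(2.0) / half_life_s
        total += lam * n0 * energy_j * math.exp(-lam * t_s)
    return total

# Hypothetical three-nuclide inventory: (half-life s, atoms, J per decay)
inventory = [(60.0, 1e18, 1.0e-13),
             (3600.0, 5e18, 2.0e-13),
             (86400.0, 2e19, 1.5e-13)]

for t in (0.0, 60.0, 3600.0):
    print(f"t = {t:8.0f} s   P = {decay_heat(t, inventory):10.3f} W")
```

Short-lived nuclides dominate early times and die away first, which is why decay-heat curves fall steeply at shutdown and flatten later.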

  4. 75 FR 69988 - Privacy Act of 1974; Computer Matching Program

    Science.gov (United States)

    2010-11-16

    ...Section 421(a)(1) of the Controlled Substances Act (21 U.S.C. 862(a)(1)) includes provisions regarding the judicial denial of Federal benefits. Section 421 of the Controlled Substances Act, which was originally enacted as section 5301 of the Anti-Drug Abuse Act of 1988, and which was amended and redesignated as section 421 of the Controlled Substances Act by section 1002(d) of the Crime Control Act of 1990, Public Law 101-647 (hereinafter referred to as ``section 5301''), authorizes Federal and State judges to deny certain Federal benefits (including student financial assistance under Title IV of the Higher Education Act of 1965, as amended (HEA)) to individuals convicted of drug trafficking or possession of a controlled substance. In order to ensure that Title IV, HEA student financial assistance is not awarded to individuals subject to denial of benefits under court orders issued pursuant to section 5301, the Department of Justice and the Department of Education implemented a computer matching program. The 18-month computer matching agreement (CMA) was recertified for an additional 12 months on December 19, 2009. The 12-month recertification of the CMA will automatically expire on December 17, 2010. The Department of Education must continue to obtain from the Department of Justice identifying information regarding individuals who are the subject of section 5301 denial of benefits court orders for the purpose of ensuring that Title IV, HEA student financial assistance is not awarded to individuals subject to denial of benefits under court orders issued pursuant to the Denial of Federal Benefits Program. The purpose of this notice is to announce the continued operation of the computer matching program and to provide certain required information concerning the computer matching program. In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503) and Office of Management and

  5. Hybrid hierarchical bio-based materials: Development and characterization through experimentation and computational simulations

    Science.gov (United States)

    Haq, Mahmoodul

    Environmentally friendly bio-based composites with improved properties can be obtained by harnessing the synergy offered by hybrid constituents such as multiscale (nano- and micro-scale) reinforcement in bio-based resins composed of blends of synthetic and natural resins. Bio-based composites have recently gained much attention due to their low cost, environmental appeal and their potential to compete with synthetic composites. The advantage of multiscale reinforcement is that it offers synergy at various length scales, and when combined with bio-based resins it provides stiffness-toughness balance, improved thermal and barrier properties, and increased environmental appeal in the resulting composites. Moreover, these hybrid materials are tailorable in performance and in environmental impact. While the use of different concepts of multiscale reinforcement has been studied for synthetic composites, the study of multiphase/multiscale reinforcements for developing new types of sustainable materials is limited. The research summarized in this dissertation focused on the development of multiscale reinforced bio-based composites and the effort to understand and exploit the synergy of their constituents through experimental characterization and computational simulations. Bio-based composites consisting of a petroleum-based resin (unsaturated polyester), a natural or bio-resin (epoxidized soybean and linseed oils), natural fibers (industrial hemp), and nanosilicate (nanoclay) inclusions were developed. The work followed the "materials by design" philosophy by incorporating an integrated experimental and computational approach to strategically explore the design possibilities and limits. Experiments demonstrated that the drawbacks of bio-resin addition, which lowers stiffness and strength and increases permeability, can be counterbalanced through nanoclay reinforcement. Bio-resin addition yields benefits in impact strength and ductility. Conversely, nanoclay enhances stiffness

  6. The Effects of a Robot Game Environment on Computer Programming Education for Elementary School Students

    Science.gov (United States)

    Shim, Jaekwoun; Kwon, Daiyoung; Lee, Wongyu

    2017-01-01

    In the past, computer programming was perceived as a task only carried out by computer scientists; in the 21st century, however, computer programming is viewed as a critical and necessary skill that everyone should learn. In order to improve teaching of problem-solving abilities in a computing environment, extensive research is being done on…

  7. Evaluation of hybrid inverters for strategic environmental research and development program applications

    Energy Technology Data Exchange (ETDEWEB)

    Ginn, J.W. [Sandia National Laboratory, Albuquerque, NM (United States)

    1995-11-01

    The photovoltaic systems test facility at Sandia National Laboratories is evaluating the performance of large hybrid power-processing centers (PPCs). The primary customer for this work has been the Strategic Environmental Research and Development Program (SERDP) of the Department of Defense. One of the goals of SERDP is to develop power-processing hardware to be used in photovoltaic-hybrid power systems at remote military installations. Power for these installations is presently provided by engine-generators. Currently, hardware for twelve such sites is in various stages of procurement. The subject of this talk is testing of the PPC for the first SERDP system, a 300-kW unit for Superior Valley, a US Navy site at China Lake, California.

  8. Evidence of programmed cell death during microsporogenesis in an interspecific Brachiaria (Poaceae: Panicoideae: Paniceae) hybrid.

    Science.gov (United States)

    Fuzinatto, V A; Pagliarini, M S; Valle, C B

    2007-05-11

    Morphological changes have been investigated during plant programmed cell death (PCD) in the last few years due to the new interest in a possible apoptotic-like phenomenon existing in plants. Although PCD has been reported in several tissues and specialized cells in plants, there have been few reports of its occurrence during microsporogenesis. The present study reports a typical process of PCD during meiosis in an interspecific Brachiaria hybrid leading to male sterility. In this hybrid, some inflorescences initiated meiosis but it was arrested in zygotene/pachytene. From this stage, meiocytes underwent a severe alteration in shape showing substantial membrane blebbing; the cytoplasm became denser at the periphery; the cell nucleus entered a progressive stage of chromatin disintegration, and then the nucleolus disintegrated, and the cytoplasm condensed and shrunk. The oldest flowers of the raceme showed only the callose wall in the anthers showing obvious signs of complete sterility.

  9. Automatic artefact removal in a self-paced hybrid brain- computer interface system

    Directory of Open Access Journals (Sweden)

    Yong Xinyi

    2012-07-01

    Full Text Available Abstract Background A novel artefact removal algorithm is proposed for a self-paced hybrid brain-computer interface (BCI) system. This hybrid system combines a self-paced BCI with an eye-tracker to operate a virtual keyboard. To select a letter, the user must gaze at the target for at least a specific period of time (dwell time) and then activate the BCI by performing a mental task. Unfortunately, electroencephalogram (EEG) signals are often contaminated with artefacts. Artefacts change the quality of EEG signals and subsequently degrade the BCI's performance. Methods To remove artefacts in EEG signals, the proposed algorithm uses the stationary wavelet transform combined with a new adaptive thresholding mechanism. To evaluate the performance of the proposed algorithm and other artefact handling/removal methods, semi-simulated EEG signals (i.e., real EEG signals mixed with simulated artefacts) and real EEG signals obtained from seven participants are used. For real EEG signals, the hybrid BCI system's performance is evaluated in an online-like manner, i.e., using the continuous data from the last session as in a real-time environment. Results With semi-simulated EEG signals, we show that the proposed algorithm achieves lower signal distortion in both time and frequency domains. With real EEG signals, we demonstrate that for a dwell time of 0.0 s, the number of false-positives/minute is 2 and the true positive rate (TPR) achieved by the proposed algorithm is 44.7%, which is more than 15.0% higher compared to other state-of-the-art artefact handling methods. As dwell time increases to 1.0 s, the TPR increases to 73.1%. Conclusions The proposed artefact removal algorithm greatly improves the BCI's performance. It also has the following advantages: (a) it does not require additional electrooculogram/electromyogram channels, long data segments or a large number of EEG channels, (b) it allows real-time processing, and (c) it reduces signal distortion.
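The transform-threshold-reconstruct idea can be sketched with a one-level undecimated Haar transform and soft thresholding. This is a simplified stand-in for the paper's stationary wavelet transform and adaptive thresholding mechanism, not the actual algorithm, and the signal and threshold are invented:

```python
import math

def haar_undecimated(x):
    """One-level undecimated Haar transform (circular), a simplified
    stand-in for the stationary wavelet transform."""
    n = len(x)
    approx = [(x[i] + x[(i + 1) % n]) / 2.0 for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / 2.0 for i in range(n)]
    return approx, detail

def soft_threshold(coeffs, thr):
    """Shrink coefficients toward zero; small ones vanish, large
    (artefact-like) ones survive attenuated."""
    return [max(abs(c) - thr, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def remove_artefacts(x, thr):
    """Transform, threshold the detail band, reconstruct
    (x[i] = a[i] + d[i] exactly inverts the pair above)."""
    a, d = haar_undecimated(x)
    d = soft_threshold(d, thr)
    return [a[i] + d[i] for i in range(len(x))]

# A clean slow wave plus one sharp spike standing in for an ocular artefact
eeg = [math.sin(2.0 * math.pi * i / 32.0) for i in range(64)]
eeg[10] += 5.0
cleaned = remove_artefacts(eeg, thr=1.0)
print(max(abs(v) for v in cleaned) < max(abs(v) for v in eeg))
```

The slow oscillation passes through almost untouched because its detail coefficients fall below the threshold, while the spike is attenuated, the property that lets such methods avoid dedicated EOG/EMG channels.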

  10. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
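The event-driven dependency tracking described above can be sketched as a DAG of commands executed in topological order. The class and method names are illustrative only, not libWater's actual API:

```python
from collections import defaultdict, deque

class CommandDAG:
    """Minimal sketch of event-based dependency tracking: commands are
    nodes, wait-events are edges, execution order is topological."""

    def __init__(self):
        self.deps = defaultdict(set)   # command -> commands it waits on
        self.commands = []

    def enqueue(self, name, wait_for=()):
        self.commands.append(name)
        self.deps[name].update(wait_for)
        return name                    # stands in for the command's event

    def run_order(self):
        """Kahn's algorithm: repeatedly issue commands with no pending deps."""
        pending = {c: set(self.deps[c]) for c in self.commands}
        users = defaultdict(set)
        for c, ds in pending.items():
            for d in ds:
                users[d].add(c)
        ready = deque(c for c in self.commands if not pending[c])
        order = []
        while ready:
            c = ready.popleft()
            order.append(c)
            for u in users[c]:
                pending[u].discard(c)
                if not pending[u]:
                    ready.append(u)
        if len(order) != len(self.commands):
            raise ValueError("cycle in command graph")
        return order

dag = CommandDAG()
w = dag.enqueue("write_input")
k1 = dag.enqueue("kernel_node0", wait_for=[w])
k2 = dag.enqueue("kernel_node1", wait_for=[w])
dag.enqueue("read_result", wait_for=[k1, k2])
print(dag.run_order())
```

On such a graph a runtime can spot, for example, that the two kernels are independent (eligible for concurrent dispatch) or that a device-host-device copy pair cancels out, the optimizations the article automates.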

  11. Generalized fish life-cycle population model and computer program

    Energy Technology Data Exchange (ETDEWEB)

    DeAngelis, D. L.; Van Winkle, W.; Christensen, S. W.; Blum, S. R.; Kirk, B. L.; Rust, B. W.; Ross, C.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included. These are cannibalism of each life stage by older age classes and intra-life-stage competition.
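The age-structured difference equations described above can be sketched as a linear Leslie-style projection. The coefficients are invented, and the report's six age-0 life stages and density-dependent mortality are collapsed into a single density-independent loss factor:

```python
# Hypothetical age-structured coefficients (four age classes)
FECUNDITY = [0.0, 0.0, 50.0, 100.0]   # recruits produced per fish of age a
SURVIVAL = [0.01, 0.5, 0.7]           # survival from age a to age a+1

def project(n0, years, extra_age0_mortality=0.0):
    """Difference-equation projection: age class 0 comes from fecundity,
    older classes from age-specific survival. extra_age0_mortality models
    a density-independent loss on age class 0 (e.g. entrainment and
    impingement at a power plant)."""
    n = list(n0)
    for _ in range(years):
        recruits = sum(f * x for f, x in zip(FECUNDITY, n))
        recruits *= 1.0 - extra_age0_mortality
        n = [recruits] + [s * x for s, x in zip(SURVIVAL, n)]
    return n

start = [1000.0, 10.0, 5.0, 3.0]
base = project(start, years=20)
impacted = project(start, years=20, extra_age0_mortality=0.2)
print(sum(impacted) < sum(base))
```

Running the projection with and without the extra age-0 loss is exactly the long-term comparison the model was built to make; the real program additionally lets density-dependent stage survival partially compensate for such losses.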

  12. Monthly reservoir inflow forecasting using a new hybrid SARIMA genetic programming approach

    Science.gov (United States)

    Moeeni, Hamid; Bonakdari, Hossein; Ebtehaj, Isa

    2017-03-01

    Forecasting reservoir inflow is one of the most important components of water resources and hydroelectric systems operation management. Seasonal autoregressive integrated moving average (SARIMA) models have been frequently used for predicting river flow. SARIMA models are linear and do not consider the random component of statistical data. To overcome this shortcoming, monthly inflow is predicted in this study based on a combination of seasonal autoregressive integrated moving average (SARIMA) and gene expression programming (GEP) models, which is a new hybrid method (SARIMA-GEP). To this end, a four-step process is employed. First, the monthly inflow datasets are pre-processed. Second, the datasets are modelled linearly with SARIMA and in the third stage, the non-linearity of residual series caused by linear modelling is evaluated. After confirming the non-linearity, the residuals are modelled in the fourth step using a gene expression programming (GEP) method. The proposed hybrid model is employed to predict the monthly inflow to the Jamishan Dam in west Iran. Thirty years' worth of site measurements of monthly reservoir dam inflow with extreme seasonal variations are used. The results of this hybrid model (SARIMA-GEP) are compared with SARIMA, GEP, artificial neural network (ANN) and SARIMA-ANN models. The results indicate that the SARIMA-GEP model (R2 = 78.8, VAF = 78.8, RMSE = 0.89, MAPE = 43.4, CRM = 0.053) outperforms SARIMA and GEP, and SARIMA-ANN (R2 = 68.3, VAF = 66.4, RMSE = 1.12, MAPE = 56.6, CRM = 0.032) displays better performance than the SARIMA and ANN models. A comparison of the two hybrid models indicates the superiority of SARIMA-GEP over the SARIMA-ANN model.
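The four-step decomposition (a linear seasonal fit, then a model of the nonlinear residuals) can be sketched as follows. A seasonal climatology stands in for the SARIMA fit and an AR(1) coefficient stands in for the evolved GEP expression, so this illustrates only the hybrid workflow, not the paper's actual models; the inflow series is synthetic:

```python
import math
import random

def seasonal_means(series, period=12):
    """Stages 1-2: a seasonal climatology stands in for the SARIMA linear fit."""
    sums, counts = [0.0] * period, [0] * period
    for i, v in enumerate(series):
        sums[i % period] += v
        counts[i % period] += 1
    return [s / c for s, c in zip(sums, counts)]

def ar1_coeff(residuals):
    """Stages 3-4: an AR(1) fit on the residuals stands in for the GEP model
    (GEP would evolve a nonlinear expression; this keeps the sketch tiny)."""
    num = sum(residuals[i - 1] * residuals[i] for i in range(1, len(residuals)))
    den = sum(r * r for r in residuals[:-1])
    return num / den if den else 0.0

def hybrid_forecast(series, period=12):
    """Linear stage plus residual stage, mirroring the SARIMA-GEP workflow."""
    means = seasonal_means(series, period)
    resid = [v - means[i % period] for i, v in enumerate(series)]
    return means[len(series) % period] + ar1_coeff(resid) * resid[-1]

# Synthetic monthly inflow: seasonal cycle plus a persistent AR(1) anomaly
rng = random.Random(0)
e, inflow = 0.0, []
for m in range(120):
    e = 0.8 * e + rng.gauss(0.0, 1.0)
    inflow.append(10.0 + 5.0 * math.sin(2.0 * math.pi * m / 12.0) + e)

print(round(hybrid_forecast(inflow), 3))
```

The residual stage only helps when the linear stage leaves structure behind, which is why the paper tests the residuals for non-linearity before fitting GEP.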

  13. Monthly reservoir inflow forecasting using a new hybrid SARIMA genetic programming approach

    Indian Academy of Sciences (India)

    Hamid Moeeni; Hossein Bonakdari; Isa Ebtehaj

    2017-03-01

    Forecasting reservoir inflow is one of the most important components of water resources and hydroelectric systems operation management. Seasonal autoregressive integrated moving average (SARIMA) models have been frequently used for predicting river flow. SARIMA models are linear and do not consider the random component of statistical data. To overcome this shortcoming, monthly inflow is predicted in this study based on a combination of seasonal autoregressive integrated moving average (SARIMA) and gene expression programming (GEP) models, which is a new hybrid method (SARIMA–GEP). To this end, a four-step process is employed. First, the monthly inflow datasets are pre-processed. Second, the datasets are modelled linearly with SARIMA and in the third stage, the non-linearity of residual series caused by linear modelling is evaluated. After confirming the non-linearity, the residuals are modelled in the fourth step using a gene expression programming (GEP) method. The proposed hybrid model is employed to predict the monthly inflow to the Jamishan Dam in west Iran. Thirty years’ worth of site measurements of monthly reservoir dam inflow with extreme seasonal variations are used. The results of this hybrid model (SARIMA–GEP) are compared with SARIMA, GEP, artificial neural network (ANN) and SARIMA–ANN models. The results indicate that the SARIMA–GEP model (R2 = 78.8, VAF = 78.8, RMSE = 0.89, MAPE = 43.4, CRM = 0.053) outperforms SARIMA and GEP, and SARIMA–ANN (R2 = 68.3, VAF = 66.4, RMSE = 1.12, MAPE = 56.6, CRM = 0.032) displays better performance than the SARIMA and ANN models. A comparison of the two hybrid models indicates the superiority of SARIMA–GEP over the SARIMA–ANN model.

  14. Computer programs for forward and inverse modeling of acoustic and electromagnetic data

    Science.gov (United States)

    Ellefsen, Karl J.

    2011-01-01

    A suite of computer programs was developed by U.S. Geological Survey personnel for forward and inverse modeling of acoustic and electromagnetic data. This report describes the computer resources that are needed to execute the programs, the installation of the programs, the program designs, some tests of their accuracy, and some suggested improvements.

  15. Promoting Active Learning: The Use of Computational Software Programs

    Science.gov (United States)

    Dickinson, Tom

    The increased emphasis on active learning in essentially all disciplines is proving beneficial in terms of a student's depth of learning, retention, and completion of challenging courses. Formats labeled flipped, hybrid and blended facilitate face-to-face active learning. To be effective, students need to absorb a significant fraction of the course material prior to class, e.g., using online lectures and reading assignments. Getting students to assimilate and at least partially understand this material prior to class can be extremely difficult. As an aid to achieving this preparation as well as enhancing depth of understanding, we find the use of software programs such as Mathematica® or MATLAB® very helpful. We have written several Mathematica® applications and student exercises for use in a blended-format two-semester E&M course. Formats include tutorials, simulations, graded and non-graded quizzes, walk-through problems, exploration and interpretation exercises, and numerical solutions of complex problems. A good portion of this activity involves student-written code. We will discuss the efficacy of these applications, their role in promoting active learning, and the range of possible uses of this basic scheme in other classes.

  16. Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport

    Science.gov (United States)

    Dureau, David; Poëtte, Gaël

    2014-06-01

    This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, thanks to both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousand cores on the petascale supercomputer Tera100.
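The Domain Replication strategy can be sketched in miniature: every worker holds a full copy of the (here, trivial) geometry and tallies an independent batch of histories, and only the tallies are combined. Threads stand in for the MPI ranks or OpenMP threads of the paper's hybrid models, and the absorbing-slab problem is invented so the result can be checked against the analytic transmission exp(−Σₜ·T):

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

SIGMA_T = 0.5   # hypothetical total macroscopic cross-section (1/cm)
THICK = 4.0     # slab thickness (cm)

def replica(seed, histories):
    """One Domain Replication worker: it owns the whole slab and tallies
    its own batch of neutron histories (purely absorbing medium, so the
    transmitted fraction should approach exp(-SIGMA_T * THICK))."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(histories):
        # free-flight distance to first collision ~ Exp(SIGMA_T)
        if -math.log(1.0 - rng.random()) / SIGMA_T > THICK:
            transmitted += 1
    return transmitted

def run(n_workers=4, histories=50_000):
    """Shared-memory level of a hybrid scheme, sketched with threads;
    a real code would layer MPI ranks on top for distributed memory."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(replica, s, histories) for s in range(n_workers)]
        total = sum(f.result() for f in futures)
    return total / (n_workers * histories)

estimate = run()
print(estimate, math.exp(-SIGMA_T * THICK))
```

Replication scales trivially because histories are independent, but every worker must hold the full AMR mesh in memory, which is exactly the trade-off against Domain Decomposition that the paper analyzes.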

  17. Finite difference programs and array processors. [High-speed floating point processing by coupling host computer to programable array processor

    Energy Technology Data Exchange (ETDEWEB)

    Rudy, T.E.

    1977-08-01

    An alternative to maxi computers for high-speed floating-point processing capabilities is the coupling of a host computer to a programable array processor. This paper compares the performance of two finite difference programs on various computers and their expected performance on commercially available array processors. The significance of balancing array processor computation, host-array processor control traffic, and data transfer operations is emphasized. 3 figures, 1 table.

  18. Threshold evaluation data revision and computer program enhancement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-02-27

    The Threshold Evaluation System was developed to assist the Division of Buildings and Community Systems of the Department of Energy in performing preliminary evaluation of projects being considered for funding. In addition, the evaluation has been applied to on-going projects, because information obtained through RD and D may alter the expected benefits and costs of a project, making it necessary to reevaluate project funding. The system evaluates each project according to its expected energy savings and costs. A number of public and private sector criteria are calculated, upon which comparisons between projects may be based. A summary of the methodology is given in Appendix B. The purpose of this task is to upgrade both the quality of the data used for input to the system and the usefulness and efficiency of the computer program used to perform the analysis. The modifications required to produce a better, more consistent set of data are described in Section 2. Program changes that have had a significant impact on the methodology are discussed in Section 3, while those that affected only the computer code are presented as a system flow diagram and program listing in Appendix C. These improvements in the project evaluation methodology and data will provide BCS with a more efficient and comprehensive management tool. The direction of future work will be toward integrating this system with a large scale (at ORNL) so that information used by both systems may be stored in a common data base. A discussion of this, and other unresolved problems is given in Section 4.

  19. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.

  20. Computer program for distance learning of pesticide application technology.

    Science.gov (United States)

    Maia, Bruno; Cunha, Joao P A R

    2011-12-01

    Distance learning presents great potential for mitigating field problems on pesticide application technology. Thus, due to the lack of teaching material about pesticide spraying technology in the Portuguese language and the increasing availability of distance learning, this study developed and evaluated a computer program for distance learning about the theory of pesticide spraying technology using the tools of information technology. The modules comprising the course, named Pulverizar, were: (1) Basic concepts, (2) Factors that affect application, (3) Equipment, (4) Spraying nozzles, (5) Sprayer calibration, (6) Aerial application, (7) Chemigation, (8) Physical-chemical properties, (9) Formulations, (10) Adjuvants, (11) Water quality, and (12) Adequate use of pesticides. The program was made available to the public on July 1st, 2008, hosted at the web site www.pulverizar.iciag.ufu.br, and proved simple, robust, and practical as a complement to traditional teaching for the education of professionals in Agricultural Sciences. Mastering pesticide spraying technology by people involved in agricultural production can be facilitated by the program Pulverizar, which was well accepted in its initial evaluation.

  1. Probabilistic Planning with Imperfect Sensing Actions Using Hybrid Probabilistic Logic Programs

    Science.gov (United States)

    Saad, Emad

    Effective planning in uncertain environments is important to agents and multi-agent systems. In this paper, we introduce a new logic-based approach to probabilistic contingent planning (probabilistic planning with imperfect sensing actions), by relating probabilistic contingent planning to normal hybrid probabilistic logic programs with probabilistic answer set semantics [24]. We show that any probabilistic contingent planning problem can be encoded as a normal hybrid probabilistic logic program. We formally prove the correctness of our approach. Moreover, we show that the complexity of finding a probabilistic contingent plan in our approach is NP-complete. In addition, we show that any probabilistic contingent planning problem, PP, can be encoded as a classical normal logic program with answer set semantics, whose answer sets correspond to valid trajectories in PP. We show that probabilistic contingent planning problems can be encoded as SAT problems. We present a new high-level probabilistic action description language that allows the representation of sensing actions with probabilistic outcomes.
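As a toy illustration of the arithmetic underlying such planners (not the paper's hybrid probabilistic logic semantics), a belief state can be propagated through actions with probabilistic outcomes and scored against a goal. The domain and names below are invented:

```python
def apply_action(belief, action):
    """Propagate a belief state (dict: state -> probability) through a
    probabilistic action (function: state -> dict of successor probabilities)."""
    new = {}
    for state, p in belief.items():
        for nxt, q in action(state).items():
            new[nxt] = new.get(nxt, 0.0) + p * q
    return new

def success_probability(belief, goal):
    """Probability mass of states satisfying the goal predicate."""
    return sum(p for s, p in belief.items() if goal(s))

# Hypothetical domain: a 'grasp' action succeeds with probability 0.8.
grasp = lambda s: {"holding": 0.8, "dropped": 0.2} if s == "ready" else {s: 1.0}
belief = apply_action({"ready": 1.0}, grasp)
```

A contingent plan would branch on the (imperfect) observation after each action, splitting the belief state before the next update.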

  2. Electric and hybrid vehicle site operators program: Thinking of the future

    Science.gov (United States)

    Kansas State University, with support from federal and state agencies and public and private companies, is participating in the Department of Energy's Electric Vehicle Site Operator Program. Through participation in this program, Kansas State is displaying, testing, and evaluating electric or hybrid vehicle technology. This participation will provide organizations the opportunity to examine the latest EHV prototypes under actual operating conditions. KSU proposes to purchase one electric or hybrid van and two electric cars during the first two years of this five-year program. KSU has purchased one G-Van built by Conceptor Industries, Toronto, Canada and has initiated a procurement order to purchase two Soleq 1993 Ford EVcort station wagons. The G-Van carries signage so that the public is aware that it is an electric-drive vehicle. Financial participants' names have been stenciled on the back door of the van. This vehicle is available for short-term loan to interested utilities and companies. When other vehicles are obtained, the G-Van will be maintained on K-State's campus.

  3. Second Annual AEC Scientific Computer Information Exhange Meeting. Proceedings of the technical program theme: computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Peskin,A.M.; Shimamoto, Y.

    1974-01-01

    The topic of computer graphics serves well to illustrate that AEC affiliated scientific computing installations are well represented in the forefront of computing science activities. The participant response to the technical program was overwhelming--both in number of contributions and quality of the work described. Session I, entitled Advanced Systems, contains presentations describing systems that contain features not generally found in graphics facilities. These features can be roughly classified as extensions of standard two-dimensional monochromatic imaging to higher dimensions including color and time as well as multidimensional metrics. Session II presents seven diverse applications ranging from high energy physics to medicine. Session III describes a number of important developments in establishing facilities, techniques and enhancements in the computer graphics area. Although an attempt was made to schedule as many of these worthwhile presentations as possible, it appeared impossible to do so given the scheduling constraints of the meeting. A number of prospective presenters 'came to the rescue' by graciously withdrawing from the sessions. Some of their abstracts have been included in the Proceedings.

  4. Potential of Hybrid Computational Phantoms for Retrospective Heart Dosimetry After Breast Radiation Therapy: A Feasibility Study

    Energy Technology Data Exchange (ETDEWEB)

    Moignier, Alexandra, E-mail: alexandra.moignier@irsn.fr [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Derreumaux, Sylvie; Broggio, David; Beurrier, Julien [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Chea, Michel; Boisserie, Gilbert [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France); Franck, Didier; Aubert, Bernard [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Mazeron, Jean-Jacques [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France)

    2013-02-01

    Purpose: Current retrospective cardiovascular dosimetry studies are based on a representative patient or simple mathematic phantoms. Here, a process of patient modeling was developed to personalize the anatomy of the thorax and to include a heart model with coronary arteries. Methods and Materials: The patient models were hybrid computational phantoms (HCPs) with an inserted detailed heart model. A computed tomography (CT) acquisition (pseudo-CT) was derived from the HCP and imported into a treatment planning system where treatment conditions were reproduced. Six current patients were selected: 3 were modeled from their CT images (A patients) and the others were modeled from 2 orthogonal radiographs (B patients). The method's performance and limitations were investigated by quantitative comparison between the initial CT and the pseudo-CT; namely, the morphology and the dose calculation were compared. For the B patients, a comparison with 2 kinds of representative patients was also conducted. Finally, dose assessment was focused on the whole coronary artery tree and the left anterior descending coronary. Results: When 3-dimensional anatomic information was available, the dose calculations performed on the initial CT and the pseudo-CT were in good agreement. For the B patients, comparison of doses derived from HCPs and representative patients showed that the HCP doses were either better or equivalent. In the left breast radiation therapy context and for the studied cases, coronary mean doses were at least 5-fold higher than heart mean doses. Conclusions: For retrospective dose studies, it is suggested that the HCP offers a better surrogate, in terms of dose accuracy, than representative patients. The use of a detailed heart model eliminates the problem of identifying the coronaries on the patient's CT.

  5. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    Science.gov (United States)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

    Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which cause great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, Kriging was used to interpolate the predictions of the adjusted SOM into finer grids of locations, and consequently the prediction of a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the case study, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations from 2000 to 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
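The study's exact SOM-NARX configuration is not given in the abstract, but the SOM stage of such a hybrid, clustering high-dimensional patterns onto a small grid before a temporal model adjusts them, can be sketched in plain NumPy. Grid size, learning rate, and decay schedule below are illustrative choices, not the paper's settings:

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map: each grid node holds a weight vector that
    is pulled toward samples for which it (or a near neighbor) is the winner."""
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    w = rng.normal(size=(n_nodes, data.shape[1]))
    # node coordinates on the 2-D grid, used for neighborhood distances
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))     # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))              # neighborhood kernel
            w += lr * h[:, None] * (x - w)
    return w

def classify(data, w):
    """Assign each sample to its best-matching unit (its cluster label)."""
    return np.array([np.argmin(((w - x) ** 2).sum(axis=1)) for x in data])
```

`classify` returns the winning node per sample; in a hybrid like the paper's, a downstream temporal model (NARX in their case) would then adjust the per-cluster predictions.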

  6. Early experiences of computer-aided assessment and administration when teaching computer programming

    Directory of Open Access Journals (Sweden)

    Abdullah Mohd Zin

    1993-12-01

    Full Text Available This paper describes early experiences with the Ceilidh system currently being piloted at over 30 institutions of higher education. Ceilidh is a course-management system for teaching computer programming whose core is an auto-assessment facility. This facility automatically marks students' programs from a range of perspectives, and may be used in an iterative manner, enabling students to work towards a target level of attainment. Ceilidh also includes extensive course-administration and progress-monitoring facilities, as well as support for other forms of assessment including short-answer marking and the collation of essays for later hand-marking. The paper discusses the motivation for developing Ceilidh, outlines its major facilities, then summarizes experiences of developing and actually using it at the coal-face over three years of teaching.

  7. Electric and Hybrid Vehicle Program, Site Operator Program. Quarterly progress report, January--March 1996

    Energy Technology Data Exchange (ETDEWEB)

    Francfort, J.E. [Lockheed Martin Idaho Technologies Co., Idaho Falls, ID (United States); Bassett, R.R. [Sandia National Labs., Albuquerque, NM (United States); Briasco, S. [Los Angeles City Dept. of Water and Power, CA (United States)] [and others

    1996-08-01

    Goals of the site operator program include field evaluation of electric vehicles (EVs) in real-world applications and environments, advancement of electric vehicle technologies, development of infrastructure elements necessary to support significant EV use, and increasing the awareness and acceptance of EVs by the public. The site operator program currently consists of 11 participants under contract and two other organizations with data-sharing agreements with the program. The participants (electric utilities, academic institutions, Federal agencies) are geographically dispersed within the US and their vehicles see a broad spectrum of service conditions. The current EV inventory of the site operators exceeds 250 vehicles. Several national organizations have joined DOE to further the introduction and awareness of EVs, including: (1) EVAmerica (a utility program) and DOE conduct performance and evaluation tests to support market development for EVs; (2) DOE, DOT, the Electric Transportation Coalition, and the Electric Vehicle Association of the Americas are conducting a series of workshops to encourage urban groups in Clean Cities (a DOE program) to initiate the policies and infrastructure development necessary to support large-scale demonstrations, and ultimately the mass-market use, of EVs. The current focus of the program is collection and dissemination of EV operations and performance data to aid in the evaluation of real-world EV use. This report contains several sections with vehicle evaluation as a focus: EV testing results, energy economics of EVs, and site operator activities.

  8. Computing membrane-AQP5-phosphatidylserine binding affinities with hybrid steered molecular dynamics approach.

    Science.gov (United States)

    Chen, Liao Y

    2015-01-01

    In order to elucidate how phosphatidylserine (PS6) interacts with AQP5 in a cell membrane, we developed a hybrid steered molecular dynamics (hSMD) method that involved: (1) simultaneously steering two centers of mass of two selected segments of the ligand, and (2) equilibrating the ligand-protein complex with and without biasing the system. To validate hSMD, we first studied vascular endothelial growth factor receptor 1 (VEGFR1) in complex with N-(4-Chlorophenyl)-2-((pyridin-4-ylmethyl)amino)benzamide (8ST), for which the binding energy is known from in vitro experiments. In this study, our computed binding energy agreed well with the experimental value. Knowing the accuracy of this hSMD method, we applied it to the AQP5-lipid-bilayer system to answer an outstanding question relevant to AQP5's physiological function: Will PS6, a lipid having a single long hydrocarbon tail that was found in the central pore of the AQP5 tetramer crystal, actually bind to and inhibit AQP5's central pore under near-physiological conditions, namely, when the AQP5 tetramer is embedded in a lipid bilayer? We found, in silico, using the CHARMM 36 force field, that binding PS6 to AQP5 was a factor of 3 million weaker than "binding" it in the lipid bilayer. This suggests that AQP5's central pore will not be inhibited by PS6 or a similar lipid in a physiological environment.
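The hSMD protocol itself requires an MD engine, but the reduction step common to steered-MD studies, turning an ensemble of pulling works into a free-energy estimate via the Jarzynski equality ΔF = -kT ln⟨exp(-W/kT)⟩, fits in a short sketch. This is a generic estimator with illustrative units, not the paper's exact workflow:

```python
import numpy as np

KB_KCAL = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def jarzynski_free_energy(works, temperature=300.0):
    """Estimate a free-energy difference from nonequilibrium pulling works
    via the Jarzynski equality: dF = -kT * ln <exp(-W/kT)>.

    Subtracting min(W) before exponentiating keeps the exponentials in a
    numerically safe range for large work values."""
    beta = 1.0 / (KB_KCAL * temperature)
    works = np.asarray(works, dtype=float)
    w0 = works.min()
    return w0 - np.log(np.mean(np.exp(-beta * (works - w0)))) / beta
```

By Jensen's inequality the estimate never exceeds the mean work, which is why the exponential average, rather than the plain average, recovers the equilibrium free energy.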

  9. Tools for Brain-Computer Interaction: a general concept for a hybrid BCI (hBCI

    Directory of Open Access Journals (Sweden)

    Gernot R. Mueller-Putz

    2011-11-01

    Full Text Available The aim of this work is to present the development of a hybrid Brain-Computer Interface (hBCI) which combines existing input devices with a BCI. Thereby, the BCI should be available if the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use the BCI at all; the BCI is active in the background. The hBCI might decide on the one hand which input channel(s) offer the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, or other factors, or on the other hand fuse various input channels. One major goal therefore is to bring the BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only possible if many different subsystems in the hBCI can work together. Since one research institute alone cannot provide such different functionality, collaboration between institutes is necessary. To allow for such a collaboration, a common software framework was investigated.

  10. sBCI-Headset—Wearable and Modular Device for Hybrid Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Tatsiana Malechka

    2015-02-01

    Full Text Available Severely disabled people, like completely paralyzed persons either with tetraplegia or similar disabilities who cannot use their arms and hands, are often considered as a user group of Brain Computer Interfaces (BCI). In order to achieve high acceptance of the BCI by this user group and their supporters, the BCI system has to be integrated into their support infrastructure. Critical disadvantages of a BCI are the time-consuming preparation of the user for the electroencephalography (EEG) measurements and the low information transfer rate of EEG-based BCI. These disadvantages become apparent if a BCI is used to control complex devices. In this paper, a hybrid BCI is described that enables research for a Human Machine Interface (HMI) that is optimally adapted to requirements of the user and the tasks to be carried out. The solution is based on the integration of a Steady-state visual evoked potential (SSVEP-BCI), an Event-related (de-)synchronization (ERD/ERS-BCI), an eye tracker, an environmental observation camera, and a new EEG head cap for wearing comfort and easy preparation. The design of the new fast multimodal BCI (called sBCI) system is described and first test results, obtained in experiments with six healthy subjects, are presented. The sBCI concept may also become useful for healthy people in cases where a “hands-free” handling of devices is necessary.

  11. Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Choonsik [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Lodwick, Daniel [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Hasenauer, Deanna [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Williams, Jonathan L [Department of Radiology, University of Florida, Gainesville, FL 32611 (United States); Lee, Choonik [MD Anderson Cancer Center-Orlando, Orlando, FL 32806 (United States); Bolch, Wesley E [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States)

    2007-07-21

    Anthropomorphic computational phantoms are computer models of the human body for use in the evaluation of dose distributions resulting from either internal or external radiation sources. Currently, two classes of computational phantoms have been developed and widely utilized for organ dose assessment: (1) stylized phantoms and (2) voxel phantoms which describe the human anatomy via mathematical surface equations or 3D voxel matrices, respectively. Although stylized phantoms based on mathematical equations can be very flexible in regard to making changes in organ position and geometrical shape, they are limited in their ability to fully capture the anatomic complexities of human internal anatomy. In turn, voxel phantoms have been developed through image-based segmentation and correspondingly provide much better anatomical realism in comparison to simpler stylized phantoms. However, they themselves are limited in defining organs presented in low contrast within either magnetic resonance or computed tomography images, the two major sources in voxel phantom construction. By definition, voxel phantoms are typically constructed via segmentation of transaxial images, and thus while fine anatomic features are seen in this viewing plane, slice-to-slice discontinuities become apparent in viewing the anatomy of voxel phantoms in the sagittal or coronal planes. This study introduces the concept of a hybrid computational newborn phantom that takes full advantage of the best features of both its stylized and voxel counterparts: flexibility in phantom alterations and anatomic realism. Non-uniform rational B-spline (NURBS) surfaces, a mathematical modeling tool traditionally applied to graphical animation studies, was adopted to replace the limited mathematical surface equations of stylized phantoms. A previously developed whole-body voxel phantom of the newborn female was utilized as a realistic anatomical framework for hybrid phantom construction. The construction of a hybrid
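The key ingredient above, NURBS surfaces in place of the stylized phantoms' closed-form equations, rests on B-spline mathematics that is compact enough to sketch. The curve evaluator below uses the Cox-de Boor recursion; it illustrates the underlying math only, not the phantom-construction pipeline (which works with surfaces in dedicated modeling tools):

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, degree, knots):
    """Evaluate one point on a NURBS curve: a weight-rational blend of
    control points. Surfaces extend this with a second parameter."""
    basis = np.array([bspline_basis(i, degree, u, knots) for i in range(len(ctrl))])
    wb = basis * weights
    return (wb[:, None] * np.asarray(ctrl, float)).sum(axis=0) / wb.sum()
```

With all weights equal the curve reduces to an ordinary B-spline; raising one weight pulls the curve toward that control point, which is exactly the extra shape flexibility NURBS adds for anatomical modeling.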

  12. Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models

    Science.gov (United States)

    Lee, Choonsik; Lodwick, Daniel; Hasenauer, Deanna; Williams, Jonathan L.; Lee, Choonik; Bolch, Wesley E.

    2007-07-01

    Anthropomorphic computational phantoms are computer models of the human body for use in the evaluation of dose distributions resulting from either internal or external radiation sources. Currently, two classes of computational phantoms have been developed and widely utilized for organ dose assessment: (1) stylized phantoms and (2) voxel phantoms which describe the human anatomy via mathematical surface equations or 3D voxel matrices, respectively. Although stylized phantoms based on mathematical equations can be very flexible in regard to making changes in organ position and geometrical shape, they are limited in their ability to fully capture the anatomic complexities of human internal anatomy. In turn, voxel phantoms have been developed through image-based segmentation and correspondingly provide much better anatomical realism in comparison to simpler stylized phantoms. However, they themselves are limited in defining organs presented in low contrast within either magnetic resonance or computed tomography images—the two major sources in voxel phantom construction. By definition, voxel phantoms are typically constructed via segmentation of transaxial images, and thus while fine anatomic features are seen in this viewing plane, slice-to-slice discontinuities become apparent in viewing the anatomy of voxel phantoms in the sagittal or coronal planes. This study introduces the concept of a hybrid computational newborn phantom that takes full advantage of the best features of both its stylized and voxel counterparts: flexibility in phantom alterations and anatomic realism. Non-uniform rational B-spline (NURBS) surfaces, a mathematical modeling tool traditionally applied to graphical animation studies, was adopted to replace the limited mathematical surface equations of stylized phantoms. A previously developed whole-body voxel phantom of the newborn female was utilized as a realistic anatomical framework for hybrid phantom construction. The construction of a hybrid

  13. Computer program for the automated attendance accounting system

    Science.gov (United States)

    Poulson, P.; Rasmusson, C.

    1971-01-01

    The automated attendance accounting system (AAAS) was developed under the auspices of the Space Technology Applications Program. The task is basically the adaptation of a small digital computer, coupled with specially developed pushbutton terminals located in school classrooms and offices for the purpose of taking daily attendance, maintaining complete attendance records, and producing partial and summary reports. Especially developed for high schools, the system is intended to relieve both teachers and office personnel from the time-consuming and dreary task of recording and analyzing the myriad classroom attendance data collected throughout the semester. In addition, since many school district budgets are related to student attendance, the increase in accounting accuracy is expected to augment district income. A major component of this system is the real-time AAAS software system, which is described.

  14. Kinoscope: An Open-Source Computer Program for Behavioral Pharmacologists

    Directory of Open Access Journals (Sweden)

    Nikolaos Kokras

    2017-05-01

    Full Text Available Behavioral analysis in preclinical neuropsychopharmacology relies on the accurate measurement of animal behavior. Several excellent solutions for computer-assisted behavioral analysis are available for specialized behavioral laboratories wishing to invest significant resources. Herein, we present a straightforward open-source software solution aimed at rapid and easy introduction into an experimental workflow, and at training staff members in better and more reproducible manual scoring of behavioral experiments with the use of visual aid maps. Currently the program readily supports the Forced Swim Test, Novel Object Recognition test and the Elevated Plus Maze test, but with minor modifications can be used for scoring virtually any behavioral test. Additional modules, with predefined templates and scoring parameters, are continuously added. Importantly, the prominent use of visual maps has been shown to improve, in a student-engaging manner, the training and auditing of scoring in behavioral rodent experiments.

  15. Kinoscope: An Open-Source Computer Program for Behavioral Pharmacologists

    Science.gov (United States)

    Kokras, Nikolaos; Baltas, Dimitrios; Theocharis, Foivos; Dalla, Christina

    2017-01-01

    Behavioral analysis in preclinical neuropsychopharmacology relies on the accurate measurement of animal behavior. Several excellent solutions for computer-assisted behavioral analysis are available for specialized behavioral laboratories wishing to invest significant resources. Herein, we present a straightforward open-source software solution aimed at rapid and easy introduction into an experimental workflow, and at training staff members in better and more reproducible manual scoring of behavioral experiments with the use of visual aid maps. Currently the program readily supports the Forced Swim Test, Novel Object Recognition test and the Elevated Plus Maze test, but with minor modifications can be used for scoring virtually any behavioral test. Additional modules, with predefined templates and scoring parameters, are continuously added. Importantly, the prominent use of visual maps has been shown to improve, in a student-engaging manner, the training and auditing of scoring in behavioral rodent experiments. PMID:28553211

  16. Interactive, Computer-Based Training Program for Radiological Workers

    Energy Technology Data Exchange (ETDEWEB)

    Trinoskey, P.A.; Camacho, P.I.; Wells, L.

    2000-01-18

    Lawrence Livermore National Laboratory (LLNL) is redesigning its Computer-Based Training (CBT) program for radiological workers. The redesign represents a major effort to produce a single, highly interactive and flexible CBT program that will meet the training needs of a wide range of radiological workers--from researchers and x-ray operators to individuals working in tritium, uranium, plutonium, and accelerator facilities. The new CBT program addresses the broad diversity of backgrounds found at a national laboratory. When a training audience is homogeneous in terms of education level and type of work performed, it is difficult to duplicate the effectiveness of a flexible, technically competent instructor who can tailor a course to the express needs and concerns of a course's participants. Unfortunately, such homogeneity is rare. At LLNL, they have a diverse workforce engaged in a wide range of radiological activities, from the fairly common to the quite exotic. As a result, the Laboratory must offer a wide variety of radiological worker courses. These include a general contamination-control course in addition to radioactive-material-handling courses for both low-level laboratory (i.e., bench-top) activities as well as high-level work in tritium, uranium, and plutonium facilities. They also offer training courses for employees who work with radiation-generating devices--x-ray, accelerator, and E-beam operators, for instance. However, even with the number and variety of courses the Laboratory offers, they are constrained by the diversity of backgrounds (i.e., knowledge and experience) of those to be trained. Moreover, time constraints often preclude in-depth coverage of site- and/or task-specific details. In response to this situation, several years ago LLNL began moving toward computer-based training for radiological workers. Today, that CBT effort includes a general radiological safety course developed by the Department of Energy's Hanford facility and

  17. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    Science.gov (United States)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used for a lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. 
DECstation, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation.

  18. Comparison of some results of program SHOW with other solar hot water computer programs

    Science.gov (United States)

    Young, M. F.; Baughn, J. W.

    Subroutines and the driver program for the simulation code SHOW (solar hot water) for solar thermosiphon systems are discussed, and simulations are compared with predictions by the F-CHART and TRNSYS codes. SHOW has the driver program MAIN, which defines the system control logic for choosing the appropriate system subroutine for analysis. Ten subroutines are described, which account for the solar system physical parameters, the weather data, the manufacturer-supplied system specifications, mass flow rates, pumped systems, total transformed radiation, load use profiles, stratification in storage, an electric water heater, and economic analyses. The three programs are employed to analyze a thermosiphon installation in Sacramento with two storage tanks. TRNSYS and SHOW agreed with each other and predicted lower annual values than F-CHART, although significantly more computer time was necessary to make TRNSYS converge.

  19. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VM's) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  20. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  1. A Hybrid Dynamic Programming Method for Concave Resource Allocation Problems

    Institute of Scientific and Technical Information of China (English)

    姜计荣; 孙小玲

    2005-01-01

    The concave resource allocation problem is an integer programming problem of minimizing a nonincreasing concave function subject to a convex nondecreasing constraint and bounded integer variables. This class of problems is encountered in optimization models involving economies of scale. In this paper, a new hybrid dynamic programming method was proposed for solving concave resource allocation problems. A convex underestimating function was used to approximate the objective function, and the resulting convex subproblem was solved with a dynamic programming technique after transforming it into a 0-1 linear knapsack problem. To ensure convergence, monotonicity and domain cut techniques were employed to remove certain integer boxes and partition the revised domain into a union of integer boxes. Computational results were given to show the efficiency of the algorithm.
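    The dynamic-programming step of the method above reduces the convex subproblem to a 0-1 linear knapsack problem. A minimal sketch of that final step (the function name and the linearized inputs are illustrative assumptions, not from the paper):

```python
def solve_01_knapsack(values, weights, capacity):
    """Classic DP for the 0-1 knapsack: maximize sum(values[i]*x[i])
    subject to sum(weights[i]*x[i]) <= capacity with binary x[i].
    Inputs are assumed to come from linearizing the convex subproblem."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

optimum = solve_01_knapsack([60, 100, 120], [10, 20, 30], 50)  # → 220
```

The full hybrid method would wrap this in the underestimation/domain-cut loop described in the abstract.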

  2. TRECII: a computer program for transportation risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Franklin, A.L.

    1980-05-01

    A risk-based fault tree analysis method has been developed at the Pacific Northwest Laboratory (PNL) for analysis of nuclear fuel cycle operations. This methodology was developed for the Department of Energy (DOE) as a risk analysis tool for evaluating high level waste management systems. A computer package consisting of three programs was written at that time to assist in the performance of risk assessment: ACORN (draws fault trees), MFAULT (analyzes fault trees), and RAFT (calculates risk). This methodology evaluates release consequences and estimates the frequency of occurrence of these consequences. This document describes an additional risk calculating code which can be used in conjunction with two of the three codes for transportation risk assessment. TRECII modifies the definition of risk used in RAFT (probability × release) to accommodate release consequences in terms of fatalities. Throughout this report, risk shall be defined as probability times consequences (fatalities are one possible health effect consequence). This methodology has been applied to a variety of energy material transportation systems. Typically the material shipped has been radioactive, although some adaptation to fossil fuels has occurred. The approach is normally applied to truck or train transport systems, with some adaptation to pipelines and aircraft. TRECII is designed to be used primarily in conjunction with MFAULT; however, with a moderate amount of effort by the user, it can be implemented independent of the risk analysis package developed at PNL. Code description and user instructions necessary for the implementation of the TRECII program are provided.
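    The risk definition used by TRECII (probability times consequences, summed over release scenarios) can be illustrated with a toy calculation; the function and the scenario numbers below are hypothetical, not taken from the report:

```python
def route_risk(scenarios):
    """Risk of a shipment route, defined as in the report: the sum over
    release scenarios of (occurrence probability) * (expected fatalities).
    Each scenario is a (probability, fatalities) pair."""
    return sum(prob * fatalities for prob, fatalities in scenarios)

# hypothetical example: one rare severe release, one frequent minor one
risk = route_risk([(1e-6, 100), (1e-4, 1)])  # expected fatalities per shipment
```

Both scenarios contribute equally here (1e-4 each), which is why risk-based methods rank rare severe events alongside frequent minor ones.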

  3. Development of computer programs to determine the aerodynamic characteristics of complete light aircraft

    Science.gov (United States)

    Smetana, F. O.

    1974-01-01

    A computer program for determining the flight characteristics of light aircraft was developed. The parameters which were used in the computer program are defined. The accuracy of the system for various types of airfoils is analyzed and the airfoils for which the system does not provide adequate data are identified. The application of a computer program for predicting the fuselage characteristics is discussed. The assumptions and parameters of the fuselage characteristics program are explained. It is stated that the computer programs make it possible to determine the response of a light aircraft to a small disturbance given the geometric and inertial characteristics of the aircraft.

  4. 76 FR 56744 - Privacy Act of 1974; Notice of a Computer Matching Program

    Science.gov (United States)

    2011-09-14

    ... Agencies: Participants in this computer matching program are the Social Security Administration (SSA) and... pertaining to computer matching at 54 FR 25818, June 1989. The legal authority for this exchange is sections... of the Secretary Privacy Act of 1974; Notice of a Computer Matching Program AGENCY: Defense...

  5. A hybrid analytical network process and fuzzy goal programming for supplier selection: A case study of auto part maker

    OpenAIRE

    Hesam Zande Hesami; Mohammad Ali Afshari; Seyed Ali Ayazi; Javad Siahkali Moradi

    2011-01-01

    The aim of this research is to present a hybrid model to select auto part suppliers. The proposed method of this paper uses factor analysis to find the most influential factors on part maker selection, and the results are validated using different statistical tests such as Cronbach's alpha and the Kaiser-Meyer-Olkin measure. The hybrid model uses the analytical network process to rank different part maker suppliers and fuzzy goal programming to choose the appropriate alternative among various choices. The implementa...

  6. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    Science.gov (United States)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer, and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will

  7. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  8. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    Science.gov (United States)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow calculation of the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable displaying program data at run-time.
    New version program summary
    Titles of programs: GROWTHGr, GROWTH06
    Catalogue identifier: ADVL_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADVL
    Does the new version supersede the original program: No
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
    Programming language used: Object Pascal
    Memory required to execute with typical data: More than 1 MB
    Number of bits in a word: 64 bits
    Number of processors used: 1
    No. of lines in distributed program, including test data, etc.: 20 931
    Number of bytes in distributed program, including test data, etc.: 1 311 268
    Distribution format: tar.gz
    Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1
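    The transactional idea described in the abstract (grouping a sequence of actions on shared program state into one atomic unit) can be sketched in a few lines. This is a toy lock-based illustration of the concept, not the Object Pascal implementation used by GROWTH:

```python
import threading

class Transaction:
    """Toy transaction: groups a sequence of actions on shared state into
    one atomic unit guarded by a single lock (a sketch of the abstraction,
    not real software transactional memory)."""
    _lock = threading.Lock()

    def __init__(self, *actions):
        self.actions = actions

    def run(self, state):
        with Transaction._lock:  # all grouped actions commit together
            for action in self.actions:
                action(state)

# hypothetical shared simulation state: layer coverage and RHEED intensity
state = {"coverage": 0.0, "intensity": 1.0}

def deposit(s):
    s["coverage"] += 0.25

def rescale(s):
    s["intensity"] *= 0.5

threads = [threading.Thread(target=Transaction(deposit, rescale).run, args=(state,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each pair of updates runs inside one transaction, no thread can ever observe a coverage that has been updated without the matching intensity rescale.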

  9. Middle School Teachers' Perceptions of Computer-Assisted Reading Intervention Programs

    Science.gov (United States)

    Bippert, Kelli; Harmon, Janis

    2017-01-01

    Middle schools often turn to computer-assisted reading intervention programs to improve student reading. The questions guiding this study are (a) in what ways are computer-assisted reading intervention programs utilized, and (b) what are teachers' perceptions about these intervention programs? Nineteen secondary reading teachers were interviewed…

  10. Case Study: Creation of a Degree Program in Computer Security. White Paper.

    Science.gov (United States)

    Belon, Barbara; Wright, Marie

    This paper reports on research into the field of computer security, and undergraduate degrees offered in that field. Research described in the paper reveals only one computer security program at the associate's degree level in the entire country. That program, at Texas State Technical College in Waco, is a 71-credit-hour program leading to an…

  11. Hybrid and Electric Advanced Vehicle Systems Simulation

    Science.gov (United States)

    Beach, R. F.; Hammond, R. A.; Mcgehee, R. K.

    1985-01-01

    Predefined components are connected to represent a wide variety of propulsion systems. The Hybrid and Electric Advanced Vehicle System (HEAVY) computer program is a flexible tool for evaluating the performance and cost of electric and hybrid vehicle propulsion systems. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train.

  12. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    Energy Technology Data Exchange (ETDEWEB)

    William M. Tang

    2011-02-09

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP), a major national initiative in the United States whose primary objective is to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond, together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  13. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Soon Heum [Linkoeping University, Linkoeping (Sweden); Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel [Louisiana State University, Baton Rouge (United States); Jha, Shantenu [Rutgers University, Piscataway (United States)

    2014-01-15

    Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than the traditional continuum-based simulation techniques. It also involves less computational cost than the pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique in the perspective of solution accuracy and computational cost.
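    The replica-sampling idea is simple to demonstrate: averaging the same observable over several independent simulation replicas shrinks the standard error like 1/sqrt(R·n). The sketch below uses synthetic Gaussian "thermal noise" around a small mean velocity; all names and numbers are illustrative assumptions, not the paper's data:

```python
import random
import statistics

def replica_mean(samplers, n_samples):
    """Pool n_samples draws from each independent replica sampler and
    average: the noise floor drops as more replicas are combined."""
    samples = [sampler() for sampler in samplers for _ in range(n_samples)]
    return statistics.fmean(samples)

random.seed(0)
true_velocity = 0.02  # a deliberately small "moderate" velocity
# 8 hypothetical MD replicas: the signal is buried under sigma = 0.5 noise
samplers = [lambda: true_velocity + random.gauss(0, 0.5) for _ in range(8)]
estimate = replica_mean(samplers, 1000)
```

A single draw has a noise amplitude 25 times larger than the signal; pooling 8 replicas of 1000 samples brings the standard error down to about 0.006, so the mean velocity becomes resolvable. The paper's linear-regression variants pool temporal/spatial samples in an analogous way.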

  14. Design and performance evaluation of dynamic wavelength scheduled hybrid WDM/TDM PON for distributed computing applications.

    Science.gov (United States)

    Zhu, Min; Guo, Wei; Xiao, Shilin; Dong, Yi; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2009-01-19

    This paper investigates the design and implementation of distributed computing applications in a local area network. We propose a novel Dynamical Wavelength Scheduled Hybrid WDM/TDM Passive Optical Network, termed DWS-HPON. The system is implemented by using spectrum slicing techniques of a broadband light source and an overlay broadcast-signaling scheme. The Time-Wavelength Co-Allocation (TWCA) Problem is defined and an effective greedy approach to this problem is presented for aggregating large files in distributed computing applications. The simulations demonstrate that the performance is improved significantly compared with the conventional TDM-over-WDM PON.

  15. A Hybrid Approach for Scheduling and Replication based on Multi-criteria Decision Method in Grid Computing

    Directory of Open Access Journals (Sweden)

    Nadia Hadi

    2012-09-01

    Grid computing environments have emerged following the demand of scientists for very high computing power and storage capacity. One of the challenges in using these environments is performance. To improve performance, scheduling and replication techniques are used. In this paper we propose an approach to task scheduling combined with data replication decisions based on a multi-criteria principle. This improves performance by reducing the response time of tasks and the load of the system. This hybrid approach is based on a non-hierarchical model that allows scalability.
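    A multi-criteria decision for placing a task (or a replica) typically scores each candidate node by a weighted combination of the criteria the abstract names, response time and load. The sketch below shows a simple weighted-sum variant; the node attributes and weights are hypothetical, and the paper's actual decision method may differ:

```python
def best_node(nodes, weights):
    """Multi-criteria choice sketch: each node gets a weighted-sum score
    over normalized criteria (lower is better for both response time and
    load); the node with the lowest score wins."""
    def score(node):
        return sum(weights[criterion] * node[criterion] for criterion in weights)
    return min(nodes, key=score)

# hypothetical grid nodes with normalized criteria in [0, 1]
nodes = [
    {"name": "n1", "response_time": 0.2, "load": 0.9},
    {"name": "n2", "response_time": 0.5, "load": 0.1},
]
choice = best_node(nodes, {"response_time": 0.6, "load": 0.4})  # picks "n2"
```

Here n1 wins on response time alone, but once load is weighted in, n2's score (0.34) beats n1's (0.48), which is exactly the trade-off a combined scheduling/replication decision must make.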

  16. Hybrid MPI/OpenMP parallelization of the explicit Volterra integral equation solver for multi-core computer architectures

    KAUST Repository

    Al Jarro, Ahmed

    2011-08-01

    A hybrid MPI/OpenMP scheme for efficiently parallelizing the explicit marching-on-in-time (MOT)-based solution of the time-domain volume (Volterra) integral equation (TD-VIE) is presented. The proposed scheme equally distributes tested field values and operations pertinent to the computation of tested fields among the nodes using the MPI standard; while the source field values are stored in all nodes. Within each node, OpenMP standard is used to further accelerate the computation of the tested fields. Numerical results demonstrate that the proposed parallelization scheme scales well for problems involving three million or more spatial discretization elements. © 2011 IEEE.
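    The key bookkeeping in such a hybrid scheme is the two-level decomposition: tested-field values are split in contiguous blocks across MPI ranks, and each rank's block is split again across its OpenMP threads. The same block-partition arithmetic serves both levels; the sketch below illustrates it in plain Python (the real codes do this in C/MPI/OpenMP, so this is only an index-arithmetic illustration):

```python
def partition(n_items, n_parts, part):
    """Contiguous block decomposition of n_items work units into n_parts
    nearly equal parts; returns the index range owned by a given part.
    The first (n_items mod n_parts) parts get one extra item."""
    base, extra = divmod(n_items, n_parts)
    start = part * base + min(part, extra)
    stop = start + base + (1 if part < extra else 0)
    return range(start, stop)

# two-level split: 10 tested-field samples over 2 "MPI ranks",
# then rank 0's share over 2 "OpenMP threads"
rank_work = partition(10, 2, 0)                  # indices 0..4
thread_work = partition(len(rank_work), 2, 1)    # second thread's slice
```

Every index is owned by exactly one (rank, thread) pair, which mirrors how tested-field computations are distributed while source-field values are replicated on all nodes.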

  17. Hybrid OpenMP/MPI programs for solving the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap

    Science.gov (United States)

    Satarić, Bogdan; Slavnić, Vladimir; Belić, Aleksandar; Balaž, Antun; Muruganandam, Paulsamy; Adhikari, Sadhan K.

    2016-03-01

    We present hybrid OpenMP/MPI (Open Multi-Processing/Message Passing Interface) parallelized versions of earlier published C programs (Vudragović et al. 2012) for calculating both stationary and non-stationary solutions of the time-dependent Gross-Pitaevskii (GP) equation in three spatial dimensions. The GP equation describes the properties of dilute Bose-Einstein condensates at ultra-cold temperatures. Hybrid versions of the programs use the same algorithms as the C ones, involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method, but consider only a fully-anisotropic three-dimensional GP equation, where algorithmic complexity for large grid sizes necessitates parallelization in order to reduce execution time and/or memory requirements per node. Since a distributed memory approach is required to address the latter, we combine the MPI programming paradigm with existing OpenMP codes, thus creating fully flexible parallelism within a combined distributed/shared memory model, suitable for different modern computer architectures. The two presented C/OpenMP/MPI programs for real- and imaginary-time propagation are optimized and accompanied by a customizable makefile. We present typical scalability results for the provided OpenMP/MPI codes and demonstrate almost linear speedup until inter-process communication time starts to dominate over calculation time per iteration. Such a scalability study is necessary for large grid sizes in order to determine the optimal number of MPI nodes and OpenMP threads per node.
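    The numerical core, imaginary-time propagation with a split-step Crank-Nicolson method, can be demonstrated on a 1D toy problem. The sketch below (a pure-Python stand-in, not the 3D C/OpenMP/MPI codes) alternates a half-step with the potential/nonlinear term and a Crank-Nicolson step for the kinetic term; with interaction strength g = 0 and a harmonic trap it should relax to the ground state with energy 1/2 in oscillator units:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def gp_imag_time_1d(n=201, L=10.0, dt=1e-3, steps=2000, g=0.0):
    """Imaginary-time split-step Crank-Nicolson for a 1D GP-type equation
    with harmonic trap V = x^2/2; returns the energy of the relaxed state."""
    h = L / (n - 1)
    xs = [-L / 2 + i * h for i in range(n)]
    psi = [math.exp(-x * x) for x in xs]  # initial guess
    k = dt / (2 * h * h)                  # CN coefficient for -psi''/2
    for _ in range(steps):
        # half-step with trap potential (plus GP nonlinearity if g != 0)
        psi = [p * math.exp(-0.5 * dt * (0.5 * x * x + g * p * p))
               for p, x in zip(psi, xs)]
        # Crank-Nicolson step for the kinetic term (Dirichlet boundaries)
        rhs = [(1 - k) * psi[i]
               + 0.5 * k * (psi[i - 1] if i > 0 else 0.0)
               + 0.5 * k * (psi[i + 1] if i < n - 1 else 0.0)
               for i in range(n)]
        psi = thomas([-0.5 * k] * n, [1 + k] * n, [-0.5 * k] * n, rhs)
        # second potential half-step, then renormalize (imaginary time)
        psi = [p * math.exp(-0.5 * dt * (0.5 * x * x + g * p * p))
               for p, x in zip(psi, xs)]
        norm = math.sqrt(h * sum(p * p for p in psi))
        psi = [p / norm for p in psi]
    # energy <H> = integral of |psi'|^2/2 + V |psi|^2
    e = 0.0
    for i in range(1, n - 1):
        dpsi = (psi[i + 1] - psi[i - 1]) / (2 * h)
        e += h * (0.5 * dpsi * dpsi + 0.5 * xs[i] ** 2 * psi[i] ** 2)
    return e

energy = gp_imag_time_1d()  # should be close to 0.5
```

In the published 3D programs the same propagation is applied along each spatial direction in turn, and it is the per-direction Crank-Nicolson sweeps that are parallelized with OpenMP threads and distributed over MPI ranks.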

  18. A Hybrid Computational Model to Explore the Topological Characteristics of Epithelial Tissues.

    Science.gov (United States)

    González-Valverde, Ismael; García Aznar, José Manuel

    2017-03-01

    Epithelial tissues show a particular topology where cells resemble a polygon-like shape, but some biological processes can alter this tissue topology. During cell proliferation, mitotic cell dilation deforms the tissue and modifies the tissue topology. Additionally, cells are reorganized in the epithelial layer and these rearrangements also alter the polygon distribution. We present here a computer-based hybrid framework focused on the simulation of epithelial layer dynamics that combines discrete and continuum numerical models. In this framework, we consider topological and mechanical aspects of the epithelial tissue. Individual cells in the tissue are simulated by an off-lattice agent-based model, which keeps the information of each cell. In addition, we model the cell-cell interaction forces and the cell cycle. Separately, we simulate the passive mechanical behaviour of the cell monolayer using a material that approximates the mechanical properties of the cell. This continuum approach is solved by the finite element method, which uses a dynamic mesh generated by the triangulation of cell polygons. Forces generated by cell-cell interaction in the agent-based model are also applied on the finite element mesh. Cell movement in the agent-based model is driven by the displacements obtained from the deformed finite element mesh of the continuum mechanical approach. We successfully compare the results of our simulations with some experiments about the topology of proliferating epithelial tissues in Drosophila. Our framework is able to model the emergent behaviour of the cell monolayer that is due to local cell-cell interactions, which have a direct influence on the dynamics of the epithelial tissue.

  19. A hybrid three-class brain-computer interface system utilizing SSSEPs and transient ERPs

    Science.gov (United States)

    Breitwieser, Christian; Pokorny, Christoph; Müller-Putz, Gernot R.

    2016-12-01

    Objective. This paper investigates the fusion of steady-state somatosensory evoked potentials (SSSEPs) and transient event-related potentials (tERPs), evoked through tactile stimulation on the left and right-hand fingertips, in a three-class EEG based hybrid brain-computer interface. It was hypothesized that fusing the input signals leads to higher classification rates than classifying tERP and SSSEP individually. Approach. Fourteen subjects participated in the studies, consisting of a screening paradigm to determine person dependent resonance-like frequencies and a subsequent online paradigm. The whole setup of the BCI system was based on open interfaces, following suggestions for a common implementation platform. During the online experiment, subjects were instructed to focus their attention on the stimulated fingertips as indicated by a visual cue. The recorded data were classified during runtime using a multi-class shrinkage LDA classifier and the outputs were fused together applying a posterior probability based fusion. Data were further analyzed offline, involving a combined classification of SSSEP and tERP features as a second fusion principle. The final results were tested for statistical significance applying a repeated measures ANOVA. Main results. A significant classification increase was achieved when fusing the results with a combined classification compared to performing an individual classification. Furthermore, the SSSEP classifier was significantly better in detecting a non-control state, whereas the tERP classifier was significantly better in detecting control states. Subjects who had a higher relative band power increase during the screening session also achieved significantly higher classification results than subjects with lower relative band power increase. Significance. It could be shown that utilizing SSSEP and tERP for hBCIs increases the classification accuracy and also that tERP and SSSEP are not classifying control- and non
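    One common form of posterior-probability fusion combines the class posteriors of the two classifiers by elementwise product and renormalization (which assumes conditionally independent evidence). The sketch below illustrates that rule on a hypothetical three-class output; the abstract does not specify the exact combination rule used, so treat this as one plausible instance:

```python
def fuse_posteriors(p_sssep, p_terp):
    """Posterior-probability fusion sketch: multiply the two classifiers'
    class posteriors elementwise and renormalize to sum to one."""
    fused = [a * b for a, b in zip(p_sssep, p_terp)]
    total = sum(fused)
    return [f / total for f in fused]

# hypothetical posteriors for three classes: left hand, right hand, non-control
fused = fuse_posteriors([0.6, 0.3, 0.1], [0.5, 0.2, 0.3])
decision = fused.index(max(fused))  # class 0 (left hand)
```

Note how the fused posterior for the winning class (about 0.77) is sharper than either individual classifier's 0.6 or 0.5, which is the mechanism behind the accuracy gain the paper reports for fused classification.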

  20. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems

    Science.gov (United States)

    Li, Ying

    2016-09-01

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.