Sample records for computer processing avionics

  1. Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study

    Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Schuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.


    A NASA multi-Center study team was assembled from LaRC, MSFC, KSC, JSC, and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) and to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault tolerance, mass, power, and redundancy-management impacts. A further goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.
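The trade between simplex and voting variants mentioned above can be illustrated with a standard first-order reliability model (a hedged sketch, not the study's actual tool; the failure rate and mission time below are invented): a triple-modular-redundancy (TMR) voter works whenever at least two of three channels work.

```python
import math

def simplex_reliability(lam: float, t: float) -> float:
    """Reliability of a single channel with constant failure rate lam over time t."""
    return math.exp(-lam * t)

def tmr_reliability(lam: float, t: float) -> float:
    """TMR with a perfect voter: the system works if at least 2 of 3 channels work."""
    r = simplex_reliability(lam, t)
    return 3 * r**2 - 2 * r**3

# Illustrative numbers: failure rate 1e-4 per hour over a 100-hour mission
r_simplex = simplex_reliability(1e-4, 100)
r_tmr = tmr_reliability(1e-4, 100)
print(f"simplex: {r_simplex:.6f}, TMR: {r_tmr:.6f}")
```

Note the standard caveat this model makes visible: TMR beats simplex only while per-channel reliability stays above 0.5, which is one reason mission duration enters such studies.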

  2. Advanced information processing system for advanced launch system: Avionics architecture synthesis

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.


    The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real-time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS), being developed jointly by NASA and the Department of Defense to launch heavy payloads into low Earth orbit at one tenth the cost (per pound of payload) of current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS-for-ALS architecture synthesis process, starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture, is described.

  3. Advanced Information Processing System (AIPS)-based fault tolerant avionics architecture for launch vehicles

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.


    An avionics architecture for the Advanced Launch System (ALS) that uses validated hardware and software building blocks developed under the Advanced Information Processing System program is presented. The AIPS-for-ALS architecture defined here is preliminary, and its reliability requirements can be met by the AIPS hardware and software building blocks built with the state-of-the-art technology available in the 1992-93 time frame. The level of detail in the architecture definition reflects the level of detail available in the ALS requirements. As the avionics requirements are refined, the architecture can also be refined and defined in greater detail with the help of analysis and simulation tools. A useful methodology is demonstrated for investigating the impact of the avionics suite on the recurring cost of the ALS. It is shown that allowing the vehicle to launch with selected detected failures can potentially reduce recurring launch costs. A comparative analysis shows that validated fault-tolerant avionics built from Class B parts can result in lower life-cycle cost than simplex avionics built from Class S parts or other redundant architectures.


    Sergey Viktorovich Kuznetsov


    Modern aircraft are equipped with complicated systems and complexes of avionics. The technical operation of an aircraft and its avionics is observed as a process with changing operation states. Mathematical models of avionics processes and systems of technical operation are represented as Markov chains and Markov and semi-Markov processes. The purpose is to develop graph-models of avionics technical operation processes, describing their work in flight as well as during maintenance on the ground in various systems of technical operation. Graph-models of processes and systems of on-board complexes and functional avionics systems in flight are proposed, based on state tables. The models are specified for the various technical operation systems: the system with control of the reliability level, the system with parameter control, and the system with resource control. The events which cause the avionics complexes and functional systems to change their technical state are failures and faults detected by built-in test equipment. The avionics system of technical operation with reliability-level control is applicable to objects with a constant or slowly varying failure rate. The avionics system of technical operation with resource control is mainly used for objects whose failure rate increases over time. The avionics system of technical operation with parameter control is used for objects with an increasing failure rate and with generalized parameters that can provide forecasting and assign the borders of before-failure technical states. The proposed formal graphical approach to designing models of avionics complexes and systems is the basis for constructing models of complex systems and facilities, both for a single aircraft and for an airline aircraft fleet, or even for the entire fleet of some specific aircraft type. The ultimate graph-models for avionics in various systems of technical operation permit the beginning of
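The Markov-chain view described above can be sketched as follows (the states, transition probabilities, and the pure-Python solver are illustrative assumptions, not taken from the paper): a discrete chain over technical-operation states, whose stationary distribution gives the long-run share of time in each state.

```python
STATES = ["operating", "maintenance", "failed"]
# P[i][j] = probability of moving from state i to state j in one step (rows sum to 1)
P = [
    [0.95, 0.04, 0.01],   # operating -> ...
    [0.80, 0.15, 0.05],   # maintenance -> ...
    [0.50, 0.50, 0.00],   # failed -> repaired into operating or maintenance
]

def stationary(P, iters=1000):
    """Power iteration: repeatedly apply pi <- pi P until it settles."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
print({s: round(p, 4) for s, p in zip(STATES, pi)})
```

With these invented rates the chain spends most of its time in the operating state; changing the repair row is how the different technical-operation policies would show up in such a model.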

  5. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam


    Improvements and advances in the development of computer architecture now provide innovative technology for recasting traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
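The critical-path-based allocation step can be sketched roughly as follows (the task graph, execution times, and function names are invented for illustration, not the paper's): rank each task by its longest execution-time path to a sink, then schedule the highest-ranked tasks first, since they sit on the critical path.

```python
# Task DAG: task -> (execution time, list of successor tasks); names are invented
TASKS = {
    "integrate": (4, ["estimate"]),
    "sense":     (2, ["estimate"]),
    "estimate":  (3, ["guidance"]),
    "guidance":  (5, []),
}

def critical_path_rank(tasks):
    """Rank of a task = its time plus the longest-path time of any successor."""
    rank = {}
    def visit(t):
        if t in rank:
            return rank[t]
        time, succs = tasks[t]
        rank[t] = time + max((visit(s) for s in succs), default=0)
        return rank[t]
    for t in tasks:
        visit(t)
    return rank

rank = critical_path_rank(TASKS)
# List-scheduling priority: highest rank first
order = sorted(TASKS, key=lambda t: -rank[t])
print(rank, order)
```

Here "integrate" gets the highest rank (its whole chain to "guidance" is longest), so a list scheduler would place it on a processing element first.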

  6. A method of distributed avionics data processing based on SVM classifier

    Guo, Hangyu; Wang, Jinyan; Kang, Minyang; Xu, Guojing


    In a system-combat environment, in order to solve the problem of managing and analyzing the massive heterogeneous data of a multi-platform avionics system, this paper proposes a management solution called an avionics "resource cloud" based on big data technology, and designs a decision-aid classifier based on the SVM algorithm. We designed an experiment with an STK simulation; the results show that this method has high accuracy and broad application prospects.
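The paper does not detail its SVM classifier, so a minimal stand-in can be sketched as a linear SVM trained by stochastic sub-gradient descent (Pegasos-style); the feature values, labels, and function names below are invented for illustration.

```python
import random

def train_linear_svm(data, labels, lam=0.1, epochs=500, seed=0):
    """Train a linear SVM (hinge loss + L2 regularizer) by Pegasos-style steps.

    data: list of feature vectors; labels: +1/-1 per sample.
    """
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        order = list(range(n))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = labels[i] * (sum(w[j] * data[i][j] for j in range(d)) + b)
            for j in range(d):
                # sub-gradient of lam/2*|w|^2 + hinge(margin)
                g = lam * w[j] - (labels[i] * data[i][j] if margin < 1 else 0.0)
                w[j] -= eta * g
            if margin < 1:                 # unregularized bias update
                b += eta * labels[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy two-class data, e.g. [bandwidth utilization, error rate] -> alert or not
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])
```

In practice one would use a library SVM with kernel support rather than this hand-rolled linear version; the sketch only shows the shape of the decision-aid step.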

  7. Space Tug avionics definition study. Volume 2: Avionics functional requirements


    Flight and ground operational phases of the tug/shuttle system are analyzed to determine the general avionics support functions that are needed during each of the mission phases and sub-phases. Each of these general support functions is then expanded into specific avionics system requirements, which are then allocated to the appropriate avionics subsystems. This process is then repeated at the next lower level of detail where these subsystem requirements are allocated to each of the major components that comprise a subsystem.

  8. Avionics System Architecture for the NASA Orion Vehicle

    Baggerman, Clint; McCabe, Mary; Verma, Dinesh


    It has been 30 years since the National Aeronautics and Space Administration (NASA) last developed a crewed spacecraft capable of launch, on-orbit operations, and landing. During that time, aerospace avionics technologies have greatly advanced in capability, and these technologies have enabled integrated avionics architectures for aerospace applications. The inception of NASA's Orion Crew Exploration Vehicle (CEV) spacecraft offers the opportunity to leverage the latest integrated avionics technologies into a crewed space vehicle architecture. The outstanding question is to what extent to implement these advances in avionics while still meeting the unique crewed-spaceflight requirements for safety, reliability, and maintainability. Historically, aircraft and spacecraft have very similar avionics requirements. Both aircraft and spacecraft must have high reliability. They also must have as much computing power as possible and provide low latency between user control and effector response while minimizing weight, volume, and power. However, there are several key differences between aircraft and spacecraft avionics. Typically, the overall spacecraft operational time is much shorter than aircraft operational time, but the typical mission time (and hence the time between preventive maintenance) is longer for a spacecraft than for an aircraft. Also, the radiation environment is typically more severe for spacecraft than for aircraft. A "loss of mission" scenario (i.e., the mission is not a success, but there are no casualties) arguably has a greater impact on a multi-million-dollar spaceflight mission than on a typical commercial flight. Such differences need to be weighed when determining whether an aircraft-like integrated modular avionics (IMA) system is suitable for a crewed spacecraft.
This paper will explore the preliminary design process of the Orion vehicle avionics system by first identifying the Orion driving requirements and the difference between Orion requirements and those of

  9. Avionics and Software Project

    National Aeronautics and Space Administration — The goal of the AES Avionics and Software (A&S) project is to develop a reference avionics and software architecture that is based on standards and that can be...

  10. Flight Avionics Hardware Roadmap

    Hodson, Robert; McCabe, Mary; Paulick, Paul; Ruffner, Tim; Some, Rafi; Chen, Yuan; Vitalpur, Sharada; Hughes, Mark; Ling, Kuok; Redifer, Matt


    As part of the NASA Avionics Steering Committee's stated goal to advance the avionics discipline ahead of program and project needs, the committee initiated a multi-Center technology roadmapping activity to create a comprehensive avionics roadmap. The roadmap is intended to strategically guide avionics technology development to effectively meet future NASA mission needs. The scope of the roadmap aligns with the twelve avionics elements defined in the ASC charter, but is subdivided into the following five areas: Foundational Technology (including devices and components), Command and Data Handling, Spaceflight Instrumentation, Communication and Tracking, and Human Interfaces.

  11. Avionics systems integration technology

    Stech, George; Williams, James R.


    A very dramatic and continuing explosion in digital electronics technology has been taking place in the last decade. The prudent and timely application of this technology will provide Army aviation the capability to prevail against a numerically superior enemy threat. The Army and NASA have exploited this technology explosion in the development and application of avionics systems integration technology for new and future aviation systems. A few selected Army avionics integration technology base efforts are discussed. Also discussed is the Avionics Integration Research Laboratory (AIRLAB) that NASA has established at Langley for research into the integration and validation of avionics systems, and evaluation of advanced technology in a total systems context.

  12. Computers and data processing

    Deitel, Harvey M


    Computers and Data Processing provides information pertinent to advances in the computer field. This book covers a variety of topics, including computer hardware, computer programs or software, and computer applications systems. Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  13. Data Acquistion Controllers and Computers that can Endure, Operate and Survive Cryogenic Temperatures, Phase I

    National Aeronautics and Space Administration — Current and future NASA exploration flight missions require Avionics systems, Computers, Controllers and Data processing units that are capable of enduring extreme...

  14. Avionic Data Bus Integration Technology


    The report addresses the hardware-software interaction between a digital data bus and an avionic system, including Very Large Scale Integration (VLSI) ICs and multiversion programming. In 1984, the Sperry Corporation developed a fault-tolerant system which employed multiversion programming, voting, and monitoring for error detection. N-version programming: the independent coding of a number, N, of redundant computer programs that
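The N-version programming idea named in this record can be sketched as follows (the versions and the seeded bug are illustrative stand-ins, not from the report): N independently coded routines compute the same result, and a voter accepts the majority value.

```python
from collections import Counter

def version_a(x):
    return x * x

def version_b(x):
    return x ** 2

def version_c(x):
    return x * x + (1 if x == 3 else 0)  # deliberately seeded bug at x == 3

def vote(results):
    """Majority voter: accept the value most versions agree on."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: versions disagree")
    return value

versions = [version_a, version_b, version_c]
print(vote([v(3) for v in versions]))  # the two correct versions outvote the bug
```

The premise, as in the Sperry system described above, is that independently developed versions are unlikely to share the same fault, so voting masks any single faulty version.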

  15. Avionics Architecture for Exploration

    National Aeronautics and Space Administration — The goal of the AES Avionics Architectures for Exploration (AAE) project is to develop a reference architecture that is based on standards and that can be scaled and...

  16. Power plant process computer

    Koch, R.


    The concept of instrumentation and control in nuclear power plants incorporates the use of process computers for tasks which are on-line with respect to real-time requirements but not closed-loop with respect to closed-loop control. The general scope of tasks is: alarm annunciation on CRTs, data logging, data recording for post-trip reviews and plant behaviour analysis, nuclear data computation, and graphic displays. Process computers are used additionally for dedicated tasks such as the aeroball measuring system and the turbine stress evaluator. Further applications are personal dose supervision and access monitoring. (orig.)

  17. Design and Realization of Avionics Integration Simulation System Based on RTX

    Wang Liang


    As aircraft avionics systems become more and more complicated, it is very hard to test and verify real avionics systems. A design and realization method for an avionics integration simulation system based on RTX was put forward to resolve this problem. In this simulation system, computer software and hardware resources are utilized fully. All kinds of aircraft avionics hardware-in-the-loop (HIL) simulations can be implemented on this platform. The simulation method provides the technical foundation for testing and verifying real avionics systems. The research has recorded valuable data using the newly developed method. The experimental results prove that the avionics integration simulation system performed well in a helicopter avionics HIL simulation experiment, and the simulation results provided the necessary basis for verifying the helicopter's real avionics system.

  18. Synchronous Modeling of Modular Avionics Architectures using the SIGNAL Language

    Gamatié, Abdoulaye; Gautier, Thierry


    This document presents a study on the modeling of architecture components for avionics applications. We take the avionics standard ARINC 653 specifications as a basis and use the synchronous language SIGNAL to describe the modeling. A library of APEX object models (partition, process, communication and synchronization services, etc.) has been implemented. This should make it possible to describe distributed real-time applications using POLYCHRONY, so as to access formal tools and techniques for ar...

  19. Information processing, computation, and cognition.

    Piccinini, Gualtiero; Scarantino, Andrea


    Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both - although others disagree vehemently. Yet different cognitive scientists use 'computation' and 'information processing' to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism, connectionism, and computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates' empirical aspects.

  20. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.


    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies aimed at enabling NASA's ability to explore beyond low Earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation-effects assessment, and radiation-environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as silicon-germanium (SiGe)) to enhance a device's tolerance to radiation events and low-temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project emphasis shifts its focus to developing low-power, high-efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability enables avionic architectures to use FPGA-based, radiation-tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space.
Specifically, the AAPS tasks for

  1. Demonstration Advanced Avionics System (DAAS) function description

    Bailey, A. J.; Bailey, D. G.; Gaabo, R. J.; Lahn, T. G.; Larson, J. C.; Peterson, E. M.; Schuck, J. W.; Rodgers, D. L.; Wroblewski, K. A.


    The Demonstration Advanced Avionics System, DAAS, is an integrated avionics system utilizing microprocessor technologies, data busing, and shared displays for demonstrating the potential of these technologies in improving the safety and utility of general aviation operations in the late 1980's and beyond. Major hardware elements of the DAAS include a functionally distributed microcomputer complex, an integrated data control center, an electronic horizontal situation indicator, and a radio adaptor unit. All processing and display resources are interconnected by an IEEE-488 bus in order to enhance the overall system effectiveness, reliability, modularity, and maintainability. A detailed description of the DAAS architecture, the DAAS hardware, and the DAAS functions is presented. The system is designed for installation and flight test in a NASA Cessna 402-B aircraft.

  2. Computer Processing of Esperanto Text.

    Sherwood, Bruce


    Basic aspects of computer processing of Esperanto are considered in relation to orthography and computer representation, phonetics, morphology, one-syllable and multisyllable words, lexicon, semantics, and syntax. There are 28 phonemes in Esperanto, each represented in orthography by a single letter. The PLATO system handles diacritics by using a…
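The abstract is cut off before describing the PLATO system's diacritic handling, so as a hedged illustration here is the widely used ASCII "x-convention", which spells each Esperanto diacritic letter as the base letter plus "x" (this is not necessarily PLATO's scheme):

```python
# Mapping from x-convention digraphs to the Unicode diacritic letters
X_TO_UNICODE = {
    "cx": "ĉ", "gx": "ĝ", "hx": "ĥ", "jx": "ĵ", "sx": "ŝ", "ux": "ŭ",
    "Cx": "Ĉ", "Gx": "Ĝ", "Hx": "Ĥ", "Jx": "Ĵ", "Sx": "Ŝ", "Ux": "Ŭ",
}

def from_x_convention(text: str) -> str:
    """Replace each digraph like 'cx' with the corresponding diacritic letter."""
    for digraph, letter in X_TO_UNICODE.items():
        text = text.replace(digraph, letter)
    return text

print(from_x_convention("ehxosxangxo cxiujxauxde"))  # -> "eĥoŝanĝo ĉiuĵaŭde"
```

The convention works because "x" is not a letter of the Esperanto alphabet, so the digraphs are unambiguous in ordinary text.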

  3. Reference Specifications for SAVOIR Avionics Elements

    Hult, Torbjorn; Lindskog, Martin; Roques, Remi; Planche, Luc; Brunjes, Bernhard; Dellandrea, Brice; Terraillon, Jean-Loup


    Space industry and agencies have long recognized the need to raise the level of standardisation in spacecraft avionics systems in order to increase efficiency and reduce development cost and schedule. This also includes the aspect of increasing competition in global space business, which is a challenge that European space companies are facing at all stages of involvement in the international markets. A number of initiatives towards this vision are driven both by industry and by ESA's R&D programmes. However, an intensified coordination of these activities is now required in order to achieve the necessary synergy and to ensure they converge towards the shared vision. It has been proposed to federate these initiatives under the common Space Avionics Open Interface Architecture (SAVOIR) initiative. Within this initiative, the approach based on reference architectures and building blocks plays a key role. Following the principles outlined above, the overall goal of SAVOIR is to establish a streamlined onboard architecture in order to standardize the development of avionics systems for space programmes. This reflects the need to increase efficiency and cost-effectiveness in the development process, as well as to take into account the trend towards more functionality implemented by the onboard building blocks, i.e. HW and SW components, and more complexity in the overall space mission objectives.

  4. Guide to Computational Geometry Processing

    Bærentzen, Jakob Andreas; Gravesen, Jens; Anton, François

    ...be processed before it is useful. This Guide to Computational Geometry Processing reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics. This is balanced with an introduction to the theoretical and mathematical underpinnings of each technique, enabling the reader not only to implement a given method, but also to understand the ideas behind it, its limitations and its advantages. Topics and features: presents an overview of the underlying mathematical theory, covering vector spaces, metric spaces, affine spaces, differential geometry, and finite difference methods for derivatives and differential equations; reviews geometry representations, including polygonal meshes, splines, and subdivision surfaces; examines techniques for computing curvature from polygonal meshes; describes...

  5. An assessment of General Aviation utilization of advanced avionics technology

    Quinby, G. F.


    Needs of the general aviation industry for services and facilities which might be supplied by NASA were examined. In the data collection phase, twenty-one individuals from nine manufacturing companies in general aviation were interviewed against a carefully prepared meeting format. General aviation avionics manufacturers were credited with a high degree of technology transfer from forcing industries such as television, automotive, and computers, and a demonstrated ability to apply advanced technology such as large-scale integration and microprocessors to avionics functions in an innovative and cost-effective manner. The industry's traditional resistance to any unnecessary regimentation or standardization was confirmed. Industry's self-sufficiency in applying advanced technology to avionics product development was amply demonstrated. NASA research capability could be supportive in areas of the basic mechanics of turbulence in weather and alternative means for its sensing.

  6. Computational Intelligence in Image Processing

    Siarry, Patrick


    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  7. Computer simulation of nonequilibrium processes

    Wallace, D.C.


    The underlying concepts of nonequilibrium statistical mechanics and of irreversible thermodynamics will be described. The question at hand is then: how are these concepts to be realized in computer simulations of many-particle systems? The answer will be given for dissipative deformation processes in solids, on three hierarchical levels: heterogeneous plastic flow, dislocation dynamics, and molecular dynamics. Application to the shock process will be discussed.

  8. Computer Modelling of Dynamic Processes

    B. Rybakin


    Results of numerical modeling of dynamic problems are summarized in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas-cleaning equipment, and modeling of biogas formation processes.

  9. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    Orr, James K.; Peltier, Daryl


    This slide presentation reviews the avionics software system on board the Space Shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of Space Shuttle flights versus time, PASS's development history, and other data that point to the reliability of the system's development. The reliability of the system is also compared to predicted reliability.

  10. Software for Avionics.


    The general functions and utilities, provided in particular through UNIX, are integrated from various points of view: by their access through the... Are They Really A Problem? Proceedings, 2nd International Conference on Software Engineering. Long Beach, CA: IEEE Computer Society.

  11. Computation as an Unbounded Process

    van Leeuwen, J.; Wiedermann, Jiří


    Vol. 429, 20 April (2012), pp. 202-212. ISSN 0304-3975. R&D Projects: GA ČR GAP202/10/1333. Institutional research plan: CEZ:AV0Z10300504. Keywords: arithmetical hierarchy; hypercomputation; mind change complexity; nondeterminism; relativistic computation; unbounded computation. Subject RIV: IN - Informatics, Computer Science. Impact factor: 0.489, year: 2012

  12. Development of Avionics Installation Interface Standards. Revision.


    Contributors include Rockwell Collins and the Bendix Air Transport Avionics Division. ... Air flow is specified in recognition of the situation in which 220 kilograms per hour per kilowatt of air flow is available in a civil configuration.

  13. Image processing with personal computer

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko


    A method of automating judgement work on photographs in radiation nondestructive inspection, using a simple commercial image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were developed on a trial basis, and, using various photographs on which judgement had already been performed, the accuracy and the problematic points were tested. Depending on the state of the objects photographed and the conditions of inspection, judgement accuracy from 100% to 45% was obtained. The criteria for judgement conformed to the collection of reference photographs made by the Japan Cast Steel Association. In non-destructive inspection by radiography, the number and size of the defect images in photographs are judged visually, the results are collated with the standard, and the quality is decided. Recently, the technology of image processing with personal computers has advanced; by utilizing this technology, automation of photograph judgement was attempted in order to improve accuracy, increase inspection efficiency, and save labor. (K.I.)
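The defect-extraction-and-binarization step described above can be sketched in a few lines (the grayscale patch and threshold are invented, not from the paper): pixels darker than a threshold are marked as defect candidates and counted.

```python
def binarize(image, threshold):
    """Mark pixels darker than threshold as defect candidates (1), else 0."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

# Invented 5x5 grayscale patch, 0 = black ... 255 = white; the dark spot is a "defect"
patch = [
    [200, 198, 201, 197, 199],
    [202,  90,  85, 200, 198],
    [199,  88, 201, 199, 202],
    [201, 200, 199, 198, 200],
    [198, 202, 200, 201, 199],
]

mask = binarize(patch, 128)
defect_area = sum(sum(row) for row in mask)
print(defect_area)  # number of candidate defect pixels
```

A real judgement system would then compare the size and count of such regions against the reference-photograph standard; choosing the threshold per film exposure is one source of the accuracy spread reported above.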

  14. Industry perspectives on Plug-&-Play Spacecraft Avionics

    Franck, R.; Graven, P.; Liptak, L.

    This paper describes the methodologies and findings from an industry survey of awareness and utility of Spacecraft Plug-&-Play Avionics (SPA). The survey was conducted via interviews, in-person and teleconference, with spacecraft prime contractors and suppliers. It focuses primarily on AFRL's SPA technology development activities but also explores the broader applicability and utility of Plug-&-Play (PnP) architectures for spacecraft. Interviews include large and small suppliers as well as large and small spacecraft prime contractors. Through these “product marketing” interviews, awareness and attitudes can be assessed, key technical and market barriers can be identified, and opportunities for improvement can be uncovered. Although this effort focuses on a high-level assessment, similar processes can be used to develop business cases and economic models which may be necessary to support investment decisions.

  15. Distributed Processing in Cloud Computing

    Mavridis, Ilias; Karatza, Eleni


    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016) Timisoara, Romania. February 8-11, 2016. Cloud computing offers a wide range of resources and services through the Internet that can be used for various purposes. The rapid growth of cloud computing has exempted many companies and institutions from the burden of maintaining expensive hardware and software infrastructure. With characteristics like high scalability, availability ...

  16. Deterministic bound for avionics switched networks according to networking features using network calculus

    Feng HE


    State-of-the-art avionics systems adopt switched networks for airborne communications. A major concern in the design of such networks is the ability to guarantee end-to-end delays. Analytic methods such as network calculus and the trajectory approach have been developed to compute worst-case delays from the detailed configurations of flows and networks in an avionics context. What has been lacking is a method for rapid performance estimation based on typical switched-networking features, such as networking scale, bandwidth utilization and average flow rate. The goal of this paper is to establish a deterministic upper-bound analysis method that uses these networking features instead of the complete network configuration. Two deterministic upper bounds are proposed from a network-calculus perspective: one gives a basic estimation, and the other shows the benefit of a grouping strategy. In addition, a mathematical expression for grouping ability is established based on the concept of network connecting degree, which gives the minimum possible grouping benefit. For a fully connected network with 4 switches and 12 end systems, the grouping ability derived from the grouping strategy is 15–20%, which coincides with the statistical data (18–22%) for the actual grouping advantage. Compared with the complete network-calculus analysis of individual flows, the effectiveness of the two deterministic upper bounds is no less than 38%, even with remarkably varied packet lengths. Finally, the paper illustrates the design process for an industrial Avionics Full DupleX switched Ethernet (AFDX) networking case using the two deterministic upper bounds and shows that better control of network connecting, when designing a switched network, can improve the worst-case delays dramatically. Keywords: Deterministic bound, Grouping ability, Network calculus, Networking features, Switched networks
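    The textbook network-calculus result underlying such analyses can be sketched in a few lines: a flow constrained by a token-bucket arrival curve and served by a rate-latency server has worst-case delay T + b/R. This is the classic bound, not the paper's feature-based bounds; the numeric values below are illustrative assumptions.

```python
def delay_bound(burst, rate, latency, service_rate):
    """Worst-case delay for a token-bucket flow (burst b, sustained rate r)
    crossing a rate-latency server beta(t) = R * max(0, t - T).
    Classic network-calculus bound: D <= T + b / R, valid when r <= R."""
    assert rate <= service_rate, "flow rate must not exceed the service rate"
    return latency + burst / service_rate

# e.g. an 8000-bit burst on a 100 Mb/s AFDX-like link with 40 us switch latency
d = delay_bound(burst=8000.0, rate=1e6, latency=40e-6, service_rate=100e6)
```

Here `d` evaluates to 120 microseconds: the 40 us fixed latency plus 80 us to drain the burst at line rate.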

  17. Tensors in image processing and computer vision

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong


    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  18. Processing computed tomography images by using personal computer

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.


    Processing of CT images was attempted using a popular personal computer. The image-processing program was written with a C compiler. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing operations were performed, such as displaying the image on the monitor, calculating CT values, and drawing profile curves. The results showed that a popular personal computer had the ability to process CT images. The 8-inch flexible diskette still seemed to be a useful medium for transferring image data. (author)
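    The CT-value calculation and profile-curve operations mentioned can be sketched as follows. The linear rescale slope and intercept are illustrative assumptions (scanner calibration differs per device), not the TCT-60A's actual parameters.

```python
def ct_value(raw, slope=1.0, intercept=-1024.0):
    """Convert a raw pixel value to a CT number via a linear rescale
    (Hounsfield-style; slope/intercept here are assumed values)."""
    return slope * raw + intercept

def profile_curve(image, row):
    """Return the CT values along one row of a 2D image,
    given as a list of lists of raw pixel values."""
    return [ct_value(raw) for raw in image[row]]

img = [[1024, 1100, 1500],
       [1024, 2048, 1024]]
profile = profile_curve(img, 1)  # CT numbers along the second row
```

With the assumed rescale, a raw value of 1024 maps to CT number 0 (water) and 2048 maps to 1024.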

  19. Non-functional Avionics Requirements

    Paulitsch, Michael; Ruess, Harald; Sorea, Maria

    Embedded systems in aerospace become more and more integrated in order to reduce weight, volume/size, and power of hardware for more fuel efficiency. Such integration tendencies change architectural approaches of system architectures, which subsequently change non-functional requirements for platforms. This paper provides some insight into state-of-the-practice of non-functional requirements for developing ultra-critical embedded systems in the aerospace industry, including recent changes and trends. In particular, formal requirement capture and formal analysis of non-functional requirements of avionic systems - including hard real-time, fault-tolerance, reliability, and performance - are exemplified by means of recent developments in SAL and HiLiTE.

  20. Electronics/avionics integrity - Definition, measurement and improvement

    Kolarik, W.; Rasty, J.; Chen, M.; Kim, Y.

    The authors report on the results obtained from an extensive, three-fold research project: (1) to search the open quality and reliability literature for documented information relative to electronics/avionics integrity; (2) to interpret and evaluate the literature as to significant concepts, strategies, and tools appropriate for use in electronics/avionics product and process integrity efforts; and (3) to develop a list of critical findings and recommendations that will lead to significant progress in product integrity definition, measurement, modeling, and improvements. The research consisted of examining a broad range of trade journals, scientific journals, and technical reports, as well as face-to-face discussions with reliability professionals. Ten significant recommendations have been supported by the research work.

  1. Avionics Simulation, Development and Software Engineering


    During this reporting period, all technical responsibilities were accomplished as planned. A close working relationship was maintained with personnel of the MSFC Avionics Department Software Group (ED14), the MSFC EXPRESS Project Office (FD31), and the Huntsville Boeing Company. Accomplishments included: performing special tasks; supporting Software Review Board (SRB), Avionics Test Bed (ATB), and EXPRESS Software Control Panel (ESCP) activities; participating in technical meetings; and coordinating issues between the Boeing Company and the MSFC Project Office.

  2. Introduction to computer image processing

    Moik, J. G.


    Theoretical backgrounds and digital techniques for a class of image-processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  3. Validating Avionics Conceptual Architectures with Executable Specifications

    Nils Fischer


    Current avionics system specifications, developed after conceptual design, have a high degree of uncertainty. Since specifications are not sufficiently validated in the early development process and no executable specification exists at aircraft level, system designers cannot evaluate the impact of their design decisions at aircraft or aircraft-application level. At the end of the development process of complex systems, e.g. aircraft, an average of about 65 per cent of all specifications have to be changed because they are incorrect, incomplete or too vaguely described. In this paper, a model-based design methodology together with a virtual test environment is described that makes complex high-level system specifications executable and testable during the very early stages of system design. An aircraft communication system and its system context are developed to demonstrate the proposed early validation methodology. Executable specifications for early conceptual system architectures enable system designers to couple functions, architecture elements, resources and performance parameters, often called non-functional parameters. An integrated executable specification at Early Conceptual Architecture Level is developed and used to determine the impact of different system architecture decisions on system behavior and overall performance.

  4. Spectra processing with computer graphics

    Kruse, H.


    A program for processing gamma-ray spectra in rock analysis is described. The peak search is performed by applying a cross-correlation function. The experimental data are approximated by an analytical function represented by the sum of a polynomial and a multiple-peak function. The latter is a Gaussian joined on the low-energy side with an exponential. A modified Gauss-Newton algorithm is applied to fit the data to the function. The processing of values derived from a lunar sample demonstrates the effect of different choices of polynomial order for approximating the background over various fitting intervals. Observations on applications of interactive graphics are presented. 3 figures, 1 table
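    A cross-correlation peak search of the kind described can be sketched as follows: the spectrum is correlated with a zero-area kernel, so smooth background cancels while a narrow peak produces a strong positive response. The kernel weights and the toy spectrum are illustrative assumptions, not the program's actual parameters.

```python
def cross_correlate(spectrum, kernel):
    """Correlate a spectrum with a zero-area kernel; a large positive
    response marks a peak-like shape over a smooth background."""
    half = len(kernel) // 2
    out = [0.0] * len(spectrum)
    for i in range(half, len(spectrum) - half):
        out[i] = sum(kernel[k] * spectrum[i + k - half]
                     for k in range(len(kernel)))
    return out

# zero-area "Mexican hat"-style kernel (illustrative weights, sum = 0)
kernel = [-1.0, 0.0, 2.0, 0.0, -1.0]
spectrum = [10, 10, 10, 10, 30, 10, 10, 10, 10]  # one sharp peak at channel 4
response = cross_correlate(spectrum, kernel)
peak_channel = max(range(len(response)), key=lambda i: response[i])
```

On this toy spectrum the response is zero over the flat background and maximal at channel 4, the peak location.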

  5. Integrated communication, navigation, and identification avionics: Impact analysis. Executive summary

    Veatch, M. H.; McManus, J. C.


    This paper summarizes the approach and findings of research into reliability, supportability, and survivability prediction techniques for fault-tolerant avionics systems. Since no technique existed to analyze the fault tolerance of reconfigurable systems, a new method was developed and implemented in the Mission Reliability Model (MIREM). The supportability analysis was completed using the Simulation of Operational Availability/Readiness (SOAR) model. Both the Computation of Vulnerable Area and Repair Time (COVART) model and FASTGEN, a survivability model, proved valuable for the survivability research. Sample results are presented, and several recommendations are given for each of the three areas investigated under this study: reliability, supportability, and survivability.

  6. Micro-Avionics Multi-Purpose Platform (MicroAMPP)

    National Aeronautics and Space Administration — The Micro-Avionics Multi-Purpose Platform (MicroAMPP) is a common avionics architecture supporting microsatellites, launch vehicles, and upper-stage carrier...

  7. Controlling Laboratory Processes From A Personal Computer

    Will, H.; Mackin, M. A.


    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.
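    The core idea (user-defined command names bound to device-driving routines) can be sketched with a simple dispatch table. The command names and driver routines below are hypothetical stand-ins, not the program's actual vocabulary.

```python
# Hypothetical device-driving routines, standing in for the
# user-written drivers the generated FORTRAN subroutines would call.
def open_valve():
    return "valve open"

def read_temperature():
    return 23.5

# Command table: plain-language command -> driver routine.
COMMANDS = {
    "OPEN VALVE": open_valve,
    "READ TEMPERATURE": read_temperature,
}

def execute(line):
    """Dispatch one natural-language command to its driver routine."""
    handler = COMMANDS.get(line.strip().upper())
    if handler is None:
        raise ValueError(f"unknown command: {line!r}")
    return handler()
```

An operator with no programming skill types `open valve`; the lookup normalizes the text and invokes the bound routine.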

  8. Reconfigurable fault tolerant avionics system

    Ibrahim, M. M.; Asami, K.; Cho, Mengu

    This paper presents the design of a reconfigurable avionics system based on a modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA), to be used in future generations of nanosatellites. A major concern in satellite systems, and especially nanosatellites, is to build robust systems with low-power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. As Single Event Upsets (SEUs) do not have the same severity and intensity in all orbital locations, with the maximum at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected all the time in its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles roaming in space is provided during the majority of the orbit through software fault tolerance. Checkpointing and rollback, together with control-flow assertions, are used for that level of protection. In the minority of the orbit where severe SEUs are expected, a reconfiguration of the system FPGA is initiated in which the processor systems are triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. This technique of reconfiguring the system according to the level of threat expected from SEU-induced faults helps reduce the average dynamic power consumption of the system to one third of its maximum. It can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the Xilinx Virtex-5 (XC5VLX50) FPGA on bulk silicon with 324 I/O. Simulations of orbital SEU rates were carried out using the SPENVIS web-based software package.
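    The TMR protection described rests on majority voting across the triplicated processors; the principle can be sketched as follows (a behavioral model of the voter, not the paper's FPGA implementation):

```python
def tmr_vote(a, b, c):
    """Majority vote across three redundant computation results.
    A single corrupted copy (e.g. an SEU-flipped output) is outvoted;
    only a double fault with no agreeing pair is unrecoverable."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: more than one faulty copy")
```

A single upset in any one of the three copies leaves the voted output unchanged, which is exactly the property that lets the system drop to unprotected (one-third power) operation outside the SAA.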

  9. HH-65A Dolphin digital integrated avionics

    Huntoon, R. B.


    Communication, navigation, flight control, and search-sensor management are the avionics functions that constitute every Search and Rescue (SAR) operation. Routine cockpit duties monopolize crew attention during SAR operations and thus impair crew effectiveness. The United States Coast Guard challenged industry to build an avionics system that automates routine tasks and frees the crew to focus on mission tasks. The HH-65A SAR avionics systems of communication, navigation, search sensors, and flight control had previously existed independently. On the SRR helicopter, the flight management system (FMS) was introduced; it coordinates or integrates these functions. The pilot interacts with the FMS rather than the individual subsystems, using simple, straightforward procedures to address distinct mission tasks, and the flight management system, in turn, orchestrates the integrated system response.

  10. Projection display technology for avionics applications

    Kalmanash, Michael H.; Tompkins, Richard D.


    Avionics displays often require custom image sources tailored to demanding program needs. Flat panel devices are attractive for cockpit installations, however recent history has shown that it is not possible to sustain a business manufacturing custom flat panels in small volume specialty runs. As the number of suppliers willing to undertake this effort shrinks, avionics programs unable to utilize commercial-off-the-shelf (COTS) flat panels are placed in serious jeopardy. Rear projection technology offers a new paradigm, enabling compact systems to be tailored to specific platform needs while using a complement of COTS components. Projection displays enable improved performance, lower cost and shorter development cycles based on inter-program commonality and the wide use of commercial components. This paper reviews the promise and challenges of projection technology and provides an overview of Kaiser Electronics' efforts in developing advanced avionics displays using this approach.

  11. Computer Aided Continuous Time Stochastic Process Modelling

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay


    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  12. Algorithms for image processing and computer vision

    Parker, J R


    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  13. Developing A Generic Optical Avionic Network

    Zhang, Jiang; An, Yi; Berger, Michael Stübert


    We propose a generic optical network design for future avionic systems in order to reduce the weight and power consumption of current networks on board. A three-layered network structure over a ring optical network topology is suggested, as it can provide full reconfiguration flexibility...... and support a wide range of avionic applications. Segregation can be made on different hierarchies according to system criticality and security requirements. The structure of each layer is discussed in detail. Two network configurations are presented, focusing on how to support different network services...... by such a network. Finally, three redundancy scenarios are discussed and compared....

  14. Computer vision camera with embedded FPGA processing

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel


    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
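    The sliding-window convolution at the heart of such an edge detector can be sketched in software. For brevity this uses a plain 3x3 Laplacian kernel rather than the full multi-scale Laplacian-of-Gaussian pipeline; in the FPGA, each window position maps to one multiply-accumulate tree.

```python
# Discrete 3x3 Laplacian kernel: responds to intensity discontinuities.
LAPLACIAN = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

def convolve3x3(img, kernel):
    """Valid-mode 3x3 convolution over a 2D list-of-lists image.
    Output is (h-2) x (w-2): border pixels have no full window."""
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1][x - 1] = sum(
                kernel[j][i] * img[y + j - 1][x + i - 1]
                for j in range(3) for i in range(3))
    return out
```

A flat region yields zero response, while a step edge yields a nonzero value whose sign change (zero crossing) locates the edge.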

  15. Practical Secure Computation with Pre-Processing

    Zakarias, Rasmus Winther

    Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned...... space for pre-processing material than computing the non-linear parts online (depends on the quality of the circuit, of course). Surprisingly, even for our optimized AES-circuit this is not the case. We further improve the design of the pre-processing material and end up with only 10 megabytes of pre...... a protocol for small-field arithmetic to do fast large-integer multiplications. This is achieved by devising pre-processing material that allows the Toom-Cook multiplication algorithm to run between the parties with linear communication complexity. With this result computation on the CPU by the parties...

  16. Application engineering for process computer systems

    Mueller, K.


    The variety of tasks for process computers in nuclear power stations necessitates the centralization of all production stages, from planning to the delivery of the finished process computer system (PRA) to the user. This so-called 'application engineering' comprises all of the activities connected with the application of the PRA: a) establishment of the PRA concept, b) project counselling, c) handling of offers, d) handling of orders, e) internal handling of orders, f) technical counselling, g) establishment of parameters, h) monitoring of deadlines, i) training of customers, j) compiling an operation manual. (orig./AK)

  17. Digital image processing mathematical and computational methods

    Blackledge, J M


    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  18. Function Follows Performance in Evolutionary Computational Processing

    Pasold, Anke; Foged, Isak Worre


    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...


    V. N. Adrov


    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. This independence comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations of several days to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations, speed increases in local networks and, as a result, a drop in the price of supercomputers and computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
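    The independent-blocks-in, independent-blocks-out structure described maps directly onto a worker pool. This sketch uses a thread pool for brevity; a real system would distribute the blocks across processes or cluster nodes, and `process_block` is a hypothetical stand-in for one photogrammetric task (a tie-point measurement, a DTM cell, an orthophoto tile, ...).

```python
from concurrent.futures import ThreadPoolExecutor

def process_block(block):
    """Stand-in for one independent photogrammetric task on one
    image block; here it just averages the block's pixel values."""
    return sum(block) / len(block)

def process_blocks(blocks, workers=4):
    """Hand the independent blocks to a pool of workers and collect
    results in order; the same pattern scales to cluster nodes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_block, blocks))

results = process_blocks([[1, 2, 3], [4, 4], [10]])
```

Because no block depends on another, the wall-clock time shrinks roughly with the number of workers, bounded in practice by the LAN and storage throughput the abstract identifies as the bottleneck.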

  20. The effect of requirements prioritization on avionics system conceptual design

    Lorentz, John

    This dissertation will provide a detailed approach and analysis of a new collaborative requirements prioritization methodology that has been used successfully on four Coast Guard avionics acquisition and development programs valued at $400M+. A statistical representation of participant study results will be discussed and analyzed in detail. Many technically compliant projects fail to deliver levels of performance and capability that the customer desires. Some of these systems completely meet "threshold" levels of performance; however, the distribution of resources in the process devoted to the development and management of the requirements does not always represent the voice of the customer. This is especially true for technically complex projects such as modern avionics systems. A simplified facilitated process for prioritization of system requirements will be described. The collaborative prioritization process, and resulting artifacts, aids the systems engineer during early conceptual design. All requirements are not the same in terms of customer priority. While there is a tendency to have many thresholds inside of a system design, there is usually a subset of requirements and system performance that is of the utmost importance to the design. These critical capabilities and critical levels of performance typically represent the reason the system is being built. The systems engineer needs processes to identify these critical capabilities, the associated desired levels of performance, and the risks associated with the specific requirements that define the critical capability. The facilitated prioritization exercise is designed to collaboratively draw out these critical capabilities and levels of performance so they can be emphasized in system design. Developing the purpose, scheduling and process for prioritization events are key elements of systems engineering and modern project management. The benefits of early collaborative prioritization flow throughout the

  1. Computer processing of dynamic scintigraphic studies

    Ullmann, V.


    The methods of computer processing of dynamic scintigraphic studies which were developed, studied or implemented by the authors within research task no. 30-02-03 in nuclear medicine during the five-year plan 1981 to 1985 are discussed. These were mainly methods for the computer processing of radionuclide angiography, phase radioventriculography, regional lung ventilation, dynamic sequential scintigraphy of the kidneys and radionuclide uroflowmetry. The problems of the automatic definition of fields of interest and the methodology for determining absolute heart-chamber volumes in radionuclide cardiology are discussed, and the design and uses of the multipurpose dynamic phantom of heart activity for radionuclide angiocardiography and ventriculography, developed within the said research task, are described. All methods are documented with many figures showing typical clinical (normal and pathological) and phantom measurements. (V.U.)

  2. Toward a computational theory of conscious processing.

    Dehaene, Stanislas; Charles, Lucie; King, Jean-Rémi; Marti, Sébastien


    The study of the mechanisms of conscious processing has become a productive area of cognitive neuroscience. Here we review some of the recent behavioral and neuroscience data, with the specific goal of constraining present and future theories of the computations underlying conscious processing. Experimental findings imply that most of the brain's computations can be performed in a non-conscious mode, but that conscious perception is characterized by an amplification, global propagation and integration of brain signals. A comparison of these data with major theoretical proposals suggests that firstly, conscious access must be carefully distinguished from selective attention; secondly, conscious perception may be likened to a non-linear decision that 'ignites' a network of distributed areas; thirdly, information which is selected for conscious perception gains access to additional computations, including temporary maintenance, global sharing, and flexible routing; and finally, measures of the complexity, long-distance correlation and integration of brain signals provide reliable indices of conscious processing, clinically relevant to patients recovering from coma. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Enabling Wireless Avionics Intra-Communications

    Torres, Omar; Nguyen, Truong; Mackenzie, Anne


    The Electromagnetics and Sensors Branch of NASA Langley Research Center (LaRC) is investigating the potential of an all-wireless aircraft as part of the ECON (Efficient Reconfigurable Cockpit Design and Fleet Operations using Software Intensive, Networked and Wireless Enabled Architecture) seedling proposal, which is funded by the Convergent Aeronautics Solutions (CAS) project, Transformative Aeronautics Concepts (TAC) program, and NASA Aeronautics Research Institute (NARI). The project consists of a brief effort carried out by a small team in the Electromagnetic Environment Effects (E3) laboratory with the intention of exposing some of the challenges faced by a wireless communication system inside the reflective cavity of an aircraft and to explore potential solutions that take advantage of that environment for constructive gain. The research effort was named EWAIC for "Enabling Wireless Aircraft Intra-communications." The E3 laboratory is a research facility that includes three electromagnetic reverberation chambers and equipment that allow testing and generation of test data for the investigation of wireless systems in reflective environments. Using these chambers, the EWAIC team developed a set of tests and setups that allow the intentional variation of intensity of a multipath field to reproduce the environment of the various bays and cabins of large transport aircraft. This setup, in essence, simulates an aircraft environment that allows the investigation and testing of wireless communication protocols that can effectively be used as a tool to mitigate some of the risks inherent to an aircraft wireless system for critical functions. In addition, the EWAIC team initiated the development of a computational modeling tool to illustrate the propagation of EM waves inside the reflective cabins and bays of aircraft and to obtain quantifiable information regarding the degradation of signals in aircraft subassemblies. 
The nose landing gear of a UAV CAD model was used

  4. New Technologies for Space Avionics, 1993

    Aibel, David W.; Harris, David R.; Bartlett, Dave; Black, Steve; Campagna, Dave; Fernald, Nancy; Garbos, Ray


    The report reviews a 1993 effort that investigated issues associated with the development of requirements, with the practice of concurrent engineering and with rapid prototyping, in the development of a next-generation Reaction Jet Drive Controller. This report details lessons learned, the current status of the prototype, and suggestions for future work. The report concludes with a discussion of the vision of future avionics architectures based on the principles associated with open architectures and integrated vehicle health management.

  5. Power, Avionics and Software Communication Network Architecture

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.


    This document describes the communication architecture for the Power, Avionics and Software (PAS) 2.0 subsystem for the Advanced Extravehicular Mobile Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS project at Glenn Research Center (GRC).

  6. Picture processing computer to control movement by computer provided vision

    Graefe, V


    The author introduces a multiprocessor system which has been specially developed to enable mechanical devices to interpret pictures presented in real time. The separate processors within this system operate simultaneously and independently. By means of freely moveable windows the processors can concentrate on those parts of the picture that are relevant to the control problem. If a machine is to make a correct response to its observation of a picture of moving objects, it must be able to follow the picture sequence, step by step, in real time. As the usual serially operating processors are too slow for such a task, the author describes three models of a special picture processing computer which it has been necessary to develop. 3 references.

  7. Application of industry-standard guidelines for the validation of avionics software

    Hayhurst, Kelly J.; Shagnea, Anita M.


    The application of industry standards to the development of avionics software is discussed, focusing on verification and validation activities. It is pointed out that the procedures that guide the avionics software development and testing process are under increased scrutiny. The DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, are used by the FAA for certifying avionics software. To investigate the effectiveness of the DO-178A guidelines for improving the quality of avionics software, guidance and control software (GCS) is being developed according to the DO-178A development method. It is noted that, due to the extent of the data collection and configuration management procedures, any phase in the life cycle of a GCS implementation can be reconstructed. Hence, a fundamental development and testing platform has been established that is suitable for investigating the adequacy of various software development processes. In particular, the overall effectiveness and efficiency of the development method recommended by the DO-178A guidelines are being closely examined.

  8. Integrated Modular Avionics for Spacecraft: Earth Observation Use Case Demonstrator

    Deredempt, Marie-Helene; Rossignol, Alain; Hyounet, Philippe


    Integrated Modular Avionics (IMA) for Space, a European Space Agency initiative, aimed to make the time and space partitioning concepts, and particularly the ARINC 653 standard [1][2], applicable to the space domain. Expected benefits of such an approach are development flexibility, the capability to provide differential V&V for functionalities of different criticality levels, and the ability to integrate late or in-orbit deliveries. This development flexibility could improve software subcontracting, industrial organization and software reuse. The time and space partitioning technique facilitates the integration of software functions as black boxes, and the integration of decentralized functions, such as a star tracker, into the On-Board Computer to save mass and power by limiting electronics resources. In the aeronautical domain, the Integrated Modular Avionics architecture is based on a network of LRUs (Line Replaceable Units) interconnected by AFDX (Avionics Full DupleX). The time and space partitioning concept is applied within each LRU and provides independent partitions that intercommunicate using ARINC 653 communication ports. Using the End System (an LRU component), intercommunication between LRUs is managed in the same way as intercommunication between partitions within an LRU. In such an architecture, an application developed using only communication ports can be integrated in one LRU or another without impacting the global architecture. In the space domain, a redundant On-Board Computer controls (ground monitoring, TM) and manages (ground command, TC) the platform in terms of power, solar array deployment, attitude, orbit, thermal control, maintenance, and failure detection, isolation and recovery. In addition, payload units and platform units such as the RIU, PCDU and AOCS units (star tracker, reaction wheels) are considered in this architecture. Interfaces are mainly realized through MIL-STD-1553B buses and SpaceWire, which can be considered the main constraint for IMA implementation in the space domain. During the first phase of IMA SP project, ARINC653
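The port abstraction central to this record — an application addresses only named communication ports, so it can be rehosted between partitions or LRUs without change — can be sketched in a few lines. The Python below is a hypothetical toy model (all class, port and partition names are invented for illustration; this is not the ARINC 653 APEX API):

```python
from collections import deque

class QueuingPort:
    """Toy model of an ARINC 653-style queuing port (illustrative only)."""
    def __init__(self, name, max_messages=8):
        self.name = name
        self.queue = deque(maxlen=max_messages)

    def send(self, message):
        self.queue.append(message)

    def receive(self):
        # Returns the oldest pending message, or None if the queue is empty
        return self.queue.popleft() if self.queue else None

class Partition:
    """A partition knows only its ports, not where its peers are hosted."""
    def __init__(self, name):
        self.name = name
        self.ports = {}

    def bind(self, port):
        self.ports[port.name] = port

# Two partitions share a port; whether they run in the same LRU or in
# different LRUs linked by an End System is invisible to the application.
link = QueuingPort("STAR_TRACKER_DATA")
producer, consumer = Partition("AOCS"), Partition("OBC")
producer.bind(link)
consumer.bind(link)

producer.ports["STAR_TRACKER_DATA"].send({"quaternion": (0.0, 0.0, 0.0, 1.0)})
print(consumer.ports["STAR_TRACKER_DATA"].receive())
```

Because the producer and consumer address only the port name, relocating either partition to a different LRU would leave the application code untouched, which is the development flexibility the record claims for IMA.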

  9. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    Kuehl, C. Stephen


    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) by the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has traditionally been associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring of and technical involvement in video standards groups provides the knowledge base necessary for avionics systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums such as the ITU-R (formerly CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog-based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal

  10. Feature extraction & image processing for computer vision

    Nixon, Mark


    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  11. Process computers automate CERN power supply installations

    Ullrich, H.; Martin, A.


    Higher standards of performance and reliability in the power plants of large particle accelerators necessitate increasing use of automation. CERN (the European Nuclear Research Centre) in Geneva started to employ process computers for plant automation at an early stage in its history. The great complexity and extent of the plants for high-energy physics first led to the setting-up of decentralized automatic systems, which are now being increasingly combined into one interconnected automation system. One of these automatic systems controls and monitors the extensive power supply installations for the main ring magnets in the experimental zones.

  12. Computer performance optimization systems, applications, processes

    Osterhage, Wolfgang W


    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  13. Integrating ISHM with Flight Avionics Architectures for Cyber-Physical Space Systems, Phase I

    National Aeronautics and Space Administration — Autonomous, avionic and robotic systems are used in a variety of applications including launch vehicles, robotic precursor platforms, etc. Most avionic innovations...

  14. Computer Simulation of Developmental Processes and ...

    Rationale: Recent progress in systems toxicology and synthetic biology has paved the way to new thinking about in vitro/in silico modeling of developmental processes and toxicities, both for embryological and reproductive impacts. Novel in vitro platforms such as 3D organotypic culture models, engineered microscale tissues and complex microphysiological systems (MPS), together with computational models and computer simulation of tissue dynamics, lend themselves to integrated testing strategies for predictive toxicology. As these emergent methodologies continue to evolve, they must be integrally tied to maternal/fetal physiology and toxicity of the developing individual across early lifestage transitions, from fertilization to birth, through puberty and beyond. Scope: This symposium will focus on how novel technology platforms can help, now and in the future, with in vitro/in silico modeling of complex biological systems for developmental and reproductive toxicity issues, and with translating systems models into integrative testing strategies. The symposium is based on three main organizing principles: (1) that novel in vitro platforms with human cells configured in nascent tissue architectures within native microphysiological environments yield mechanistic understanding of the developmental and reproductive impacts of drug/chemical exposures; (2) that novel in silico platforms with high-throughput screening (HTS) data, biologically-inspired computational models of

  15. Computational Process Modeling for Additive Manufacturing (OSU)

    Bagg, Stacey; Zhang, Wei


    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  16. Computational simulation of the blood separation process.

    De Gruttola, Sandro; Boomsma, Kevin; Poulikakos, Dimos; Ventikos, Yiannis


    The aim of this work is to construct a computational fluid dynamics model capable of simulating the quasi-transient process of apheresis. To this end, a Lagrangian-Eulerian model has been developed which tracks the blood particles within a delineated two-dimensional flow domain. In the Eulerian step, the fluid flow conservation equations within the separator are solved. Taking the calculated values of the flow field, a Lagrangian step then computes the displacement of the blood particles, so that the local blood density within the separator at the given time step is known. Subsequently, the flow field in the separator is recalculated. This process continues until quasi-steady behavior is reached. The simulations show good agreement with experimental results. They show a complete separation of plasma and red blood cells, as well as nearly complete separation of red blood cells and platelets. The white blood cells build clusters in the low-concentration cell bed.
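The alternating Eulerian/Lagrangian loop this record describes can be caricatured in one dimension. The Python sketch below is purely illustrative (invented function and parameter names; a fixed drift-plus-jitter rule stands in for the real flow solver): particles settle toward a "cell bed" at one end of the domain, the density field is rebinned every step, and the loop stops once the field changes by less than a tolerance, i.e., quasi-steady behavior.

```python
import random

def simulate_separation(n_particles=500, n_bins=20, settle=0.05,
                        jitter=0.01, tol=1e-4, max_steps=2000):
    """Toy Lagrangian-Eulerian loop: drift particles toward the cell bed at
    x = 1, rebin the local density, and repeat until quasi-steady."""
    positions = [random.random() for _ in range(n_particles)]
    prev = [0.0] * n_bins
    for step in range(max_steps):
        # Lagrangian step: each particle drifts toward the bed, plus noise
        positions = [min(1.0, max(0.0, x + settle * (1.0 - x)
                                  + random.uniform(-jitter, jitter)))
                     for x in positions]
        # Eulerian step: recompute the density field on the fixed grid
        density = [0.0] * n_bins
        for x in positions:
            density[min(n_bins - 1, int(x * n_bins))] += 1.0 / n_particles
        # Quasi-steady check: stop when the field essentially stops changing
        if max(abs(a - b) for a, b in zip(density, prev)) < tol:
            return density, step
        prev = density
    return prev, max_steps
```

In the full model the particle displacement comes from solving the flow conservation equations rather than a fixed settling rate, but the structure — particle push, density update, convergence test — is the same.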

  17. Data processing device for computed tomography system

    Nakayama, N.; Ito, Y.; Iwata, K.; Nishihara, E.; Shibayama, S.


    A data processing device applied to a computed tomography system which examines a living body using X-ray radiation is disclosed. The X-rays which have penetrated the living body are converted into electric signals in a detecting section. The electric signals are acquired and converted from analog to digital form in a data acquisition section, and then supplied to a matrix data-generating section included in the data processing device. This matrix data-generating section generates matrix data which correspond to a plurality of projection data. These matrix data are supplied to a partial sum-producing section, where the partial sums respectively corresponding to groups of the matrix data are calculated and then supplied to an accumulation section. In the accumulation section, the final value corresponding to the total sum of the matrix data is calculated, whereby the calculation for image reconstruction is performed
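The dataflow in this record — matrix data split into groups, per-group partial sums, then a final accumulation — is a plain staged reduction, sketched below in Python (invented names, illustrative only):

```python
def reconstruct_sum(matrix_data, group_size=4):
    """Split the matrix data into groups (the partial sum-producing section),
    then accumulate the partial sums into the final total (the accumulation
    section) used in the image reconstruction calculation."""
    partials = [sum(matrix_data[i:i + group_size])
                for i in range(0, len(matrix_data), group_size)]
    return sum(partials), partials

total, partials = reconstruct_sum(list(range(8)))
print(total, partials)  # 28 [6, 22]
```

Staging the reduction this way lets the partial sums for different groups of projection data be computed independently, in parallel hardware sections, before the single accumulation step.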

  18. Advanced Avionics Architecture and Technology Review. Executive Summary and Volume 1, Avionics Technology. Volume 2. Avionics Systems Engineering


    JIAWG core avionics are described in the section below. The JIAWG architecture standard (187-01) describes an open system architecture which provides...0.35 microns (µm). Present technology is in the 0.8 µm to 0.5 µm range for aggressive producers. Since the area of a die is approximately proportional ...analog (D/A) converters. The A/D converter is a device or circuit that examines an analog voltage or current and converts it to a proportional binary

  19. Computer Applications in the Design Process.

    Winchip, Susan

    Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…

  20. Micro-Scale Avionics Thermal Management

    Moran, Matthew E.


    Trends in the thermal management of avionics and commercial ground-based microelectronics are converging, and facing the same dilemma: a shortfall in technology to meet near-term maximum junction temperature and package power projections. Micro-scale devices hold the key to significant advances in thermal management, particularly micro-refrigerators/coolers that can drive cooling temperatures below ambient. A microelectromechanical system (MEMS) Stirling cooler is currently under development at the NASA Glenn Research Center to meet this challenge with predicted efficiencies that are an order of magnitude better than current and future thermoelectric coolers.

  1. An Overview of Computer-Based Natural Language Processing.

    Gevarter, William B.

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

  2. Identification of Learning Processes by Means of Computer Graphics.

    Sorensen, Birgitte Holm


    Describes a development project for the use of computer graphics and video in connection with an inservice training course for primary education teachers in Denmark. Topics addressed include research approaches to computers; computer graphics in learning processes; activities relating to computer graphics; the role of the teacher; and student…

  3. Computer Simulation of Electron Positron Annihilation Processes

    Chen, Y.


    With the launching of the Next Linear Collider coming closer and closer, there is a pressing need for physicists to develop a fully-integrated computer simulation of the e{sup +}e{sup -} annihilation process at a center-of-mass energy of 1 TeV. A simulation program acts as the template for future experiments. Either new physics will be discovered, or current theoretical uncertainties will shrink due to more accurate higher-order radiative correction calculations. The existence of an efficient and accurate simulation will help us understand the new data and validate (or veto) some of the theoretical models developed to explain new physics. It should handle interfaces between different sectors of physics well, e.g., interactions happening at parton levels well above the QCD scale, which are described by perturbative QCD, and interactions happening at a much lower energy scale, which combine partons into hadrons. It should also achieve competitive speed in real time as the complexity of the simulation increases. This thesis contributes some tools that will be useful for the development of such simulation programs. We begin our study with the development of a new Monte Carlo algorithm intended to perform efficiently in selecting weight-1 events when multiple parameter dimensions are strongly correlated. The algorithm first seeks to model the peaks of the distribution by features, adapting these features to the function using the EM algorithm. The representation of the distribution provided by these features is then improved using the VEGAS algorithm for the Monte Carlo integration. The two strategies mesh neatly into an effective multi-channel adaptive representation. We then present a new algorithm for the simulation of parton shower processes in high energy QCD. We want to find an algorithm which is free of negative weights, produces its output as a set of exclusive events, and whose total rate exactly matches the full Feynman amplitude calculation.
Our strategy is to create
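The "weight-1 event" goal mentioned in this abstract is usually met by unweighting: draw candidates from a proposal, accept each with probability w(x)/w_max, and the accepted sample is distributed as w with every event carrying unit weight. The sketch below is a generic single-channel illustration (invented names), not the multi-channel EM/VEGAS algorithm of the thesis:

```python
import random

def unweight(sample, weight, w_max, n_events, rng=random.random):
    """Accept a candidate x ~ sample() with probability weight(x) / w_max,
    so every accepted event carries weight 1."""
    events = []
    while len(events) < n_events:
        x = sample()
        if rng() * w_max <= weight(x):
            events.append(x)
    return events

# Example: unweight the linear density w(x) = 2x on [0, 1] (so w_max = 2)
events = unweight(sample=random.random, weight=lambda x: 2.0 * x,
                  w_max=2.0, n_events=1000)
print(sum(events) / len(events))  # sample mean, close to the true mean 2/3
```

The efficiency of this step is the average of w/w_max, which collapses when the peaks of w are poorly matched by the proposal — exactly the problem an adaptive representation of the distribution is meant to fix.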

  4. Marrying Content and Process in Computer Science Education

    Zendler, A.; Spannagel, C.; Klaudt, D.


    Constructivist approaches to computer science education emphasize that thinking skills and processes, as well as knowledge, are involved in active knowledge construction. K-12 computer science curricula must not be based on fashions and trends, but on contents and processes that are observable in various domains of computer science, that can be…

  5. Customer Avionics Interface Development and Analysis (CAIDA): Software Developer for Avionics Systems

    Mitchell, Sherry L.


    The Customer Avionics Interface Development and Analysis (CAIDA) supports the testing of the Launch Control System (LCS), NASA's command and control system for the Space Launch System (SLS), Orion Multi-Purpose Crew Vehicle (MPCV), and ground support equipment. The objective of the semester-long internship was to support day-to-day operations of CAIDA and help prepare for verification and validation of CAIDA software.

  6. Coordination processes in computer supported collaborative writing

    Kanselaar, G.; Erkens, Gijsbert; Jaspers, Jos; Prangsma, M.E.


    In the COSAR-project a computer-supported collaborative learning environment enables students to collaborate in writing an argumentative essay. The TC3 groupware environment (TC3: Text Composer, Computer supported and Collaborative) offers access to relevant information sources, a private notepad, a

  7. A quantum computer based on recombination processes in microelectronic devices

    Theodoropoulos, K; Ntalaperas, D; Petras, I; Konofaos, N


    In this paper a quantum computer based on the recombination processes occurring in semiconductor devices is presented. A 'data element' and a 'computational element' are derived based on Shockley-Read-Hall statistics, and they can later be used to manifest a simple and known quantum computing process. Such a paradigm is shown by the application of the proposed computer to a well-known physical system involving traps in semiconductor devices

  8. Acquisition of Computers That Process Corporate Information

    Gimble, Thomas


    The Secretary of Defense announced the Corporate Information Management initiative on November 16, 1990, to establish a DoD-wide concept for managing computer, communications, and information management functions...

  9. Study guide to accompany computers data and processing

    Deitel, Harvey M


    Study Guide to Accompany Computers and Data Processing provides information pertinent to the fundamental aspects of computers and computer technology. This book presents the key benefits of using computers. Organized into five parts encompassing 19 chapters, this book begins with an overview of the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. This text then introduces computer hardware and describes the processor. Other chapters describe how microprocessors are made and describe the physical operation of computers. This book discusses as w

  10. Avionics System Development for a Rotary Wing Unmanned Aerial Vehicle

    Greer, Daniel


    .... A helicopter with sufficient lift capability was selected and a lightweight aluminum structure was built to serve as both an avionics platform for the necessary equipment and also as a landing skid...

  11. A Model-based Avionic Prognostic Reasoner (MAPR)

    National Aeronautics and Space Administration — The Model-based Avionic Prognostic Reasoner (MAPR) presented in this paper is an innovative solution for non-intrusively monitoring the state of health (SoH) and...

  12. Avionics for Hibernation and Recovery on Planetary Surfaces

    National Aeronautics and Space Administration — Landers and rovers endure on the Martian equator but experience avionics failures in the cryogenic temperatures of lunar nights and Martian winters. The greatest...

  13. Integrated Power, Avionics, and Software (IPAS) Flexible Systems Integration

    National Aeronautics and Space Administration — The Integrated Power, Avionics, and Software (IPAS) facility is a flexible, multi-mission hardware and software design environment. This project will develop a...

  14. An integrated autonomous rendezvous and docking system architecture using Centaur modern avionics

    Nelson, Kurt


    The avionics system for the Centaur upper stage is in the process of being modernized with the current state-of-the-art in strapdown inertial guidance equipment. This equipment includes an integrated flight control processor with a ring laser gyro based inertial guidance system. This inertial navigation unit (INU) uses two MIL-STD-1750A processors and communicates over the MIL-STD-1553B data bus. Commands are translated into load activation through a Remote Control Unit (RCU) which incorporates the use of solid state relays. Also, a programmable data acquisition system replaces separate multiplexer and signal conditioning units. This modern avionics suite is currently being enhanced through independent research and development programs to provide autonomous rendezvous and docking capability using advanced cruise missile image processing technology and integrated GPS navigational aids. A system concept was developed to combine these technologies in order to achieve a fully autonomous rendezvous, docking, and autoland capability. The current system architecture and the evolution of this architecture using advanced modular avionics concepts being pursued for the National Launch System are discussed.

  15. Computer Simulation of a Hardwood Processing Plant

    D. Earl Kline; Philip A. Araman


    The overall purpose of this paper is to introduce computer simulation as a decision support tool that can be used to provide managers with timely information. A simulation/animation modeling procedure is demonstrated for wood products manufacturing systems. Simulation modeling techniques are used to assist in identifying and solving problems. Animation is used for...

  16. Integration of process computer systems to Cofrentes NPP

    Saettone Justo, A.; Pindado Andres, R.; Buedo Jimenez, J.L.; Jimenez Fernandez-Sesma, A.; Delgado Muelas, J.A.


    The existence of three different process computer systems in Cofrentes NPP and the ageing of two of them have led to the need for their integration into a single real time computer system, known as Integrated ERIS-Computer System (SIEC), which covers the functionality of the three systems: Process Computer (PC), Emergency Response Information System (ERIS) and Nuclear Calculation Computer (OCN). The paper describes the integration project developed, which has essentially consisted in the integration of PC, ERIS and OCN databases into a single database, the migration of programs from the old process computer into the new SIEC hardware-software platform and the installation of a communications programme to transmit all necessary data for OCN programs from the SIEC computer, which in the new configuration is responsible for managing the databases of the whole system. (Author)

  17. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design

    Menges, Achim


    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies. (paper)

  18. Computer simulation of gear tooth manufacturing processes

    Mavriplis, Dimitri; Huston, Ronald L.


    The use of computer graphics to simulate gear tooth manufacturing procedures is discussed. An analytical basis for the simulation is established for spur gears. The simulation itself, however, is developed not only for spur gears, but for straight bevel gears as well. The applications of the developed procedure extend from the development of finite element models of heretofore intractable geometrical forms, to exploring the fabrication of nonstandard tooth forms.

  19. Proceedings Papers of the AFSC (Air Force Systems Command) Avionics Standardization Conference (2nd) Held at Dayton, Ohio on 30 November-2 December 1982. Volume 3. Embedded Computer Resources Governing Documents.


    1. Validation of computer resource requirements, including software, risk analyses, planning, preliminary design, security where applicable (DoD...Technology Base Program for software basic research, exploratory development, advanced development, and technology demonstrations addressing critical... changes including ... Management Procedures (O/S CMP). The basic ... configuration management approach contained in the CRISP will be

  20. Coupling Computer-Aided Process Simulation and ...

    A methodology is described for developing a gate-to-gate life cycle inventory (LCI) of a chemical manufacturing process to support the application of life cycle assessment in the design and regulation of sustainable chemicals. The inventories were derived by first applying process design and simulation to develop a process flow diagram describing the energy and basic material flows of the system. Additional techniques developed by the U.S. Environmental Protection Agency for estimating uncontrolled emissions from chemical processing equipment were then applied to obtain a detailed emission profile for the process. Finally, land use for the process was estimated using a simple sizing model. The methodology was applied to a case study of acetic acid production based on the Cativa(TM) process. The results reveal improvements in the qualitative LCI for acetic acid production compared to commonly used databases and top-down methodologies. The modeling techniques improve the quantitative LCI results for inputs and uncontrolled emissions. With provisions for applying appropriate emission controls, the proposed method can provide an estimate of the LCI that can be used for subsequent life cycle assessments. As part of its mission, the Agency is tasked with overseeing the use of chemicals in commerce. This can include consideration of a chemical's potential impact on health and safety, resource conservation, clean air and climate change, clean water, and sustainable

  1. Computer simulation and automation of data processing

    Tikhonov, A.N.


    The principles of computerized simulation and automation of data processing are presented. The automated processing system is constructed according to the module-hierarchical principle. The main operating modes of the system are preprocessing, installation analysis, interpretation, accuracy analysis, and parameter control. The quasi-real experiment, which makes it possible to plan the real experiment, is defined. It is pointed out that realizing the quasi-real experiment by means of a computerized model of the installation, with subsequent automated processing, makes it possible to examine the quantitative behavior of the system as a whole and to design installation parameters optimally for maximum resolution

  2. Computer simulation of dynamic processes on accelerators

    Kol'ga, V.V.


    The problems of computer-based numerical investigation of the motion of accelerated particles in accelerators and storage rings, the effect of different accelerator systems on that motion, and the determination of optimal characteristics of accelerated charged particle beams are considered. Various simulation representations describing accelerated particle dynamics are discussed, such as the enlarged-particle method, the representation in which a field of continuously distributed space charge is substituted for a great number of discrete particles, and the method based on the determination of averaged beam characteristics. The procedure of numerical studies is described for the basic problems, viz. calculation of closed orbits, establishment of stability regions, investigation of resonance propagation, determination of the phase stability region, evaluation of the space charge effect, and the problem of beam extraction. It is shown that most of these problems reduce to solution of the Cauchy problem using a computer. The ballistic method, which is applied to the solution of the boundary value problem of beam extraction, is considered. It is shown that the introduction into the equation under study of additional terms with a small positive regularization parameter is the general idea behind the methods for regularization of ill-posed problems

  3. Dictionary of computer vision and image processing

    Fisher, R. B


    ... been identified for inclusion since the current edition was published. Revised to include an additional 1000 new terms to reflect current updates, which includes a significantly increased focus on image processing terms, as well as machine learning terms...

  4. Spaceborne computer executive routine functional design specification. Volume 1: Functional design of a flight computer executive program for the reusable shuttle

    Curran, R. T.


    A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.

  5. Soft computing in big data processing

    Park, Seung-Jong; Lee, Jee-Hyong


    Big data is an essential key to building a smart world, encompassing the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data ranges over data mining, data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our life better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided to processing system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in the effective management of dynamic and large-scale data, and the efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering the big size of data, the variety of data and frequent chan...

  6. Integration of distributed computing into the drug discovery process.

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas


    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction to the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is embedding the grid seamlessly into the discovery process. User-friendly access to powerful algorithms without restrictions such as a limited number of licenses has to be the goal of grid computing in drug discovery.

  7. Anode baking process optimization through computer modelling

    Wilburn, D.; Lancaster, D.; Crowell, B. [Noranda Aluminum, New Madrid, MO (United States)]; Ouellet, R.; Jiao, Q. [Noranda Technology Centre, Pointe Claire, PQ (Canada)]


    Carbon anodes used in aluminum electrolysis are produced in vertical or horizontal anode baking furnaces. The carbon blocks are formed from petroleum coke aggregate mixed with a coal tar pitch binder. Before a carbon block can be used in a reduction cell it must be heated to pyrolysis; to ensure that baking is complete, the anode must be heated to about 1100 degrees C. The baking process represents a large portion of the aluminum production cost and also has a significant effect on anode quality. To improve the understanding of the anode baking process and to improve its efficiency, a menu-driven heat, mass and fluid flow simulation tool called NABSIM (Noranda Anode Baking SIMulation) was developed and calibrated in 1993 and 1994. It has been used since then to evaluate and screen firing practices, and to determine which firing procedure will produce the optimum heat-up rate, final temperature, and soak time without allowing unburned tar to escape. NABSIM is used as a furnace simulation tool on a daily basis by Noranda plant process engineers, and much effort is expended in improving its utility by creating new versions and adding new modules. In the immediate future, efforts will be directed towards optimizing the anode baking process to improve temperature uniformity from pit to pit. 3 refs., 4 figs.

  8. Teaching Process Writing with Computers. Revised Edition.

    Boone, Randy, Ed.

    Focusing on the use of word processing software programs as instructional tools for students learning writing composition, this collection includes 14 research articles and position papers, 16 reports on lesson ideas and projects, 5 articles on keyboarding, and 18 product reviews. These materials relate to teaching writing through the process…

  9. Computer Aided Teaching of Digital Signal Processing.

    Castro, Ian P.


    Describes a microcomputer-based software package developed at the University of Surrey for teaching digital signal processing to undergraduate science and engineering students. Menu-driven software capabilities are explained, including demonstration of qualitative concepts and experimentation with quantitative data, and examples are given of…

  10. Use of Field Programmable Gate Array Technology in Future Space Avionics

    Ferguson, Roscoe C.; Tate, Robert


    Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system, followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of the included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provides the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development, and both can evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised by component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.

  11. Launch Site Computer Simulation and its Application to Processes

    Sham, Michael D.


    This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.
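The capacity-versus-throughput question that such a processing model answers interactively can be sketched as a minimal discrete-event simulation. Everything below (the function name, the 45-day turnaround, the bay counts) is a hypothetical illustration, not the Lockheed STS Processing Model itself:

```python
import heapq

def simulate_launch_rate(n_bays, turnaround_days=45.0, horizon_days=365.0):
    """Toy discrete-event sketch: each processing bay turns an orbiter
    around in `turnaround_days`; count launches within the horizon.
    Hypothetical parameters -- not the Lockheed STS Processing Model."""
    events = [(turnaround_days, bay) for bay in range(n_bays)]  # first completions
    heapq.heapify(events)
    launches = 0
    while events:
        t, bay = heapq.heappop(events)
        if t > horizon_days:
            break
        launches += 1                                       # orbiter launches
        heapq.heappush(events, (t + turnaround_days, bay))  # bay begins next flow
    return launches
```

With one bay and a 45-day flow this yields eight launches per year, in line with the rate quoted above; adding or deleting facilities is just a change to `n_bays`, which is the kind of what-if such a model makes cheap.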

  12. Control of Neutralization Process Using Soft Computing

    G. Balasubramanian


    Full Text Available A novel model-based nonlinear control strategy is proposed using an experimental pH neutralization process. The control strategy involves a nonlinear neural network (NN) model in the context of internal model control (IMC). When integrated into the internal model control scheme, the resulting controller is shown to have favorable practical implications as well as superior performance. The designed model-based online IMC controller was implemented on a laboratory-scale pH process in real time using a dSPACE 1104 interface card. The responses of pH and acid flow rate show good tracking for both set-point and load changes over the entire nonlinear region.
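The IMC structure described above can be sketched with a linear stand-in for the neural-network model: the plant/model mismatch is fed back and subtracted from the set point before the inverse model computes the control move. The first-order plant and all gains below are illustrative assumptions, not the paper's pH process:

```python
def imc_step(y_plant, y_model, setpoint, inv_gain):
    """One internal-model-control update: the mismatch estimate
    d = y_plant - y_model corrects the setpoint before the inverse
    model (here a static gain; in the paper, a neural network)
    computes the control input."""
    d_hat = y_plant - y_model
    return inv_gain * (setpoint - d_hat)

def simulate(a=0.8, b=0.5, setpoint=7.0, steps=50):
    """Run IMC on a toy first-order plant y[k+1] = a*y[k] + b*u[k]
    with a perfect internal model; returns the final plant output."""
    y_p = y_m = 0.0
    for _ in range(steps):
        # steady-state inverse of the model: u = (1 - a)/b * target
        u = imc_step(y_p, y_m, setpoint, inv_gain=(1 - a) / b)
        y_p = a * y_p + b * u
        y_m = a * y_m + b * u
    return y_p
```

With a perfect model the mismatch is zero and the output settles at the set point, which is the classical IMC property the NN model approximates for the nonlinear pH dynamics.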

  13. Towards a distributed information architecture for avionics data

    Mattmann, Chris; Freeborn, Dana; Crichton, Dan


    Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.
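The dictionary-mediated query translation can be pictured as a lookup from a common element name to each source's local field name. The element and field names below are invented for illustration; the real OODT middleware works against ISO-11179 profiles over XML messaging, not a literal in-memory dict:

```python
# Hypothetical miniature of the translation step: a common
# (ISO-11179-style) data element is mapped to each source's local
# model before the query is dispatched to that source.
COMMON_DICTIONARY = {
    "avionics.component.mass": {
        "vendor_a": "part_mass_kg",   # hypothetical source schemas
        "vendor_b": "MASS",
    },
}

def translate_query(common_element, source):
    """Rewrite a query on a common element into the source's local field."""
    try:
        return COMMON_DICTIONARY[common_element][source]
    except KeyError:
        raise KeyError(f"{common_element!r} not profiled for source {source!r}")
```

'Plug and play' addition of a new manufacturer source then amounts to registering one more profile per common element, without touching the query clients.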

  14. The single event upset environment for avionics at high latitude

    Sims, A.J.; Dyer, C.S.; Peerless, C.L.; Farren, J.


    Modern avionic systems for civil and military applications are becoming increasingly reliant upon embedded microprocessors and associated memory devices. The phenomenon of single event upset (SEU) is well known in space systems, and designers have generally been careful to use SEU-tolerant devices or to implement error detection and correction (EDAC) techniques where appropriate. In the past, avionics designers have had no reason to consider SEU effects, but it is clear that the more prevalent use of memory devices, combined with increasing levels of IC integration, will make SEU mitigation an important design consideration for future avionic systems. To this end, it is necessary to work towards producing models of the avionics SEU environment which will permit system designers to choose components and EDAC techniques based on predictions of SEU rates correct to much better than an order of magnitude. Measurements of the high-latitude SEU environment at avionics altitude have been made on board a commercial airliner. Results are compared with models of primary and secondary cosmic rays and atmospheric neutrons. Ground-based SEU tests of static RAMs are used to predict rates in flight.
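As a first-order illustration of how such models feed design decisions, an upset-rate estimate is simply particle flux times per-bit upset cross-section times bit count. The numbers below are placeholders; the paper's models additionally fold in particle spectra and altitude/latitude dependence:

```python
def seu_rate_per_day(flux_per_cm2_s, cross_section_cm2_per_bit, n_bits):
    """First-order SEU estimate: upsets/day = flux (particles/cm^2/s)
    x per-bit upset cross-section (cm^2/bit) x number of bits.
    Illustrative sketch only, with hypothetical inputs."""
    per_second = flux_per_cm2_s * cross_section_cm2_per_bit * n_bits
    return per_second * 86400.0  # seconds per day
```

A designer comparing candidate SRAMs (different cross-sections) or EDAC word sizes can use such an estimate to decide whether scrubbing intervals keep the accumulated-error probability acceptable.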

  15. Computer processing techniques in digital radiography research

    Pickens, D.R.; Kugel, J.A.; Waddill, W.B.; Smith, G.D.; Martin, V.N.; Price, R.R.; James, A.E. Jr.


    In the Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, and the Center for Medical Imaging Research, Nashville, TN, there are several activities which are designed to increase the information available from film-screen acquisition as well as from direct digital acquisition of radiographic information. Two of the projects involve altering the display of images after acquisition, either to remove artifacts present as a result of the acquisition process or to change the manner in which the image is displayed to improve the perception of details in the image. These two projects use methods which can be applied to any type of digital image, but are being implemented with images digitized from conventional x-ray film. One of these research endeavors involves mathematical alteration of the image to correct for motion artifacts or registration errors between images that will be subtracted. Another applies well-known image processing methods to digital radiographic images to improve the image contrast and enhance subtle details in the image. A third project involves the use of dual energy imaging with a digital radiography system to reconstruct images which demonstrate either soft tissue details or the osseous structures. These projects are discussed in greater detail in the following sections of this communication

  16. Proceedings Papers of the AFSC (Air Force Systems Command) Avionics Standardization Conference (2nd) Held at Dayton, Ohio on 30 November-2 December 1982. Volume 2


    validation will result in sustainable avionics. REFERENCES: 1. Hitt, Ellis F., Webb, Jeff J., Lucius, Charles E., Bridgman, Michael S., Eldredge... There is a software requirement for cross-compiler facilities for a target computer system. The Project Manager for the effort has been assigned the

  17. Artificial intelligence, expert systems, computer vision, and natural language processing

    Gevarter, W. B.


    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  18. A Tuning Process in a Tunable Architecture Computer System

    深沢, 良彰; 岸野, 覚; 門倉, 敏夫


    A tuning process in a tunable architecture computer is described. We have designed a computer system with a tunable architecture. The main components of this computer are four AM2903 bit-slice chips. The control scheme of the microinstructions is horizontal, and the length of each instruction is 104 bits. Our tuning algorithm utilizes an execution history of machine-level instructions, because the execution history can be regarded as a property of the user program. In execution histories of simila...

  19. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Zhengyang Song


    Full Text Available Wide application of the Internet of Things (IoT system has been increasingly demanding more hardware facilities for processing various resources including data, information, and knowledge. With the rapid growth of generated resource quantity, it is difficult to adapt to this situation by using traditional cloud computing models. Fog computing enables storage and computing services to perform at the edge of the network to extend cloud computing. However, there are some problems such as restricted computation, limited storage, and expensive network bandwidth in Fog computing applications. It is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism of typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of Data Graph, Information Graph, and Knowledge Graph. The proposed mechanism aims to minimize processing cost over network, computation, and storage while maximizing the performance of processing in a business value driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types deliver support for dynamically allocating network resources.
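The storage/computation balancing described above can be caricatured as a greedy placement: the tasks that save the most by running at the edge are placed there until edge capacity runs out, and the rest go to the cloud. The cost model and field names are invented for illustration and are far simpler than the paper's typed-resource optimization:

```python
def place_tasks(tasks, edge_capacity):
    """Greedy sketch of an edge-vs-cloud trade-off. Each task is a dict
    with hypothetical fields: 'name', 'cloud_cost', 'edge_cost', 'size'.
    Tasks with the largest cost saving at the edge are placed there
    first, subject to the edge's storage capacity."""
    placement = {}
    by_saving = sorted(tasks, key=lambda t: t["cloud_cost"] - t["edge_cost"],
                       reverse=True)
    used = 0
    for t in by_saving:
        if t["cloud_cost"] > t["edge_cost"] and used + t["size"] <= edge_capacity:
            placement[t["name"]] = "edge"
            used += t["size"]
        else:
            placement[t["name"]] = "cloud"
    return placement
```

Greedy placement is only a baseline; the point it illustrates is that limited edge storage forces an explicit ranking of which resources repay local processing.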

  20. The Use of Computer Graphics in the Design Process.

    Palazzi, Maria

    This master's thesis examines applications of computer technology to the field of industrial design and ways in which technology can transform the traditional process. Following a statement of the problem, the history and applications of the fields of computer graphics and industrial design are reviewed. The traditional industrial design process…

  1. Selective Bibliography on the History of Computing and Information Processing.

    Aspray, William


    Lists some of the better-known and more accessible books on the history of computing and information processing, covering: (1) popular general works; (2) more technical general works; (3) microelectronics and computing; (4) artificial intelligence and robotics; (5) works relating to Charles Babbage; (6) other biographical and personal accounts;…

  2. Modernization of the process computer at the Onagawa-1 NPP

    Matsuda, Ya.


    Modernization of a process computer, necessitated by the need to increase storage capacity following the introduction of a new type of fuel and to replace worn-out computer components, is described. A comparison of the process computer's parameters before and after modernization is given.

  3. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    Morales Rodriguez, Ricardo; Gani, Rafiqul


    Chemical processes are generally modeled through monoscale approaches, which, while not always adequate, serve a useful role in product-process design. In this case, the use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework...

  4. Investigation of an advanced fault tolerant integrated avionics system

    Dunn, W. R.; Cottrell, D.; Flanders, J.; Javornik, A.; Rusovick, M.


    Presented is an advanced, fault-tolerant multiprocessor avionics architecture as could be employed in an advanced rotorcraft such as LHX. The processor structure is designed to interface with existing digital avionics systems and concepts including the Army Digital Avionics System (ADAS) cockpit/display system, navaid and communications suites, integrated sensing suite, and the Advanced Digital Optical Control System (ADOCS). The report defines mission, maintenance and safety-of-flight reliability goals as might be expected for an operational LHX aircraft. Based on use of a modular, compact (16-bit) microprocessor card family, results of a preliminary study examining simplex, dual and standby-sparing architectures are presented. Given the stated constraints, it is shown that the dual architecture is best suited to meet reliability goals with minimum hardware and software overhead. The report presents hardware and software design considerations for realizing the architecture, including redundancy management requirements and techniques as well as verification and validation needs and methods.

  5. Computer Forensics Field Triage Process Model

    Marcus K. Rogers


    Full Text Available With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time - measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability that once the initial field triage is concluded, the system(s)/storage media be transported back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real world cases, and its investigative importance and pragmatic approach have been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model's forensic soundness, investigative support capabilities and practical considerations.

  6. Automatic processing of radioimmunological research data on a computer

    Korolyuk, I.P.; Gorodenko, A.N.; Gorodenko, S.I.


    A program, ''CRITEST'', written in PL/1 for the EC computer and intended for automatic processing of the results of radioimmunological research, has been developed. The program runs under the operating system of the EC computer and occupies a 60 KB region of main storage. A modified Aitken's algorithm was used in compiling the program. The program was clinically validated in determining a number of hormones: CTH, T4, T3 and TSH. Automatic processing of radioimmunological research data on the computer makes it possible to simplify the labour-consuming analysis and to raise its accuracy.

  7. Future trends in power plant process computer techniques

    Dettloff, K.


    The development of new concepts in process computer technique has advanced in great steps, in three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, e.g. colour layer equipment, have been developed. In software, a decisive step has been made in the area of 'automation software'. Through these components, a step forward has also been made in incorporating the process computer into the structure of the whole power plant control system. (orig./LH) [de]

  8. Graphics processing unit based computation for NDE applications

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.


    Advances in parallel processing in recent years are helping to improve the cost of numerical simulation. Breakthroughs in Graphics Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. Performance improvement of the GPU implementation against a serial CPU implementation is then discussed.
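The heat-diffusion scheme mentioned above is, in serial reference form, a stencil update per interior grid point; this is exactly the loop nest that a CUDA kernel maps onto one thread per point. Below is a pure-Python sketch with unit grid spacing and fixed boundaries (a generic explicit scheme, not the paper's code; it is stable for alpha <= 0.25 at this spacing):

```python
def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of the 2-D heat equation
    u_t = alpha * (u_xx + u_yy) with dx = dy = dt = 1. Boundary values
    are held fixed; each interior point gets the 5-point Laplacian."""
    n, m = len(u), len(u[0])
    new = [row[:] for row in u]  # copy; boundaries stay unchanged
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = (u[i + 1][j] + u[i - 1][j]
                   + u[i][j + 1] + u[i][j - 1] - 4 * u[i][j])
            new[i][j] = u[i][j] + alpha * lap
    return new
```

On a GPU the two nested loops disappear: each (i, j) becomes a thread, which is why stencil codes like this are a standard first target for CUDA acceleration.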

  9. Towards Process Support for Migrating Applications to Cloud Computing

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali


    Cloud computing is an active area of research for industry and academia. There are a large number of organizations providing cloud computing infrastructure and services. In order to utilize these infrastructure resources and services, existing applications need to be migrated to clouds. However...... for supporting migration to cloud computing based on our experiences from migrating an Open Source System (OSS), Hackystat, to two different cloud computing platforms. We explained the process by performing a comparative analysis of our efforts to migrate Hackystat to Amazon Web Services and Google App Engine....... We also report the potential challenges, suitable solutions, and lessons learned to support the presented process framework. We expect that the reported experiences can serve as guidelines for those who intend to migrate software applications to cloud computing....

  10. Snore related signals processing in a private cloud computing system.

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan


    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications both in academia and industry, and it has the potential to realize a huge blueprint in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then ran comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  11. Proceedings: Distributed digital systems, plant process computers, and networks


    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. Original plant process computers at the nuclear power plants are becoming obsolete resulting in increasing difficulties in their effectiveness to support plant operations and maintenance. Some utilities have already replaced their plant process computers by more powerful modern computers while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations are presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  12. Design, functioning and possible applications of process computers

    Kussl, V.


    Process computers are useful as automation instruments a) when large amounts of data are processed in analog or digital form, b) for low data flow (data rate), and c) when data must be stored over short or long periods of time. (orig./AK) [de]

  13. Splash, pop, sizzle: Information processing with phononic computing

    Sophia R. Sklan


    Full Text Available Phonons, the quanta of mechanical vibration, are important to the transport of heat and sound in solid materials. Recent advances in the fundamental control of phonons (phononics have brought into prominence the potential role of phonons in information processing. In this review, the many directions of realizing phononic computing and information processing are examined. Given the relative similarity of vibrational transport at different length scales, the related fields of acoustic, phononic, and thermal information processing are all included, as are quantum and classical computer implementations. Connections are made between the fundamental questions in phonon transport and phononic control and the device level approach to diodes, transistors, memory, and logic.

  14. Computer Processing Of Tunable-Diode-Laser Spectra

    May, Randy D.


    Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
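The least-squares fitting step can be illustrated by the closed-form straight-line fit that such routines reduce to after the spectra are linearized. This is a generic sketch, not the spectrometer library's actual routine:

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = slope*x + intercept:
    slope = sum((x - mx)(y - my)) / sum((x - mx)^2). Returns
    (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx
```

Fitting direct-transmission or harmonic-absorption line shapes generalizes this to nonlinear models, but the normal-equation idea (minimize summed squared residuals) is the same.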

  15. Computer Vision and Image Processing: A Paper Review

    victor - wiley


    Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of the recent technologies and theoretical concepts explaining the development of computer vision, especially as related to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand information on events or descriptions, and recognize scenic patterns. It draws on multiple application domains with massive data analysis. This paper contributes a review of recent developments in computer vision, image processing, and related studies. We categorize the computer vision mainstream into groups such as image processing, object recognition, and machine learning, and we also briefly explain up-to-date information about the techniques and their performance.

  16. Computer-Aided Modeling of Lipid Processing Technology

    Diaz Tovar, Carlos Axel


    increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation...... are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately...... to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC...

  17. Applications of evolutionary computation in image processing and pattern recognition

    Cuevas, Erik; Perez-Cisneros, Marco


    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  18. The MGS Avionics System Architecture: Exploring the Limits of Inheritance

    Bunker, R.


    Mars Global Surveyor (MGS) avionics system architecture comprises much of the electronics on board the spacecraft: electrical power, attitude and articulation control, command and data handling, telecommunications, and flight software. Schedule and cost constraints dictated a mix of new and inherited designs, especially hardware upgrades based on findings of the Mars Observer failure review boards.

  19. A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems

    Han, Pujie; Zhai, Zhengjun; Nielsen, Brian


    This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems that consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL...

  20. An overview of computer-based natural language processing

    Gevarter, W. B.


    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines in natural language (like English, Japanese, German, etc., in contrast to formal computer languages). The doors that such an achievement can open have made this a major research area in Artificial Intelligence and Computational Linguistics. Commercial natural language interfaces to computers have recently entered the market, and the future looks bright for other applications as well. This report reviews the basic approaches to such systems, the techniques utilized, applications, the state of the art of the technology, issues and research requirements, the major participants, and finally, future trends and expectations. It is anticipated that this report will prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.

  1. Research on application of computer technologies in jewelry process

    Junbo Xia


    Jewelry production is a process that works with precious raw materials and must keep processing losses low. The traditional manual mode is unable to meet the needs of enterprises in practice, while the involvement of computer technology can solve this practical problem. At present, what restricts the application of computers in jewelry production is mainly the failure to find a production model that can serve the whole industry chain with the computer at the core of production. This paper designs a "synchronous and diversified" production model with computer-aided design technology and rapid prototyping technology at its core, tests it with actual production cases, and achieves certain results, which are forward-looking and advanced.

  2. Image processing with massively parallel computer Quadrics Q1

    Della Rocca, A.B.; La Porta, L.; Ferriani, S.


    Aimed at evaluating the image processing capabilities of the massively parallel computer Quadrics Q1, this report describes a convolution algorithm implemented on it. First, the mathematical definition of discrete convolution is recalled, together with the main Q1 hardware and software features. Then the different codification forms of the algorithm are described, and the Q1 performance is compared with that obtained on other computers. Finally, the conclusions summarize the main results and suggestions.
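    The discrete 2D convolution recalled in the report can be written as a direct serial reference implementation (the Q1-specific codification is not reproduced in the abstract); a sketch in Python with zero-padded borders:

    ```python
    def convolve2d(image, kernel):
        """Direct discrete 2D convolution (zero-padded borders).

        out[i][j] = sum over (m, n) of image[i-m][j-n] * kernel[m][n]
        """
        H, W = len(image), len(image[0])
        kh, kw = len(kernel), len(kernel[0])
        out = [[0.0] * W for _ in range(H)]
        for i in range(H):
            for j in range(W):
                acc = 0.0
                for m in range(kh):
                    for n in range(kw):
                        ii, jj = i - m, j - n
                        if 0 <= ii < H and 0 <= jj < W:
                            acc += image[ii][jj] * kernel[m][n]
                out[i][j] = acc
        return out
    ```

    A massively parallel machine distributes the independent (i, j) output pixels across processors, which is what makes convolution a natural benchmark for such hardware.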

  3. Computer-integrated electric-arc melting process control system

    Дёмин, Дмитрий Александрович


    Developing common principles for assembling melting-process automation systems from hardware, and creating on their basis rational variants of computer-integrated electric-arc melting control systems, is a pressing task, since it allows a comprehensive approach to modernizing the melting sections of workshops. This approach allows the computer-integrated electric-arc furnace control system to be formed as part of a queuing system "electric-arc furnace - foundry conveyor" and to consider, when taking ...

  4. Deep Learning in Visual Computing and Signal Processing

    Xie, Danfeng; Zhang, Lei; Bai, Li


    Deep learning is a subfield of machine learning, which aims to learn a hierarchy of features from input data. Nowadays, researchers have intensively investigated deep learning algorithms for solving challenging problems in many areas such as image classification, speech recognition, signal processing, and natural language processing. In this study, we not only review typical deep learning algorithms in computer vision and signal processing but also provide detailed information on how to apply...


    Marko Hadjina; Nikša Fafandjel; Tin Matulja


    In this research a shipbuilding production process design methodology using computer simulation is suggested. The suggested methodology is expected to provide a better and more efficient tool for the design of complex shipbuilding production processes. In the first part of this research, existing practice for production process design in shipbuilding is discussed, and its shortcomings and problems are emphasized. Subsequently, the discrete event simulation modelling method, as the basis of the sugge...

  6. Application of computer data processing of well logging in Azerbaijan

    Vorob'ev, Yu.A.; Shilov, G.Ya.; Samedova, A.S.


    The transition from manual quantitative interpretation of well-logging study (WLS) materials to the use of computers in the production association (PA) Azneftegeologiya is described. WLS materials were processed manually in the PA until 1986. Later, interpretation was carried out using computers to determine the clayiness, porosity, oil and gas saturation, and fluid content of strata. Examples of the presentation of results of computer interpretation of WLS data (including gamma logging and neutron-gamma logging) for determining the porosity and oil saturation of sandy mudrocks are given

  7. Desk-top computer assisted processing of thermoluminescent dosimeters

    Archer, B.R.; Glaze, S.A.; North, L.B.; Bushong, S.C.


    An accurate dosimetric system utilizing a desk-top computer and high-sensitivity ribbon-type TLDs has been developed. The system incorporates an exposure history file and procedures designed for constant spatial orientation of each dosimeter. Processing of information is performed by two computer programs. The first calculates relative response factors to ensure that the corrected response of each TLD is identical following a given dose of radiation. The second program computes a calibration factor and uses it, together with the relative response factor, to determine the actual dose registered by each TLD. (U.K.)
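    The two-program procedure described above can be sketched as follows; the function names and numeric readings are hypothetical illustrations, not the authors' code:

    ```python
    def relative_response_factors(readings_same_dose):
        """Step 1: give each TLD a factor so that, after the same known dose,
        every corrected response equals the batch mean."""
        mean = sum(readings_same_dose) / len(readings_same_dose)
        return [mean / r for r in readings_same_dose]

    def doses(field_readings, factors, cal_reading, cal_factor, cal_dose):
        """Step 2: derive a calibration factor (dose per corrected reading unit)
        from a calibration TLD, then apply it to corrected field readings."""
        k = cal_dose / (cal_reading * cal_factor)  # calibration factor
        return [r * f * k for r, f in zip(field_readings, factors)]
    ```

    With this scheme, two TLDs that received the same dose but differ in intrinsic sensitivity report the same dose after correction.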

  8. Image processing and computer graphics in radiology. Pt. A

    Toennies, K.D.


    The reports give a full review of all aspects of digital imaging in radiology that are of significance for image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented toward practice and illustrates the various contributions from specialized areas of computer science, such as computer vision, computer graphics, database systems, information and communication systems, man-machine interaction, and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  9. Image processing and computer graphics in radiology. Pt. B

    Toennies, K.D.


    The reports give a full review of all aspects of digital imaging in radiology that are of significance for image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented toward practice and illustrates the various contributions from specialized areas of computer science, such as computer vision, computer graphics, database systems, information and communication systems, man-machine interaction, and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  10. Computation and brain processes, with special reference to neuroendocrine systems.

    Toni, Roberto; Spaletta, Giulia; Casa, Claudia Della; Ravera, Simone; Sandri, Giorgio


    The development of neural networks and brain automata has made neuroscientists aware that the performance limits of these brain-like devices lie, at least in part, in their computational power. The computational basis of a standard cybernetic design, in fact, is that of a discrete and finite state machine, or Turing Machine (TM). In contrast, it has been suggested that a number of human cerebral activities, from feedback controls up to mental processes, rely on a mix of both finitary, digital-like and infinitary, continuous-like procedures. Therefore, the central nervous system (CNS) of man would exploit a form of computation going beyond that of a TM. This "non-conventional" computation has been called hybrid computation. Some basic structures for hybrid brain computation are believed to be the brain computational maps, in which both Turing-like (digital) computation and continuous (analog) forms of calculus might occur. The cerebral cortex and brain stem appear to be primary candidates for this processing. However, neuroendocrine structures like the hypothalamus are also believed to exhibit hybrid computational processes, and might give rise to computational maps. Current theories of neural activity, including wiring and volume transmission, neuronal group selection, and dynamic evolving models of brain automata, lend support to the existence of natural hybrid computation, stressing a cooperation between discrete and continuous forms of communication in the CNS. In addition, the recent advent of neuromorphic chips, like those used to restore activity in damaged retina and visual cortex, suggests that the assumption of a discrete-continuum polarity in designing biocompatible neural circuitries is crucial for their ensuing performance. In these bionic structures, in fact, a correspondence exists between the original anatomical architecture and the synthetic wiring of the chip, resulting in a correspondence between natural and cybernetic neural activity. Thus, chip "form

  11. Exploiting graphics processing units for computational biology and bioinformatics.

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H


    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
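    The all-pairs distance computation used as the article's running example can be stated as a straightforward serial CPU reference; the GPU version parallelizes this loop nest, one thread per (i, j) pair. A sketch in Python (not the article's code):

    ```python
    def all_pairs_dist(X):
        """Euclidean distance between every pair of instances (rows) of X."""
        n = len(X)
        D = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                # Each (i, j) entry is independent: the GPU assigns one thread per pair.
                d = sum((a - b) ** 2 for a, b in zip(X[i], X[j])) ** 0.5
                D[i][j] = D[j][i] = d
        return D
    ```

    The serial cost is O(n² d) for n instances of dimension d, which is why the embarrassingly parallel GPU mapping yields the large speedups the article reports.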

  12. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.


    Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and governments to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several GES DISC's applications to the Nebula as a proof of concept, including: a) The Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly

  13. Business Process Quality Computation : Computing Non-Functional Requirements to Improve Business Processes

    Heidari, F.


    Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis

  14. Process computer system for the prototype ATR 'Fugen'

    Oteru, Shigeru


    In recent nuclear power plants, computers are regarded as one component among the plant equipment, and data processing, plant monitoring, and performance calculation tend to be carried out with one on-line computer. As plants become large and complex and operational conditions become strict, systems that perform performance calculation and reflect the results immediately in operation are being introduced. In the process computer for the prototype ATR "Fugen", a prediction function, which simulates the state after an operation involving a reactivity change in the core, such as the movement of control rods or the control of liquid poison, before the operation is carried out, was provided in addition to the functions of data processing, plant monitoring, and detailed performance calculation. The core periodic monitoring program, core operational aid program, core any-time data collecting program, core periodic data collecting program, and their application programs are explained. Core performance calculation comprises the calculation of the thermal output distribution in the core and the various accompanying characteristics, and the monitoring of thermal limiting values. The computer used is a Hitachi control computer HIDIC-500, to which typewriters, a process colored display, an operating console, and other peripheral equipment are connected. (Kako, I.)

  15. Some Aspects of Process Computers Configuration Control in Nuclear Power Plant Krsko - Process Computer Signal Configuration Database (PCSCDB)

    Mandic, D.; Kocnar, R.; Sucic, B.


    During the operation of NEK and other nuclear power plants it has been recognized that certain issues related to the usage of digital equipment and associated software in NPP technological process protection, control and monitoring are not adequately addressed in the existing programs and procedures. The term and process of Process Computer Configuration Control joins three 10CFR50 Appendix B quality requirements of process computer application in NPPs: Design Control, Document Control, and Identification and Control of Materials, Parts and Components. This paper describes the Process Computer Signal Configuration Database (PCSCDB), which was developed and implemented to resolve some aspects of process computer configuration control related to the signals, or database points, that exist in the life cycle of the different Process Computer Systems (PCS) in Nuclear Power Plant Krsko. PCSCDB is a controlled master database for the definition and description of the configurable database points associated with all Process Computer Systems in NEK. PCSCDB holds attributes related to the configuration of addressable and configurable real-time database points, as well as attributes related to signal life cycle references and history data, such as: Input/Output signals; manually input database points; program constants; setpoints; database points calculated by application programs or SCADA calculation tools; control flags (for example, enabling or disabling a certain program feature); signal acquisition design references to the DCM (Document Control Module, application software for document control within the Management Information System - MIS) and the MECL (Master Equipment and Component List, MIS application software for identification and configuration control of plant equipment and components); the usage of a particular database point in particular application software packages and in man-machine interface features (display mimics, printout reports, ...); and signals history (EEAR Engineering

  16. Avionics Configuration Assessment for Flightdeck Interval Management: A Comparison of Avionics and Notification Methods

    Latorella, Kara A.


    Flightdeck Interval Management is one of the NextGen operational concepts that the FAA is sponsoring to realize requisite National Airspace System (NAS) efficiencies. Interval Management will reduce variability in temporal deviations at a position, and thereby reduce the buffers typically applied by controllers, resulting in higher arrival rates and more efficient operations. Ground software generates a strategic schedule of aircraft pairs. Air Traffic Control (ATC) provides an IM clearance with the IM spacing objective (i.e., the TTF, and at which point to achieve the appropriate spacing from this aircraft) to the IM aircraft. Pilots must dial FIM speeds into the speed window on the Mode Control Panel in a timely manner, and attend to deviations between actual speed and the instantaneous FIM profile speed. Here, the crew is assumed to be operating the aircraft with autothrottles on, with autopilot engaged, and the autoflight system in Vertical Navigation (VNAV) and Lateral Navigation (LNAV); and is responsible for safely flying the aircraft while maintaining situation awareness of their ability to follow FIM speed commands and to achieve the FIM spacing goal. The objective of this study is to examine whether three Notification Methods and four Avionics Conditions affect pilots' performance, their ratings on constructs associated with performance (workload, situation awareness), or their opinions on acceptability. The three Notification Methods (alternating visual and aural alerts that notified pilots of the onset of a speed target and of conformance deviations from the required speed profile, and reminded them if they failed to enter the speed within 10 seconds) were: VVV (visuals for all three events), VAV (visuals for all three events, plus an aural for speed conformance deviations), and AAA (visual indications and the same aural to indicate all three of these events). Avionics Conditions were defined by the instrumentation (and location) used to

  17. Bioinformation processing a primer on computational cognitive science

    Peterson, James K


    This book shows how mathematics, computer science and science can be usefully and seamlessly intertwined. It begins with a general model of cognitive processes in a network of computational nodes, such as neurons, using a variety of tools from mathematics, computational science and neurobiology. It then moves on to solve the diffusion model from a low-level random walk point of view. It also demonstrates how this idea can be used in a new approach to solving the cable equation, in order to better understand the neural computation approximations. It introduces specialized data for emotional content, which allows a brain model to be built using MatLab tools, and also highlights a simple model of cognitive dysfunction.

  18. Experimental data processing techniques by a personal computer

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.


    A personal computer (16-bit, about 1 MB of memory) can be used at low cost for experimental data processing. This report surveys important techniques for the A/D and D/A conversion, display, storage, and transfer of experimental data. Items to be considered in the software are also discussed. Practical software programmed in BASIC and Assembler is given as examples. We present some techniques for faster processing in BASIC and show that a system combining BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested processing speed with some typical programming languages: BASIC (interpreter), C, FORTRAN, and Assembler. For calculation, FORTRAN has the best performance, comparable to or better than Assembler, even on a personal computer. (author)

  19. Developments in medical image processing and computational vision

    Jorge, Renato


    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  20. Application of Computer Simulation Modeling to Medication Administration Process Redesign

    Huynh, Nathan; Snyder, Rita; Vidal, Jose M.; Tavakoli, Abbas S.; Cai, Bo


    The medication administration process (MAP) is one of the most high-risk processes in health care. MAP workflow redesign can precipitate both unanticipated and unintended consequences that can lead to new medication safety risks and workflow inefficiencies. Thus, it is necessary to have a tool to evaluate the impact of redesign approaches in advance of their clinical implementation. This paper discusses the development of an agent-based MAP computer simulation model that can be used to assess...

  1. 77 FR 51571 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...


    ... Music and Data Processing Devices, Computers, and Components Thereof; Notice of Receipt of Complaint... complaint entitled Wireless Communication Devices, Portable Music and Data Processing Devices, Computers..., portable music and data processing devices, computers, and components thereof. The complaint names as...

  2. Tutorial: Signal Processing in Brain-Computer Interfaces

    Garcia Molina, G.


    Research in Electroencephalogram (EEG) based Brain-Computer Interfaces (BCIs) has been expanding considerably during the last few years. Such an expansion owes, to a large extent, to the multidisciplinary and challenging nature of BCI research. Signal processing undoubtedly constitutes an essential

  3. The use of process computers in reactor protection systems


    The report contains the papers presented at the LRA information meeting in spring 1972, concerning the use of process computers in reactor protection systems. The main interest was directed at a system conception as proposed from AEG for future BWR-plants. (orig.) [de

  4. Computer program for source distribution process in radiation facility

    Al-Kassiri, H.; Abdul Ghani, B.


    Computer simulation of dose distribution has been carried out using Visual Basic, according to the arrangement and activities of the Co-60 sources. The program provides the dose distribution in treated products depending on the product density and desired dose. The program is useful for optimizing the source distribution during the loading process. There is good agreement between data calculated by the program and experimental data. (Author)
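    A sketch of the kind of superposition calculation such a program performs: an inverse-square point-kernel model with a crude exponential attenuation term. The geometry, constants, and function names here are illustrative assumptions, not the authors' Visual Basic code:

    ```python
    import math

    def dose_rate(point, sources, mu_rho=0.0):
        """Relative dose rate at `point` from an arrangement of point sources.

        sources: list of ((x, y, z), activity) pairs for each Co-60 pencil.
        mu_rho:  effective attenuation coefficient times product density along
                 the straight path (a simplification; 0 means no attenuation).
        D = sum over sources of A_i * exp(-mu_rho * r_i) / r_i**2
        """
        total = 0.0
        for (sx, sy, sz), activity in sources:
            r = math.dist(point, (sx, sy, sz))
            if r > 0:  # skip the singular point at a source location
                total += activity * math.exp(-mu_rho * r) / r ** 2
        return total
    ```

    Scanning a grid of points through the product volume with this kernel gives the dose map whose minimum and maximum are compared against the desired dose when optimizing the source loading.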

  5. Development of Integrated Modular Avionics Application Based on Simulink and XtratuM

    Fons-Albert, Borja; Usach-Molina, Hector; Vila-Carbo, Joan; Crespo-Lorente, Alfons


    This paper presents an integral approach to designing avionics applications that meets the requirements for software development and execution in this application domain. Software design follows the model-based design process and is performed in Simulink. This approach allows easy and quick testbench development and helps satisfy DO-178B requirements through the use of proper tools. The software execution platform is based on XtratuM, a minimal bare-metal hypervisor designed in our research group. XtratuM provides support for IMA-SP (Integrated Modular Avionics for Space) architectures. This approach allows the code generated from a Simulink model to be executed on top of Lithos as a XtratuM partition. Lithos is an ARINC-653 compliant RTOS for XtratuM. The paper concentrates on how to smoothly port Simulink designs to XtratuM, solving problems such as application partitioning, automatic code generation, real-time tasking, and interfacing. This process is illustrated with an autopilot design test using a flight simulator.

  6. [INVITED] Computational intelligence for smart laser materials processing

    Casalino, Giuseppe


    Computational intelligence (CI) involves using computer algorithms to capture hidden knowledge from data and to use it for training an "intelligent machine" to make complex decisions without human intervention. As simulation becomes more prevalent from design and planning to manufacturing and operations, laser material processing can also benefit from computer-generated knowledge through soft computing. This work reviews the state of the art of the methodology and applications of CI in laser materials processing (LMP), which is nowadays receiving increasing interest from world-class manufacturers and Industry 4.0. The focus is on methods that have proven effective and robust in solving several problems in welding, cutting, drilling, surface treating, and additive manufacturing using the laser beam. After a basic description of the most common computational intelligence techniques employed in manufacturing, four sections, namely laser joining, machining, surface treatment, and additive manufacturing, cover the most recent applications in the already extensive literature on CI in LMP. Finally, emerging trends and future challenges are identified and discussed.

  7. Definition, analysis and development of an optical data distribution network for integrated avionics and control systems. Part 2: Component development and system integration

    Yen, H. W.; Morrison, R. J.


    Fiber optic transmission is emerging as an attractive concept for data distribution onboard civil aircraft. Development of an Optical Data Distribution Network for Integrated Avionics and Control Systems for commercial aircraft will provide a data distribution network that gives freedom from EMI-RFI and ground-loop problems, eliminates crosstalk and short circuits, provides protection and immunity from lightning-induced transients, and gives a large-bandwidth data transmission capability. In addition, there is the potential for significantly reducing the weight and increasing the reliability over conventional data distribution networks. Wavelength Division Multiplexing (WDM) is a candidate method for data communication between the various avionic subsystems. With WDM, all systems could conceptually communicate with each other without time sharing and without requiring complicated coding schemes for each computer and subsystem to recognize a message. However, the state of the art of optical technology limits the application of fiber optics in advanced integrated avionics and control systems. Therefore, it is necessary to address the architecture of a fiber optic data distribution system for integrated avionics and control systems, as well as to develop prototype components and systems.

  8. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)


    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction.Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  9. Single instruction computer architecture and its application in image processing

    Laplante, Phillip A.


    A single-instruction computer system using only half-adder circuits is described. It is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general-purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine, in fact the "ultimate RISC" machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine, which in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how a full-adder, min, max and other operations can be implemented using the half-adder, an array of such full-adders is used to implement the dilation operation for two black-and-white images. The erosion operation of two black-and-white images is then implemented using a relative complement function and the properties of erosion and dilation. This approach was inspired by papers by van der Poel, in which a single instruction is used to furnish a complete set of general-purpose instructions, and by Böhm and Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.
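    A sketch of the morphological part of this idea: single-bit OR (max) and AND (min) derived from half-adders only, with binary dilation built on top and erosion obtained by complementation. This is an illustration of the approach in Python, not the paper's hardware design:

    ```python
    def half_adder(a, b):
        return a ^ b, a & b          # (sum, carry) for bits a, b in {0, 1}

    def bit_or(a, b):                # max of two bits, built from half-adders only
        s, c = half_adder(a, b)
        return half_adder(s, c)[0]   # s XOR c equals a OR b for single bits

    def bit_and(a, b):               # min of two bits: the half-adder carry
        return half_adder(a, b)[1]

    def dilate(img, se):
        """Binary dilation: OR of the image shifted over structuring-element offsets."""
        H, W = len(img), len(img[0])
        out = [[0] * W for _ in range(H)]
        for i in range(H):
            for j in range(W):
                v = 0
                for di, dj in se:
                    ii, jj = i - di, j - dj
                    if 0 <= ii < H and 0 <= jj < W:
                        v = bit_or(v, img[ii][jj])
                out[i][j] = v
        return out

    def erode(img, se):
        """Erosion via complement: erode(A) = NOT dilate(NOT A, reflected se).
        (Zero padding makes the border treatment lenient; adequate for a sketch.)"""
        comp = [[1 - p for p in row] for row in img]
        neg_se = [(-di, -dj) for di, dj in se]
        return [[1 - p for p in row] for row in dilate(comp, neg_se)]
    ```

    Dilating a single foreground pixel by a cross-shaped structuring element yields the cross, and eroding that cross with the same element recovers the single pixel.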

  10. Parallel processing using an optical delay-based reservoir computer

    Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy


    Delay systems subject to delayed optical feedback have recently shown great potential for solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, the reservoir computing systems based on delay dynamics discussed in the literature are built by coupling many different stand-alone components, which leads to bulky, non-monolithic systems lacking long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs), semiconductor lasers whose cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions, between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even with input data signals of different natures, can be computed simultaneously using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and classification for nonlinear channel equalization. We take advantage of the different directional modes to process individual tasks: each directional mode processes one task, to mitigate possible crosstalk between the tasks. Our results indicate that prediction and classification with errors comparable to state-of-the-art performance can be obtained, even with noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].

  11. Teaching and Learning of Computational Modelling in Creative Shaping Processes

    Daniela REIMANN


    Today, diverse design-related disciplines are required to deal actively with the digitization of information and with its potentials and side effects for educational processes. In Germany, technology didactics developed within vocational education, and computer science education within general education, both separated from media pedagogy as an after-school program; media education is not yet a school subject in German schools. In this paper, however, we argue for an interdisciplinary approach to learning about computational modeling in creative processes and aesthetic contexts, one that crosses the borders between programming technology and arts and design processes in meaningful contexts. Educational scenarios using smart textile environments are introduced and reflected upon for project-based learning.

  12. Computer modeling of lung cancer diagnosis-to-treatment process.

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U; Yu, Xinhua; Faris, Nick; Li, Jingshan


    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the data and procedures necessary to develop a DES model of the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed-form formulas. Markov chain models and their application in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure for deriving closed-form formulas evaluating diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed.
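
The Markov chain approach mentioned above amounts to modeling care-delivery steps as states and computing quantities such as the expected number of steps to treatment from the fundamental matrix N = (I - Q)^(-1). A minimal sketch with invented transition probabilities:

```python
import numpy as np

# Hypothetical illustration: transient states = {referral, imaging, staging},
# absorbing state = {treatment}. All transition probabilities are invented.
#                 refer  image  stage  treat
P = np.array([[0.10, 0.80, 0.05, 0.05],   # referral
              [0.00, 0.20, 0.70, 0.10],   # imaging
              [0.00, 0.05, 0.15, 0.80],   # staging
              [0.00, 0.00, 0.00, 1.00]])  # treatment (absorbing)

Q = P[:3, :3]                             # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)          # fundamental matrix
steps_to_treatment = N.sum(axis=1)        # expected steps from each state
print(steps_to_treatment)
```

With real transition probabilities estimated from patient records, the same two lines of linear algebra yield expected delays from any point in the care pathway.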

  13. Case studies in Gaussian process modelling of computer codes

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony


    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics.
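
The core of such an emulator is Gaussian process regression conditioned on a small number of code runs. A minimal sketch, with a toy one-dimensional "code" and a squared-exponential kernel standing in for the real models:

```python
import numpy as np

def code(x):                         # the expensive "computer code" (toy stand-in)
    return np.sin(3 * x) + x

def rbf(a, b, ell=0.3):              # squared-exponential covariance
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# A handful of training runs of the code
X = np.linspace(0, 1, 8)
y = code(X)

# GP posterior mean at new inputs (noise-free interpolation, jitter for stability)
Xs = np.linspace(0, 1, 50)
K = rbf(X, X) + 1e-9 * np.eye(len(X))
mean = rbf(Xs, X) @ np.linalg.solve(K, y)

err = np.max(np.abs(mean - code(Xs)))
print(f"max emulator error: {err:.4f}")
```

Once the cheap emulator reproduces the code this closely, sensitivity and uncertainty analyses can be run on it instead of on the expensive code itself.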

  14. Analytical calculation of heavy quarkonia production processes in computer

    Braguta, V V; Likhoded, A K; Luchinsky, A V; Poslavsky, S V


    This report is devoted to the analytical calculation, on a computer, of heavy quarkonia production processes in modern experiments such as the LHC, B-factories and super-B-factories. The theoretical description of heavy quarkonia is based on the factorization theorem, which leads to a special structure of the production amplitudes that can be used to develop a computer algorithm calculating these amplitudes automatically. This report describes that algorithm. As examples of its application, we present results for double charmonium production in bottomonium decays and for inclusive χcJ meson production in pp collisions.

  15. Test bank to accompany Computers data and processing

    Deitel, Harvey M


    Test Bank to Accompany Computers and Data Processing provides a variety of questions from which instructors can easily custom-tailor exams appropriate for their particular courses. The book contains over 4000 short-answer questions that span the full range of topics for an introductory computing course. It is organized into five parts encompassing 19 chapters, and provides a very large number of questions so that instructors can produce different exams testing essentially the same topics in succeeding semesters. Three types of questions are included, among them multiple choice.

  16. Computational Modeling in Plasma Processing for 300 mm Wafers

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)


    Migration toward the 300 mm wafer size has recently begun, driven by process economics and by future demand for integrated circuits. A major issue facing the semiconductor community at this juncture is the development of suitable processing equipment, for example, plasma processing reactors that can accommodate 300 mm wafers. In this Invited Talk, scaling of reactors is discussed with the aid of computational fluid dynamics (CFD) results. We have undertaken reactor simulations using CFD, with reactor geometry, pressure, and precursor flow rates as parameters in a systematic investigation. These simulations provide guidelines for scaling up reactor designs.

  17. A critique of reliability prediction techniques for avionics applications

    Guru Prasad PANDIAN


    The avionics (aeronautics and aerospace) industries must rely on components and systems of demonstrated high reliability. For this, handbook-based methods have traditionally been used to design for reliability, develop test plans, and define maintenance requirements and sustainment logistics. However, these methods have been criticized as flawed and as leading to inaccurate and misleading results. In its recent report on enhancing defense system reliability, the U.S. National Academy of Sciences discredited these methods, judging the Military Handbook (MIL-HDBK-217) and its progeny to be invalid and inaccurate. This paper discusses the issues that arise with the use of handbook-based methods in commercial and military avionics applications. Alternative approaches to reliability design (and its demonstration) are also discussed, including similarity analysis, testing, physics-of-failure, and data analytics for prognostics and systems health management.

  18. Sail GTS ground system analysis: Avionics system engineering

    Lawton, R. M.


    A comparison of two different concepts for the guidance, navigation and control test set signal ground system is presented. The first concept uses a ground plate to which the crew station, avionics racks, electrical power distribution system, master electrical common connection assembly, and Marshall mated elements system grounds are connected by 4/0 welding cable. An alternate approach uses an aluminum sheet to interconnect the signal ground reference points between the crew station and avionics racks. The comparison quantifies the differences between the two concepts in terms of dc resistance, ac resistance and inductive reactance. These parameters are figures of merit for ground system conductors, in that the system with the lowest impedance is the most effective in minimizing noise voltage. Although the welding cable system is probably adequate, the aluminum sheet system provides a higher probability of a successful system design.
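
The figures of merit named above (dc resistance and inductive reactance) can be estimated from standard formulas. The sketch below compares a 4/0 copper welding cable with a wide aluminum sheet over a hypothetical 10 m run, using Grover's low-frequency partial-inductance approximations; all dimensions are illustrative, not from the report.

```python
import math

RHO_CU, RHO_AL = 1.68e-8, 2.65e-8     # resistivity, ohm*m
L_RUN = 10.0                          # hypothetical 10 m ground run

# 4/0 welding cable: ~107 mm^2 copper cross-section
a_cable = 107e-6
r_dc_cable = RHO_CU * L_RUN / a_cable

# Aluminum sheet: hypothetical 1 m wide, 1.5 mm thick
w, t = 1.0, 1.5e-3
r_dc_sheet = RHO_AL * L_RUN / (w * t)

# Partial self-inductance (Grover's low-frequency approximations, lengths in cm)
l_cm, r_cm = L_RUN * 100, math.sqrt(a_cable / math.pi) * 100
ind_wire = 2e-9 * l_cm * (math.log(2 * l_cm / r_cm) - 0.75)          # henry
w_cm, t_cm = w * 100, t * 100
ind_strip = 2e-9 * l_cm * (math.log(2 * l_cm / (w_cm + t_cm)) + 0.5
                           + 0.2235 * (w_cm + t_cm) / l_cm)

print(f"R_dc  cable {r_dc_cable*1e3:.2f} mohm, sheet {r_dc_sheet*1e3:.2f} mohm")
print(f"L     cable {ind_wire*1e6:.2f} uH,   sheet {ind_strip*1e6:.2f} uH")
```

With these (invented) dimensions the sheet wins on both counts, which is consistent with the report's conclusion that the lower-impedance sheet better suppresses noise voltage.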

  19. Computer aided analysis, simulation and optimisation of thermal sterilisation processes.

    Narayanan, C M; Banerjee, Arindam


    Although thermal sterilisation is a widely employed industrial process, little work has been reported in the available literature, including patents, on the mathematical analysis and simulation of these processes. In the present work, software packages have been developed for computer-aided optimum design of thermal sterilisation processes. Systems involving steam sparging, jacketed heating/cooling, helical coils submerged in agitated vessels, and systems that employ external heat exchangers (double pipe, shell and tube, and plate exchangers) have been considered. Both batch and continuous operations have been analysed and simulated. The dependence of the del factor on system and operating parameters such as the mass or volume of substrate to be sterilised per batch, speed of agitation, helix diameter, substrate-to-steam ratio, and the rate of substrate circulation through the heat exchanger and through the holding tube has been analysed separately for each mode of sterilisation. Axial dispersion in the holding tube has also been adequately accounted for through an appropriately defined axial dispersion coefficient. The effect of exchanger characteristics and specifications on system performance has also been analysed. The multiparameter computer-aided design (CAD) software packages developed are thus highly versatile and permit the optimum choice of operating variables for the selected processes. The computed results have been compared with extensive data collected from a number of industries (distilleries, food processing and pharmaceutical industries) and pilot plants, and satisfactory agreement has been observed, confirming the accuracy of the CAD software developed. No simplifying assumptions were made during the analysis, and the design of the associated heating/cooling equipment was performed using the most up-to-date design correlations and software.
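
The del factor analysed above is defined as del = ln(N0/N), obtained by integrating an Arrhenius death-rate constant k(T) over the temperature-time profile. A minimal sketch for a batch heat-up and hold, with rate constants chosen so that k(121 C) is about 0.0125 per second (of the order commonly quoted for Bacillus stearothermophilus spores); the profile and all constants are illustrative:

```python
import math

# Arrhenius death-rate constant; A and EA are illustrative values only.
A, EA, R = 4e35, 2.83e5, 8.314          # 1/s, J/mol, J/(mol*K)

def k(T_kelvin):
    return A * math.exp(-EA / (R * T_kelvin))

# Hypothetical profile: linear heat-up 100 -> 121 C over 20 min, hold 10 min.
dt, del_factor = 1.0, 0.0               # integrate in 1 s steps
for s in range(1200):                   # heating ramp
    T = 373.15 + (394.15 - 373.15) * s / 1200
    del_factor += k(T) * dt
for s in range(600):                    # holding at 121 C
    del_factor += k(394.15) * dt

print(f"del factor: {del_factor:.1f}")
```

The same loop, fed with a simulated profile for steam sparging, coil heating or an external exchanger, is how the dependence of the del factor on each operating parameter can be mapped out.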

  20. Installation of new Generation General Purpose Computer (GPC) compact unit


    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  1. First International Conference Multimedia Processing, Communication and Computing Applications

    Guru, Devanur


    ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the conference is 'Multimedia Processing and its Applications'. Multimedia processing has been an active research area contributing to many frontiers of today's science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology and is widely used in many disciplines such as Medical Diagnosis, Digital Forensics, Object Recognition, Image and Video Analysis, Robotics, Military, Automotive Industries, Surveillance and Security, and Quality Inspection. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in various fields of multimedia processing, and it will assist the research community in gaining insight into the overlapping work being carried out across the globe at many medical hospitals and instit...

  2. Computer-Aided Sustainable Process Synthesis-Design and Analysis

    Kumar Tula, Anjan

    Process synthesis involves the investigation of chemical reactions needed to produce the desired product, selection of the separation techniques needed for downstream processing, as well as taking decisions on sequencing the involved separation operations. For an effective, efficient and flexible ... This work focuses on the development and application of a computer-aided framework for sustainable synthesis-design and analysis of process flowsheets by generating feasible alternatives covering the entire search space, and includes analysis tools for sustainability, LCA and economics. The synthesis method is based ... The advantage of process-groups is that the performance of the entire process can be evaluated from the contributions of the individual process-groups towards the selected flowsheet property (for example, energy consumed). The developed flowsheet property models include energy consumption, carbon footprint, product recovery, product ...

  3. A distributed computing model for telemetry data processing

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.


    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
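
A hybrid client-server/peer-to-peer information-sharing scheme of the kind described can be sketched in a few lines: a hub fans telemetry out to subscribers (the client-server role), and any client can publish synthesized parameters back through the hub for its peers. All names, parameters and the unit conversion below are illustrative, not the actual protocol:

```python
from collections import defaultdict

class Hub:                                  # client-server role
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, param, callback):
        self.subs[param].append(callback)
    def publish(self, param, value):        # telemetry or peer-synthesized data
        for cb in self.subs[param]:
            cb(param, value)

class FlightControllerClient:               # peer role: consumes and re-publishes
    def __init__(self, hub):
        self.hub, self.seen = hub, {}
        hub.subscribe("cabin_temp_raw", self.on_raw)
    def on_raw(self, param, value):
        self.seen[param] = value
        # synthesize a derived parameter and share it with peers via the hub
        self.hub.publish("cabin_temp_degC", (value - 32) * 5 / 9)

hub = Hub()
client = FlightControllerClient(hub)
log = []
hub.subscribe("cabin_temp_degC", lambda p, v: log.append(round(v, 1)))
hub.publish("cabin_temp_raw", 72.0)         # telemetry arrives in degF
print(log)                                  # derived value seen by all peers
```

The same publish/subscribe surface serves real-time monitoring, playback and training alike, which is the flexibility the record attributes to the hybrid model.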

  5. Graphics processing units in bioinformatics, computational biology and systems biology.

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela


    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and drawbacks of these parallel architectures. The complete list of the GPU-powered tools reviewed here is available at ...

  6. Development of a Comprehensive Digital Avionics Curriculum for the Aeronautical Engineer

    Hofer, Thomas W


    ... avionics curriculum does not yet exist that satisfies the needs of graduates who will serve as aeronautical engineers involved with the development, integration, testing, fielding, and supporting...

  7. 1st International Conference on Computer Vision and Image Processing

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis


    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  8. Modular, Cost-Effective, Extensible Avionics Architecture for Secure, Mobile Communications

    Ivancic, William D.


    Current onboard communication architectures are based upon an all-in-one communications management unit. This unit and its associated radio systems have typically been designed as one-off, proprietary systems. As such, the architecture lacks flexibility and cannot adapt easily to new technology, new communication protocols, and new communication links. This paper describes the current avionics communication architecture and provides a historical perspective on its evolution. A new onboard architecture is proposed that allows commercial-off-the-shelf technologies to be integrated in a modular approach, enabling a flexible, cost-effective and fully deployable design that can take advantage of ongoing advances in the computer, cryptography, and telecommunications industries.

  9. Application of analogue computers to radiotracer data processing

    Chmielewski, A.G.


    Some applications of analogue computers to processing data from flow-system radiotracer investigations are presented. Analysis of the impulse response, shaped to obtain the frequency response of the system under consideration, can be performed on the basis of an estimated transfer function. Furthermore, simulation of the system's behaviour for other excitation functions is discussed. A simple approach is presented for estimating the model parameters in situations where the input signal is not well approximated by the unit impulse function. (author)
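
The impulse-response-to-frequency-response step described above is easy to reproduce digitally: sample the measured impulse response and take its Fourier transform. A sketch for a first-order mixing stage with transfer function G(s) = 1/(tau*s + 1), with an illustrative time constant:

```python
import numpy as np

# Radiotracer impulse response of a first-order mixing stage, tau illustrative.
tau, dt, n = 2.0, 0.01, 4096
t = np.arange(n) * dt
h = np.exp(-t / tau) / tau                 # impulse response g(t)

# Frequency response estimated from the sampled impulse response
H = np.fft.rfft(h) * dt
f = np.fft.rfftfreq(n, dt)

# Compare with the analytic magnitude |G(jw)| = 1/sqrt(1 + (w*tau)^2)
w = 2 * np.pi * f
H_exact = 1 / np.sqrt(1 + (w * tau) ** 2)
err = np.max(np.abs(np.abs(H) - H_exact))
print(f"max magnitude error: {err:.3f}")
```

Fitting the estimated transfer function's parameters (here tau) to such a measured response is exactly the model-parameter estimation the abstract refers to.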

  10. The Computational Processing of Intonational Prominence: A Functional Prosody Perspective

    Nakatani, Christine Hisayo


    Intonational prominence, or accent, is a fundamental prosodic feature that is said to contribute to discourse meaning. This thesis outlines a new, computational theory of the discourse interpretation of prominence, from a FUNCTIONAL PROSODY perspective. Functional prosody makes the following two important assumptions: first, there is an aspect of prominence interpretation that centrally concerns discourse processes, namely the discourse focusing nature of prominence; and second, the role of p...

  11. Synthesis of computational structures for analog signal processing

    Popa, Cosmin Radu


    Presents the most important classes of computational structures for analog signal processing, including differential or multiplier structures, squaring or square-rooting circuits, exponential or Euclidean-distance structures, and active resistor circuits. Introduces the original concept of the multifunctional circuit, an active structure able to implement, starting from the same circuit core, a multitude of continuous mathematical functions. Covers the mathematical analysis, design and implementation of a multitude of function generator structures.

  12. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed; that changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs of the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed helped applications run faster. For Computational ElectroMagnetics (CEM) software developers, however, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to utilize the advancements in hardware correctly so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
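
The point about FLOPS versus other parameters can be made concrete with two toy benchmarks: one compute-bound (dense matrix multiplication) and one memory-bound (array copy). A machine can score well on the first and poorly on the second, which is why FLOPS alone is an incomplete metric; the problem sizes below are illustrative.

```python
import time
import numpy as np

def gflops_matmul(n=400, reps=5):
    """Compute-bound benchmark: dense matmul, ~2*n^3 floating-point ops."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    dt = time.perf_counter() - t0
    return 2 * n**3 * reps / dt / 1e9

def gbytes_copy(n=10_000_000, reps=5):
    """Memory-bound benchmark: array copy, throughput in GB/s (read+write)."""
    a = np.random.rand(n)
    t0 = time.perf_counter()
    for _ in range(reps):
        a.copy()
    dt = time.perf_counter() - t0
    return 2 * 8 * n * reps / dt / 1e9

print(f"{gflops_matmul():.1f} GFLOPS, {gbytes_copy():.1f} GB/s")
```

A benchmarking suite along the lines the dissertation argues for would report both numbers (plus disk and network figures) rather than FLOPS alone.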

  13. Missile signal processing common computer architecture for rapid technology upgrade

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul


    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidths increase and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to respond rapidly to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements.
This application
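
Of the front-end operations listed, non-uniformity correction (NUC) is the most self-contained to illustrate. A two-point NUC calibrates per-pixel gain and offset from two uniform (flat-field) references; the sensor model below is synthetic, not from any real focal-plane array.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 64, 64

# Each pixel has its own (unknown) gain and offset: y = gain*scene + offset
gain = 1 + 0.1 * rng.standard_normal((H, W))
offset = 5 * rng.standard_normal((H, W))

def sensor(scene):
    return gain * scene + offset

# Two-point calibration against uniform (flat-field) references
cold, hot = 20.0, 80.0
y_cold, y_hot = sensor(np.full((H, W), cold)), sensor(np.full((H, W), hot))
g_est = (hot - cold) / (y_hot - y_cold)      # per-pixel correction gain
o_est = cold - g_est * y_cold                # per-pixel correction offset

scene = 50 * rng.random((H, W))              # arbitrary test scene
corrected = g_est * sensor(scene) + o_est
print(f"residual: {np.max(np.abs(corrected - scene)):.2e}")
```

Because the correction is one multiply-add per pixel per frame, it maps naturally onto the vector units of the COTS processors the record advocates.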

  14. The Implementation of Computer Data Processing Software for EAST NBI

    Zhang Xiaodan; Hu Chundong; Sheng Peng; Zhao Yuanzhe; Wu Deyun; Cui Qinglong


    One of the most important project missions of the neutral beam injector is the implementation of 100 s neutral beam injection (NBI) at high power into the plasma of the EAST superconducting tokamak. Correspondingly, it is necessary to construct a high-speed, reliable computer data processing system for handling experimental data: data acquisition, data compression and storage, data decompression and query, as well as data analysis. The implementation of the computer data processing application software (CDPS) for EAST NBI is presented in this paper in terms of its functional structure and system realization. The software is programmed in C, runs on the Linux operating system, and is based on the TCP network protocol and multi-threading technology. The hardware mainly includes an industrial control computer (IPC), a data server, and PXI DAQ cards. The software has now been applied to the EAST NBI system, and experimental results show that the CDPS serves EAST NBI very well. (fusion engineering)

  15. Computer vision applications for coronagraphic optical alignment and image processing.

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A


    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
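
The feature-extraction-and-clustering step for alignment can be sketched on a synthetic calibration image: threshold to find bright pixels, then cluster their coordinates with plain k-means (Lloyd iterations) to recover spot centers. This is a generic illustration, not the Gemini Planet Imager pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration frame: two bright spots on a dim noisy background
img = 0.1 * rng.random((100, 100))
yy, xx = np.mgrid[:100, :100]
for cy, cx in [(30, 40), (70, 60)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8)

# Feature extraction: threshold, then cluster pixel coordinates with k-means
pts = np.argwhere(img > 0.5).astype(float)
centers = np.array([pts.min(0), pts.max(0)])      # deterministic far-apart init
for _ in range(20):                               # plain Lloyd iterations
    lbl = np.argmin(((pts[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([pts[lbl == k].mean(0) for k in range(2)])

print(np.round(np.array(sorted(centers.tolist())), 1))
```

Comparing recovered spot centers against their commanded positions is the kind of automated alignment check the abstract describes.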

  16. Data Mining Process Optimization in Computational Multi-agent Systems

    Kazík, O.; Neruda, R. (Roman)


    In this paper, we present an agent-based solution to the metalearning problem, which focuses on the optimization of data mining processes. We exploit the framework of computational multi-agent systems, in which various meta-learning problems have already been studied, e.g. parameter-space search or simple method recommendation. In this paper, we examine the effect of data preprocessing on machine learning problems. We perform a set of experiments in the search space of data mining processes, which is...


    Foster, C.


    The development of facilities to deal with the disposition of nuclear materials at an acceptable level of Occupational Radiation Exposure (ORE) is a significant issue facing the nuclear community. One solution is to minimize the worker's exposure through the use of automated systems. However, the adoption of automated systems for these tasks is hampered by the challenging requirements these systems must meet in order to be cost-effective solutions in the hazardous nuclear materials processing environment. Retrofitting current glove box technologies with automation systems represents a potential near-term technology that can be applied to reduce worker ORE associated with work in nuclear materials processing facilities. Successful deployment of automation systems for these applications requires the development of testing and deployment strategies to ensure the highest level of safety and effectiveness. Historically, safety tests have been conducted with glove box mock-ups around the finished design. This late detection of problems leads to expensive redesigns and costly deployment delays. With the widespread availability of computers and cost-effective simulation software, it is possible to discover and fix problems early in the design stages. Computer simulators can easily create a complete model of the system, providing a safe medium for testing potential failures and design shortcomings. The majority of design specification is now done on computers, and moving that information to a model is relatively straightforward. With a complete model and results from a Failure Mode Effect Analysis (FMEA), redesigns can be addressed early. Additional issues such as user accessibility, component replacement, and alignment problems can be tackled early in the virtual environment provided by computer simulation. In this case, a commercial simulation package is used to simulate a lathe process operation at the Los Alamos National Laboratory (LANL).
    The lathe process operation is indicative of

  18. A simplified computational memory model from information processing

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang


    This paper proposes a computational model of memory from the information-processing point of view. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with a modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information-processing algorithms. The theoretical analysis and simulation results show that the model is in accordance with memory phenomena from an information-processing view. PMID:27876847

  20. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    Pendola, Martin; Saha, Subrata


    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  1. Perspectives of using spin waves for computing and signal processing

    Csaba, György [Center for Nano Science and Technology, University of Notre Dame (United States); Faculty for Information Technology and Bionics, Pázmány Péter Catholic University (Hungary)]; Papp, Ádám [Center for Nano Science and Technology, University of Notre Dame (United States); Faculty for Information Technology and Bionics, Pázmány Péter Catholic University (Hungary)]; Porod, Wolfgang [Center for Nano Science and Technology, University of Notre Dame (United States)]


    Highlights: • We give an overview of spin-wave-based computing with emphasis on non-Boolean signal processors. • Spin waves can combine the best of electronics and photonics, and do so in an on-chip, integrable way. • Copying successful approaches from microelectronics may not be the best route toward spin-wave-based computing. • Practical devices can be constructed by minimizing the number of required magneto-electric interconnections. - Abstract: Almost all the world's information is processed and transmitted by either electric currents or photons. Now they may face a serious contender: spin-wave-based devices may perform some information-processing tasks far more efficiently and practically. In this article, we give an engineering perspective on the potential of spin-wave-based devices. After reviewing various flavors of spin-wave-based processing devices, we argue that the niche for such devices is low-power, compact, high-speed signal processing, where most traditional electronics show poor performance.

  2. A survey of process control computers at the Idaho Chemical Processing Plant

    Dahl, C.A.


    The Idaho Chemical Processing Plant (ICPP) at the Idaho National Engineering Laboratory is charged with the safe processing of spent nuclear fuel elements for the United States Department of Energy. The ICPP was originally constructed in the late 1950s and used process control technology that was state of the art at that time. The state of process control instrumentation at the ICPP has steadily improved to keep pace with emerging technology. Today, the ICPP is a collage of emerging computer technology in process control, with some systems as simple as standalone measurement computers while others are state-of-the-art distributed control systems controlling the operations of an entire facility within the plant. The ICPP has made maximal use of process computer technology aimed at increasing the surety, safety, and efficiency of process operations. Many benefits have been derived from the use of the computers at minimal cost, including decreased misoperations in the facility, and more benefits are expected in the future.

  3. A Computer-Based Digital Signal Processing for Nuclear Scintillator Detectors

    Ashour, M.A.; Abo Shosha, A.M.


    In this paper, a computer-based digital signal processing (DSP) system for nuclear scintillation signals with exponential decay is presented. The main objective of this work is to identify the characteristics of the acquired signals smoothly; this is done by transferring the signals from the random-signal domain to the deterministic domain using digital manipulation techniques. The proposed system consists of two major parts. The first part is a high-performance data acquisition system (DAQ) based on a multi-channel logic scope, which is interfaced with the host computer through a General Purpose Interface Board (GPIB, IEEE 488.2). A graphical user interface (GUI) has also been designed for this purpose using graphical programming facilities. The second part of the system is the DSP software algorithm, which analyses, demonstrates, and monitors these data to obtain the main characteristics of the acquired signals: the amplitude, pulse count, pulse width, decay factor, and arrival time.

  4. Computed tomography in space-occupying intraspinal processes

    Proemper, C.; Friedmann, G.


    Spinal computed tomography has considerably enhanced differential diagnostic safety in the course of the past two years and has disclosed new possibilities of indication in the diagnosis of the vertebral column. With the expected improvements in apparatus technology, computed tomography will increasingly replace invasive examination methods. Detailed knowledge of the clinical data, classification of the neurological findings, and localisation of the level, as far as possible, are the necessary prerequisites of successful diagnosis. If they are absent, it is recommended to perform myelography followed by secondary CT myelography. If these preliminary conditions are observed, spinal CT can make outstanding contributions to the diagnosis of slipped disks, of the constricted vertebral canal, as well as of tumours, malformations, posttraumatic conditions, postoperative changes and inflammatory processes. (orig.)

  5. Global tree network for computing structures enabling global processing operations

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.


    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in an asynchronous or synchronized manner, and is physically and logically partitionable.
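
    The upstream reduction and downstream broadcast described above can be sketched in a few lines of Python. The heap-style node numbering and the choice of reduction operator are illustrative assumptions, not details of the patented design.

    ```python
    # Hypothetical sketch: collective reduction up a binary tree and
    # broadcast back down, mirroring the upstream/downstream global
    # operations described in the abstract.

    def reduce_to_root(values, op):
        """Combine per-node values upstream to the root of an implicit
        binary tree (node i has children 2i+1 and 2i+2); returns the
        value held at the root after the reduction."""
        acc = list(values)
        # Process nodes from the leaves toward the root.
        for i in range(len(acc) - 1, 0, -1):
            parent = (i - 1) // 2
            acc[parent] = op(acc[parent], acc[i])
        return acc[0]

    def broadcast_from_root(root_value, n):
        """Send the root's value downstream to all n nodes."""
        return [root_value] * n

    result = reduce_to_root([1, 2, 3, 4, 5], lambda a, b: a + b)
    # Every node then receives the reduced value.
    everywhere = broadcast_from_root(result, 5)
    ```

    In the hardware, each router combines its children's values as they arrive, so the reduction completes in a number of steps proportional to the tree depth rather than the node count.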

  6. Application of Computer Simulation Modeling to Medication Administration Process Redesign

    Nathan Huynh


    The medication administration process (MAP) is one of the most high-risk processes in health care. MAP workflow redesign can precipitate both unanticipated and unintended consequences that can lead to new medication safety risks and workflow inefficiencies. Thus, it is necessary to have a tool to evaluate the impact of redesign approaches in advance of their clinical implementation. This paper discusses the development of an agent-based MAP computer simulation model that can be used to assess the impact of MAP workflow redesign on MAP performance. The agent-based approach is adopted in order to capture Registered Nurse medication administration performance. The process of designing, developing, validating, and testing such a model is explained. Work is underway to collect MAP data in a hospital setting to provide more complex MAP observations to extend development of the model to better represent the complexity of MAP.

  7. Classification of bacterial contamination using image processing and distributed computing.

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B


    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
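
    The Fisher-score pre-selection step named above can be sketched as follows for the two-class case. The score formula (squared difference of class means over the sum of class variances) is the classical criterion; the toy feature data are illustrative assumptions, and the paper's exact multi-class formulation may differ.

    ```python
    # Minimal sketch of Fisher-score feature ranking, the kind of
    # pre-selection step applied before SVM classification.

    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def fisher_score(class_a, class_b):
        """(difference of class means)^2 / (sum of class variances)."""
        return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

    def rank_features(features_a, features_b):
        """features_a[k], features_b[k]: values of feature k observed in
        each class. Returns feature indices by decreasing Fisher score."""
        scores = [fisher_score(a, b) for a, b in zip(features_a, features_b)]
        return sorted(range(len(scores)), key=lambda k: -scores[k])

    # Feature 0 separates the two classes well; feature 1 barely at all.
    a = [[1.0, 1.1, 0.9], [5.0, 4.0, 6.0]]
    b = [[3.0, 3.1, 2.9], [5.1, 4.1, 6.1]]
    order = rank_features(a, b)
    ```

    Only the top-ranked features would then be passed to the SVM, trading a little accuracy for a large reduction in feature-extraction cost.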

  8. Grid Computing Application for Brain Magnetic Resonance Image Processing

    Valdivia, F; Crépeault, B; Duchesne, S


    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance if using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
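
    The pipeline idea, individual processes with input and output ports chained so each task's output feeds the next, can be sketched minimally. The example "processes" (scale, shift) stand in for real image operations and are assumptions, not the system's actual tasks.

    ```python
    # Hypothetical sketch of chaining single-task processes into a
    # pipeline, in the spirit of the architecture described above.

    class Process:
        def __init__(self, name, fn):
            self.name = name
            self.fn = fn          # the single task this process performs

        def run(self, data):
            return self.fn(data)

    class Pipeline:
        def __init__(self, processes):
            self.processes = processes

        def run(self, data):
            # The output port of each process feeds the input port of
            # the next process in sequence.
            for p in self.processes:
                data = p.run(data)
            return data

    pipe = Pipeline([
        Process("scale", lambda xs: [2 * x for x in xs]),
        Process("shift", lambda xs: [x + 1 for x in xs]),
    ])
    out = pipe.run([1, 2, 3])
    ```

    A real deployment would add per-process options, quality-control outputs, and remote execution, but the composition pattern is the same.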

  9. Safety applications of computer based systems for the process industry

    Bologna, Sandro; Picciolo, Giovanni; Taylor, Robert


    Computer-based systems, generally referred to as Programmable Electronic Systems (PESs), are being increasingly used in the process industry, including to perform safety functions. The process industry, as intended in this document, includes, but is not limited to, chemicals, oil and gas production, oil refining and power generation. Starting in the early 1970s, the wide application possibilities and the related development problems of such systems were recognized. Since then, many guidelines and standards have been developed to direct and regulate the application of computers to perform safety functions (EWICS-TC7, IEC, ISA). The lessons learnt over the last twenty years can be summarised as follows: safety is a cultural issue; safety is a management issue; safety is an engineering issue. In particular, safety systems can only be properly addressed in the overall system context. No single method can be considered sufficient to achieve the safety features required in many safety applications. A good safety engineering approach has to address not only hardware and software problems in isolation but also their interfaces and man-machine interface problems. Finally, the economic and industrial aspects of safety applications and of the development of PESs in process plants are highlighted throughout the report. The scope of the report is to contribute to the development of an adequate awareness of these problems and to illustrate the technical solutions applied or being developed.

  10. Review of computational fluid dynamics applications in biotechnology processes.

    Sharma, C; Malhotra, D; Rathore, A S


    Computational fluid dynamics (CFD) is well established as a tool of choice for solving problems that involve one or more of the following phenomena: flow of fluids, heat transfer, mass transfer, and chemical reaction. Unit operations that are commonly utilized in biotechnology processes are often complex and as such would greatly benefit from application of CFD. The thirst for deeper process and product understanding that has arisen out of initiatives such as quality by design provides further impetus toward usefulness of CFD for problems that may otherwise require extensive experimentation. Not surprisingly, there has been increasing interest in applying CFD toward a variety of applications in biotechnology processing in the last decade. In this article, we will review applications in the major unit operations involved with processing of biotechnology products. These include fermentation, centrifugation, chromatography, ultrafiltration, microfiltration, and freeze drying. We feel that the future applications of CFD in biotechnology processing will focus on establishing CFD as a tool of choice for providing process understanding that can be then used to guide more efficient and effective experimentation. This article puts special emphasis on the work done in the last 10 years. © 2011 American Institute of Chemical Engineers

  11. Computer-aided analysis of cutting processes for brittle materials

    Ogorodnikov, A. I.; Tikhonov, I. N.


    This paper is focused on 3D computer simulation of cutting processes for brittle materials and silicon wafers. Computer-aided analysis of wafer scribing and dicing is carried out with the use of the ANSYS CAE (computer-aided engineering) software, and a parametric model of the processes is created by means of the internal ANSYS APDL programming language. Different types of tool tip geometry are analyzed to obtain internal stresses, such as a four-sided pyramid with an included angle of 120° and a tool inclination angle to the normal axis of 15°. The quality of the workpieces after cutting is studied by optical microscopy to verify the FE (finite-element) model. The disruption of the material structure during scribing occurs near the scratch and propagates into the wafer or over its surface at a short range. The deformation area along the scratch looks like a ragged band, but the stress width is rather low. The theory of cutting brittle semiconductor and optical materials is developed on the basis of the advanced theory of metal turning. The fall in stress intensity along the normal from the tip point to the scribe line can be predicted using the developed theory and the verified FE model. The crystal quality and dimensions of defects are determined by the mechanics of scratching, which depends on the shape of the diamond tip, the scratching direction, the velocity of the cutting tool and the applied force loads. The disruption is a rate-sensitive process, and it depends on the cutting thickness. The application of numerical techniques, such as FE analysis, to cutting problems enhances understanding and promotes the further development of existing machining technologies.

  12. Advanced computational modelling for drying processes – A review

    Defraeye, Thijs


    Highlights: • Understanding the product dehydration process is a key aspect in drying technology. • Advanced modelling thereof plays an increasingly important role for developing next-generation drying technology. • Dehydration modelling should be more energy-oriented. • An integrated “nexus” modelling approach is needed to produce more energy-smart products. • Multi-objective process optimisation requires development of more complete multiphysics models. - Abstract: Drying is one of the most complex and energy-consuming chemical unit operations. R and D efforts in drying technology have skyrocketed in the past decades, as new drivers emerged in this industry next to procuring prime product quality and high throughput, namely reduction of energy consumption and carbon footprint as well as improving food safety and security. Solutions are sought in optimising existing technologies or developing new ones which increase energy and resource efficiency, use renewable energy, recuperate waste heat and reduce product loss, thus also the embodied energy therein. Novel tools are required to push such technological innovations and their subsequent implementation. Particularly computer-aided drying process engineering has a large potential to develop next-generation drying technology, including more energy-smart and environmentally-friendly products and dryer systems. This review paper deals with rapidly emerging advanced computational methods for modelling dehydration of porous materials, particularly for foods. Drying is approached as a combined multiphysics, multiscale and multiphase problem. These advanced methods include computational fluid dynamics, several multiphysics modelling methods (e.g. conjugate modelling), multiscale modelling and modelling of material properties and the associated propagation of material property variability. Apart from the current challenges for each of these, future perspectives should be directed towards material property

  13. Computer simulation of atomic collision processes in solids

    Robinson, M.T.


    Computer simulation is a major tool for studying the interactions of swift ions with solids which underlie processes such as particle backscattering, ion implantation, radiation damage, and sputtering. Numerical models are classed as molecular dynamics or binary collision models, along with some intermediate types. Binary collision models are divided into those for crystalline targets and those for structureless ones. The foundations of such models are reviewed, including interatomic potentials, electron excitations, and relationships among the various types of codes. Some topics of current interest are summarized

  14. Computational information geometry for image and signal processing

    Critchley, Frank; Dodson, Christopher


    This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.

  15. Radar data processing using a distributed computational system

    Mota, Gilberto F.


    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  16. Characterization of the MCNPX computer code in micro processed architectures

    Almeida, Helder C.; Dominguez, Dany S.; Orellana, Esbel T.V.; Milian, Felix M.


    The MCNPX (Monte Carlo N-Particle eXtended) code can be used to simulate the transport of several types of nuclear particles using probabilistic methods. The technique used in MCNPX is to follow the history of each particle from its origin to its extinction, whether by absorption, escape or other causes. To obtain accurate results in simulations performed with MCNPX, it is necessary to process a large number of histories, which demands high computational cost. Currently MCNPX can be installed on virtually all available computing platforms, yet there is virtually no information on the application's performance on each. This paper studies the performance of MCNPX for electron and photon transport in the Faux phantom on the two platforms used by most researchers, Windows and Linux. Both platforms were tested on the same computer to ensure that the hardware did not bias the performance measures. The performance of MCNPX was measured by the time taken to run a simulation, making time the main measure of comparison. During the tests the difference in MCNPX performance between the two platforms was evident. In some cases speed gains of more than 10% were obtained merely by changing platforms, without any specific optimization. This shows the relevance of the study for optimizing this tool on the platform most appropriate for its use. (author)
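
    The comparison methodology, running the same workload on each platform and taking wall-clock time as the sole measure, can be sketched as below. The workload function is an arbitrary stand-in for an MCNPX run, not the actual code.

    ```python
    # Toy sketch of the timing methodology: elapsed wall-clock time of a
    # fixed workload is the comparison variable across platforms.

    import time

    def run_workload():
        # Stand-in for an MCNPX run that follows many particle "histories".
        total = 0
        for history in range(100000):
            total += history % 7
        return total

    def time_workload():
        """Return seconds elapsed for one run of the fixed workload."""
        start = time.perf_counter()
        run_workload()
        return time.perf_counter() - start

    elapsed = time_workload()   # seconds; compared across platforms
    ```

    Repeating the measurement and averaging would reduce noise from the operating system's scheduler, a refinement the single-number comparison above omits.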

  17. On a Multiprocessor Computer Farm for Online Physics Data Processing

    Sinanis, N J


    The topic of this thesis is the design-phase performance evaluation of a large multiprocessor (MP) computer farm intended for the on-line data processing of the Compact Muon Solenoid (CMS) experiment. CMS is a high-energy physics experiment planned to operate at CERN (Geneva, Switzerland) from the year 2005. The CMS computer farm consists of 1,000 MP computer systems and a 1,000 x 1,000 communications switch. The approach followed for the farm performance evaluation is through simulation studies and evaluation of small prototype systems, the building blocks of the farm. For the purposes of the simulation studies, we have developed a discrete-event, event-driven simulator that is capable of describing the high-level architecture of the farm and giving estimates of the farm's performance. The simulator is designed in a modular way to facilitate the development of modules that model the behavior of the farm building blocks at the desired level of detail. With the aid of this simulator, we make a particular...
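
    The core of a discrete-event, event-driven simulator of the kind described is a time-ordered event queue whose actions may schedule further events. The sketch below shows that mechanism only; the workload and timings are illustrative assumptions, not the CMS farm model.

    ```python
    # Minimal discrete-event simulator: events are processed in
    # simulated-time order from a priority queue.

    import heapq

    class Simulator:
        def __init__(self):
            self.queue = []      # min-heap ordered by event time
            self.now = 0.0
            self.seq = 0         # tie-breaker for events at equal times
            self.log = []

        def schedule(self, delay, action):
            heapq.heappush(self.queue, (self.now + delay, self.seq, action))
            self.seq += 1

        def run(self):
            while self.queue:
                self.now, _, action = heapq.heappop(self.queue)
                action(self)     # an action may schedule further events

    def job(name, service_time):
        def start(sim):
            sim.schedule(service_time, lambda s: s.log.append((s.now, name)))
        return start

    sim = Simulator()
    sim.schedule(0.0, job("event A", 2.0))
    sim.schedule(1.0, job("event B", 0.5))
    sim.run()
    # sim.log holds completions in simulated-time order: B at 1.5, A at 2.0
    ```

    Modelling the farm's building blocks then amounts to writing actions (switch ports, processors, queues) at the desired level of detail, as the abstract describes.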

  18. Personal Computer (PC) based image processing applied to fluid mechanics

    Cho, Y.-C.; Mclachlan, B. G.


    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
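
    The interpolation step, a Gaussian-window weighted average of the randomly scattered streak velocities at each uniform grid point, can be sketched as below. The window width and sample data are illustrative assumptions, and the per-point "adaptive" choice of width used in the study is not reproduced here.

    ```python
    # Hedged sketch: Gaussian-window interpolation of scattered velocity
    # samples onto grid points, in the spirit of the convolution
    # technique named in the abstract.

    import math

    def gaussian_interpolate(samples, grid_points, sigma=1.0):
        """samples: list of (x, y, velocity); returns one Gaussian-
        weighted velocity estimate per (x, y) grid point."""
        estimates = []
        for gx, gy in grid_points:
            wsum = vsum = 0.0
            for x, y, v in samples:
                w = math.exp(-((x - gx) ** 2 + (y - gy) ** 2) / (2 * sigma ** 2))
                wsum += w
                vsum += w * v
            estimates.append(vsum / wsum)
        return estimates

    samples = [(0.0, 0.0, 1.0), (2.0, 0.0, 3.0)]
    # Midway between two equal-weight samples the estimate is their average.
    mid = gaussian_interpolate(samples, [(1.0, 0.0)])[0]
    ```

    Making sigma depend on the local sample density at each grid point would recover the adaptive behaviour mentioned in the abstract.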

  19. Radioimmunoassay data processing program for IBM PC computers


    The Medical Applications Section of the International Atomic Energy Agency (IAEA) has previously developed several programs for use on the Hewlett-Packard HP-41C programmable calculator to facilitate better quality control in radioimmunoassay through improved data processing. The program described in this document is designed for off-line analysis using an IBM PC (or compatible) for counting data from standards and unknown specimens (i.e. for analysis of counting data previously recorded by a counter), together with internal quality control (IQC) data both within and between batch. The greater computing power of the IBM PC has enabled the imprecision profile and IQC control curves which were unavailable on the HP-41C version. It is intended that the program would make available good data processing capability to laboratories having limited financial resources and serious problems of quality control. 3 refs

  20. Topics in medical image processing and computational vision

    Jorge, Renato


      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  1. ISHM-oriented adaptive fault diagnostics for avionics based on a distributed intelligent agent system

    Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei


    In this paper, an integrated-system-health-management-oriented adaptive fault diagnostic model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnosis has become an extremely complicated task. In the proposed fault diagnostic system, specific approaches, such as the artificial immune system, the intelligent agent system and the Dempster-Shafer evidence theory, are used to conduct deep avionics fault diagnosis, through which efficient and accurate diagnosis can be achieved. A numerical example applies the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and illustrates that the proposed system and model can achieve efficient and accurate fault diagnosis. By analyzing the diagnostic system's feasibility and practicality, the advantages of this system are demonstrated.
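
    Of the tools named above, Dempster-Shafer evidence combination is the most mechanical, and can be sketched directly. Mass functions map sets of hypotheses to belief mass; the fault hypotheses and mass values below are illustrative assumptions, not the paper's data.

    ```python
    # Hedged sketch of Dempster's rule of combination for fusing
    # evidence from two diagnostic agents.

    def combine(m1, m2):
        """Dempster's rule: intersect focal elements, discard conflicting
        mass, and renormalise by the non-conflicting total."""
        combined = {}
        conflict = 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
        k = 1.0 - conflict
        return {s: w / k for s, w in combined.items()}

    F1, F2 = frozenset({"fault1"}), frozenset({"fault2"})
    BOTH = F1 | F2                     # "either fault" (ignorance)
    m1 = {F1: 0.6, BOTH: 0.4}          # agent 1 leans toward fault1
    m2 = {F1: 0.5, BOTH: 0.5}          # agent 2: weaker evidence for fault1
    fused = combine(m1, m2)            # agreement strengthens belief in F1
    ```

    Because both agents point the same way, the fused mass on fault1 (0.8) exceeds either agent's individual belief, which is exactly the behaviour that makes evidence fusion useful for multi-agent diagnostics.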

  2. Avionics Reliability, Its Techniques and Related Disciplines.


    … the manufacturing process to incorporate the design changes and, possibly, retrofit those units already fielded. This not only costs money, but also … initial studies but is useful to control counterfeiting, substitution, unauthorized change, and any lapse of compliance with the military specification.

  3. Computational modelling of a thermoforming process for thermoplastic starch

    Szegda, D.; Song, J.; Warby, M. K.; Whiteman, J. R.


    Plastic packaging waste currently forms a significant part of municipal solid waste and as such is causing increasing environmental concerns. Such packaging is largely non-biodegradable and is particularly difficult to recycle or to reuse due to its complex composition. Apart from limited recycling of some easily identifiable packaging wastes, such as bottles, most packaging waste ends up in landfill sites. In recent years, in an attempt to address this problem in the case of plastic packaging, the development of packaging materials from renewable plant resources has received increasing attention and a wide range of bioplastic materials based on starch are now available. Environmentally these bioplastic materials also reduce reliance on oil resources and have the advantage that they are biodegradable and can be composted upon disposal to reduce the environmental impact. Many food packaging containers are produced by thermoforming processes in which thin sheets are inflated under pressure into moulds to produce the required thin wall structures. Hitherto these thin sheets have almost exclusively been made of oil-based polymers and it is for these that computational models of thermoforming processes have been developed. Recently, in the context of bioplastics, commercial thermoplastic starch sheet materials have been developed. The behaviour of such materials is influenced both by temperature and, because of the inherent hydrophilic characteristics of the materials, by moisture content. Both of these aspects affect the behaviour of bioplastic sheets during the thermoforming process. This paper describes experimental work and work on the computational modelling of thermoforming processes for thermoplastic starch sheets in an attempt to address the combined effects of temperature and moisture content. After a discussion of the background of packaging and biomaterials, a mathematical model for the deformation of a membrane into a mould is presented, together with its

  4. Emergency healthcare process automation using mobile computing and cloud services.

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G


    Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services, and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing ready access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time raises new challenges, including the specification of a common information format, interoperability among heterogeneous institutional information systems, and the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of integrated computer support for emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile, uses the Organization for the Advancement of Structured Information Standards (OASIS) standard Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) for exchanging operational data with hospitals, and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case.

  5. Computer processing of the Δλ/λ measured results

    Draguniene, V.J.; Makariuniene, E.K.


    For processing experimental data on the influence of the chemical environment on radioactive decay constants, five programs were written in Fortran, in the version for the DUBNA monitoring system on the BESM-6 computer. Each program corresponds to a definite stage of data processing and yields a definite result. The first and second programs calculate the ratios of the pulse numbers measured with different sources and the mean value of the dispersions. The third program averages the ratios of the pulse numbers. The fourth and fifth determine the change of the radioactive decay constant. The created programs permit processing of the experimental data beginning from the pulse numbers obtained directly in the experiments. The programs can treat a file of experimental results and calculate the various errors at all stages of the calculation. Printing of the obtained results is convenient for use.
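
    The first stages described (ratios of pulse numbers and their averaging with error propagation) can be sketched in Python under the usual Poisson counting assumption; the original Fortran programs are not reproduced here, and the function names are illustrative.

```python
import math

def ratio_with_error(n1, n2):
    """Ratio of two pulse counts with Poisson error propagation.

    Assumes counting statistics: sigma_n = sqrt(n) for a count n,
    so the relative errors add in quadrature.
    """
    r = n1 / n2
    sigma = r * math.sqrt(1.0 / n1 + 1.0 / n2)
    return r, sigma

def weighted_mean(values_with_errors):
    """Inverse-variance weighted mean of ratios and its error."""
    weights = [1.0 / s ** 2 for _, s in values_with_errors]
    mean = sum(w * v for (v, _), w in zip(values_with_errors, weights)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    return mean, err
```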

  6. Plant process computer system upgrades at the KSG simulator centre


    The human-machine interface (HMI) of a modern plant process computer system (PPC) differs significantly from that of older systems. Along with HMI changes, there are often improvements to system functionality such as alarm display and printing functions and transient data analysis capabilities. Therefore, the upgrade or replacement of a PPC in the reference plant will typically require an upgrade of the simulator (see Section 6.5.1 for additional information). Several options are available for this type of project, including stimulation of a replica system, emulation, or simulation of PPC functionality within the simulation environment. To simulate or emulate a PPC, detailed knowledge of hardware and software functionality is required. This is typically vendor-proprietary information, which leads to licensing and other complications. One of the added benefits of stimulating the PPC system is that the simulator can be used as a test bed for functional testing (i.e. verification and validation) of the system prior to installation in the reference plant. Some of this testing may include validation of the process curve and system diagram displays. Over the past few years several German NPPs decided to modernize their plant process computer (PPC) systems. After the NPPs had selected the desired systems to meet their requirements, the question arose of how to modernize the PPC systems on the corresponding simulators. Six German NPPs selected the same PPC system from the same vendor, and it was desired to perform integral tests of the HMI on the simulators. In this case the vendor offered a stimulated variant of their system, and it therefore made sense to choose that implementation method for upgrade of the corresponding simulators. The first simulator PPC modernization project can be considered as a prototype project for the follow-on projects. In general, from the simulator project execution perspective the implementation of several stimulated PPC systems of the same type

  7. An electronic flight bag for NextGen avionics

    Zelazo, D. Eyton


    The introduction of the Next Generation Air Transportation System (NextGen) initiative by the Federal Aviation Administration (FAA) will impose new requirements for cockpit avionics. A similar program is also taking place in Europe by the European Organisation for the Safety of Air Navigation (Eurocontrol) called the Single European Sky Air Traffic Management Research (SESAR) initiative. NextGen will require aircraft to utilize Automatic Dependent Surveillance-Broadcast (ADS-B) in/out technology, requiring substantial changes to existing cockpit display systems. There are two ways that aircraft operators can upgrade their aircraft in order to utilize ADS-B technology. The first is to replace existing primary flight displays with new displays that are ADS-B compatible. The second, less costly approach is to install an advanced Class 3 Electronic Flight Bag (EFB) system. The installation of Class 3 EFBs in the cockpit will allow aircraft operators to utilize ADS-B technology in less time and at a lower implementation cost, and will provide additional benefits to the operator. This paper describes a Class 3 EFB, the Nexis™ Flight-Intelligence System, which has been designed to allow users a direct interface with NextGen avionics sensors while additionally providing the pilot with all the necessary information to meet NextGen requirements.

  8. Software testability and its application to avionic software

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffery E.


    Randomly generated black-box testing is an established yet controversial method of estimating software reliability. Unfortunately, as software applications have required higher reliabilities, practical difficulties with black-box testing have become increasingly problematic. These practical problems are particularly acute in life-critical avionics software, where requirements of 10^-7 failures per hour of system reliability can translate into a probability of failure (POF) of perhaps 10^-9 or less for each individual execution of the software. This paper describes the application of one type of testability analysis called 'sensitivity analysis' to B-737 avionics software; one application of sensitivity analysis is to quantify whether software testing is capable of detecting faults in a particular program and thus whether we can be confident that a tested program is not hiding faults. We do so by finding the testabilities of the individual statements of the program, and then use those statement testabilities to find the testabilities of the functions and modules. For the B-737 system we analyzed, we were able to isolate those functions that are more prone to hide errors during system/reliability testing.
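
    The quoted translation from system reliability to per-execution POF is simple arithmetic once an execution rate is assumed; the rate used below is illustrative, not taken from the paper, and assumes independent executions.

```python
def per_execution_pof(failures_per_hour, executions_per_second):
    """Translate a system-level failure rate into a per-execution
    probability of failure, assuming independent executions."""
    executions_per_hour = executions_per_second * 3600.0
    return failures_per_hour / executions_per_hour

# e.g. a 1e-7 failures/hour requirement with one execution every ~36 s
# implies a per-execution POF on the order of 1e-9
pof = per_execution_pof(1e-7, 1.0 / 36.0)
```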

  9. IXV avionics architecture: Design, qualification and mission results

    Succa, Massimo; Boscolo, Ilario; Drocco, Alessandro; Malucchi, Giovanni; Dussy, Stephane


    The paper details the IXV avionics, presenting the architecture and the constituent subsystems and equipment. It focuses on the novelties introduced, such as the Ethernet-based protocol for the experiment data acquisition system, and on the synergy with Ariane 5 and Vega equipment, pursued in order to comply with the design-to-cost requirement for the avionics system development. Emphasis is given to the adopted model philosophy in relation to OTS/COTS items heritage and identified activities necessary to extend the qualification level to be compliant with the IXV environment. Associated lessons learned are identified. Then, the paper provides the first results and interpretation from the flight recorders' telemetry, covering the behavior of the Data Handling System, the quality of telemetry recording and real-time/delayed transmission, the performance of the batteries and the Power Protection and Distribution Unit, the ground segment coverage during visibility windows, and the performance of the GNC sensors (IMU and GPS) and actuators. Finally, some preliminary directions for the IXV follow-on are given, introducing the objectives of the Innovative Space Vehicle and the necessary improvements to be developed in the frame of PRIDE.

  10. Some computer applications and digital image processing in nuclear medicine

    Lowinger, T.


    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general usage in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms.

  11. A Cloud Computing Model for Optimization of Transport Logistics Process

    Benotmane Zineb


    In an increasingly competitive environment, companies must adopt a sound logistics chain management policy, whose main objective is to increase overall gain by maximizing profits and minimizing costs, including manufacturing costs such as transaction, transport, and storage costs. In this paper, we propose a cloud platform for this logistics chain to support decision making; such decisions must be made when adopting a new cost-optimization strategy, and the decision-maker must know the consequences of the new strategy. Our proposed cloud computing platform has a multilayer structure comprising a set of web services that link applications using different technologies, enabling data to be sent and received through protocols understandable by everyone. Logistics is a process-oriented business; the platform is used to evaluate logistics process costs, to propose optimal solutions, and to evaluate these solutions before their application. As a scenario, we have formulated the problem for the delivery process and proposed a modified bin-packing algorithm to improve vehicle loading.
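
    The abstract's "modified bin-packing algorithm" is not specified; as a rough illustration of the class of heuristic involved in vehicle loading, here is the classic first-fit decreasing bin-packing sketch (function name and capacity model are assumptions, not the paper's algorithm).

```python
def first_fit_decreasing(weights, capacity):
    """Classic first-fit decreasing heuristic: place each item (heaviest
    first) into the first vehicle with enough remaining capacity."""
    vehicles = []  # each entry: [remaining_capacity, [loaded items]]
    for w in sorted(weights, reverse=True):
        for v in vehicles:
            if v[0] >= w:
                v[0] -= w
                v[1].append(w)
                break
        else:
            vehicles.append([capacity - w, [w]])  # open a new vehicle
    return [v[1] for v in vehicles]
```

    First-fit decreasing has a known worst-case guarantee of about 11/9 of the optimal number of bins, which is why it is a common starting point for loading heuristics.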

  12. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin


    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but little work has been published on how to post-process the large volume of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency through faster information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
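
    As a sketch of the kind of parallel post-processing described (the actual implementation and data formats are not given in the abstract), contingency outputs can be screened concurrently with a worker pool; all field names and the 100% loading threshold here are illustrative assumptions.

```python
from multiprocessing import Pool

def screen_contingency(result):
    """Flag a single contingency result whose worst branch loading
    exceeds a 100% (1.0 per-unit) threshold; fields are illustrative."""
    worst = max(result["branch_loadings"])
    return {"id": result["id"], "worst_loading": worst, "violation": worst > 1.0}

def post_process(results, workers=4):
    """Screen contingency outputs in parallel and keep only violations."""
    with Pool(workers) as pool:
        screened = pool.map(screen_contingency, results)
    return [s for s in screened if s["violation"]]
```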

  13. Multi-fidelity Gaussian process regression for computer experiments

    Le-Gratiet, Loic


    This work is on Gaussian-process based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging based method has been proposed. In particular this formulation allows for fast implementation and for closed-form expressions for the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it really allows for the practical application of such a method in real cases. Furthermore, fast cross validation, sequential experimental design and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e. the decay rate of the mean square error) with respect to the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process based meta-models with stationary covariance functions) has been obtained, while the previous proofs hold only for degenerate kernels (i.e. when the process is in fact finite-dimensional). This result allows for addressing rigorously practical questions such as the optimal allocation of the budget between different levels of codes in the multi-fidelity framework. (author)
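
    A minimal NumPy sketch of the two-level co-kriging idea (high-fidelity ≈ ρ × low-fidelity + discrepancy), with a fixed RBF kernel, fixed scaling factor ρ, and zero-mean simple kriging in place of the estimated hyperparameters and universal co-kriging a real implementation would use.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_fit_predict(x, y, xs, noise=1e-6):
    """Zero-mean simple-kriging predictive mean at points xs."""
    K = rbf(x, x) + noise * np.eye(len(x))
    return rbf(xs, x) @ np.linalg.solve(K, y)

def cokrige_predict(xl, yl, xh, yh, xs, rho=1.0):
    """Two-level AR(1) co-kriging sketch: a GP on the cheap code plus a
    GP on the discrepancy observed at the expensive design points."""
    yl_at_h = gp_fit_predict(xl, yl, xh)   # cheap-code prediction at expensive points
    delta = yh - rho * yl_at_h             # discrepancy data
    return rho * gp_fit_predict(xl, yl, xs) + gp_fit_predict(xh, delta, xs)
```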

  14. Optical computing - an alternate approach to trigger processing

    Cleland, W.E.


    The enormous rate reduction factors required by most ISABELLE experiments suggest that we should examine every conceivable approach to trigger processing. One approach that has not received much attention by high energy physicists is optical data processing. The past few years have seen rapid advances in optoelectronic technology, stimulated mainly by the military and the communications industry. An intriguing question is whether one can utilize this technology together with the optical computing techniques that have been developed over the past two decades to develop a rapid trigger processor for high energy physics experiments. Optical data processing is a method for performing a few very specialized operations on data which is inherently two dimensional. Typical operations are the formation of convolution or correlation integrals between the input data and information stored in the processor in the form of an optical filter. Optical processors are classed as coherent or incoherent, according to the spatial coherence of the input wavefront. Typically, in a coherent processor a laser beam is modulated with a photographic transparency which represents the input data. In an incoherent processor, the input may be an incoherently illuminated transparency, but self-luminous objects, such as an oscilloscope trace, have also been used. We consider here an incoherent processor in which the input data is converted into an optical wavefront through the excitation of an array of point sources - either light emitting diodes or injection lasers

  15. SOLVEX: a computer program for simulation of solvent extraction processes

    Scotten, W.C.


    SOLVEX is a FORTRAN IV computer program that simulates the dynamic behavior of solvent extraction processes conducted in mixer-settlers and centrifugal contactors. Two options permit terminating dynamic phases by time or by achieving steady state, and a third option permits an artificially rapid approach to steady state. Thus the program is well suited to dynamic problems with multiple phases and to multiple steady-state problem inputs: changes from the previous problem are the only inputs required for each succeeding problem. Distribution data can be supplied by two-variable third-power polynomial equations or by three-variable tables in any one of 16 different combinations involving phase concentrations or distribution coefficients (ratios of phase concentrations) or their logarithms.

  16. A computational model of human auditory signal processing and perception

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten


    A model of computational auditory signal processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] … discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.

  17. Advanced information processing system: Inter-computer communication services

    Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.


    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Organization for Standardization (ISO) model. The ICCS functional requirements, functional design, and detailed specifications, as well as each layer of the ICCS, are also described. A summary of results and suggestions for future work are presented.

  18. Simple computation of reaction–diffusion processes on point clouds

    Macdonald, Colin B.; Merriman, Barry; Ruuth, Steven J.


    The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.
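
    A crude stand-in for the authors' closest point method, illustrating only the broader idea of solving diffusion directly on an unorganized point set: here a k-nearest-neighbour graph Laplacian replaces the intrinsic surface operator. This is a simplification for illustration, not the paper's discretization.

```python
import numpy as np

def knn_graph_laplacian(points, k=6):
    """Unnormalized graph Laplacian over k nearest neighbours; a crude
    stand-in for the surface Laplacian on an unorganized point cloud."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    n = len(points)
    L = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        L[i, nbrs] = -1.0
        L[i, i] = k
    return L

def diffuse(points, u, steps=200, dt=0.05, k=6):
    """Explicit-Euler heat flow u_t = -L u on the point cloud.
    Stable here since dt * ||L|| stays well below 2."""
    L = knn_graph_laplacian(points, k)
    for _ in range(steps):
        u = u - dt * (L @ u)
    return u
```

    On a point cloud sampled from a circle, this heat flow smooths any initial field toward a constant, the qualitative behaviour expected of surface diffusion.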


  20. 78 FR 24775 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...


    ... Devices, Portable Music and Data Processing Devices, Computers and Components Thereof; Commission Decision... importation of certain wireless communication devices, portable music and data processing devices, computers... '826 patent''). The complaint further alleges the existence of a domestic industry. The Commission's...


    William M. Bond; Salih Ersayin


    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  2. Linking CATHENA with other computer codes through a remote process

    Vasic, A.; Hanna, B.N.; Waddington, G.M. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Sabourin, G. [Atomic Energy of Canada Limited, Montreal, Quebec (Canada); Girard, R. [Hydro-Quebec, Montreal, Quebec (Canada)


    'Full text:' CATHENA (Canadian Algorithm for THErmalhydraulic Network Analysis) is a computer code developed by Atomic Energy of Canada Limited (AECL). The code uses a transient, one-dimensional, two-fluid representation of two-phase flow in piping networks. CATHENA is used primarily for the analysis of postulated upset conditions in CANDU reactors; however, the code has found a wider range of applications. In the past, the CATHENA thermalhydraulics code included other specialized codes, i.e. ELOCA and the Point LEPreau CONtrol system (LEPCON) as callable subroutine libraries. The combined program was compiled and linked as a separately named code. This code organizational process is not suitable for independent development, maintenance, validation and version tracking of separate computer codes. The alternative solution to provide code development independence is to link CATHENA to other computer codes through a Parallel Virtual Machine (PVM) interface process. PVM is a public domain software package, developed by Oak Ridge National Laboratory and enables a heterogeneous collection of computers connected by a network to be used as a single large parallel machine. The PVM approach has been well accepted by the global computing community and has been used successfully for solving large-scale problems in science, industry, and business. Once development of the appropriate interface for linking independent codes through PVM is completed, future versions of component codes can be developed, distributed separately and coupled as needed by the user. This paper describes the coupling of CATHENA to the ELOCA-IST and the TROLG2 codes through a PVM remote process as an illustration of possible code connections. ELOCA (Element Loss Of Cooling Analysis) is the Industry Standard Toolset (IST) code developed by AECL to simulate the thermo-mechanical response of CANDU fuel elements to transient thermalhydraulics boundary conditions. A separate ELOCA driver program


  4. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Wei Shu


    One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) scheme is dynamic, distributed, load-dependent, and scalable. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.

  5. Computer processing of the scintigraphic image using digital filtering techniques

    Matsuo, Michimasa


    The theory of digital filtering was studied as a method for the computer processing of scintigraphic images. The characteristics and design techniques of finite impulse response (FIR) digital filters with linear phases were examined using the z-transform. The conventional data processing method, smoothing, could be recognized as one kind of linear-phase FIR low-pass digital filtering. Ten representative FIR low-pass digital filters with various cut-off frequencies were examined in the frequency domain in one and two dimensions. These filters were applied to phantom studies with cold targets, using a Scinticamera-Minicomputer on-line System. These studies revealed that the resultant images had a direct connection with the magnitude response of the filter, that is, they could be estimated fairly well from the frequency response of the digital filter used. The filter, which was estimated from phantom studies as optimal for liver scintigrams using 198Au colloid, was successfully applied in clinical use for detecting true cold lesions and, at the same time, for eliminating spurious images. (J.P.N.)
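
    The observation that smoothing is linear-phase FIR low-pass filtering can be sketched as follows; the nine-point binomial kernel and its FFT-based magnitude response are standard textbook constructions, not the specific filters designed in the paper.

```python
import numpy as np

def smooth2d(img, kernel):
    """Apply a small FIR kernel to an image by direct convolution
    (zero padding at the borders; kernel assumed symmetric)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# nine-point smoothing: a linear-phase FIR low-pass filter (DC gain 1)
NINE_POINT = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float) / 16.0

def magnitude_response(kernel, n=64):
    """|H| of the kernel on an n-by-n frequency grid, via the 2-D FFT."""
    return np.abs(np.fft.fft2(kernel, s=(n, n)))
```

    The magnitude response shows why this kernel smooths: unit gain at DC and a null at the Nyquist frequency, exactly the low-pass behaviour the abstract ties to the resulting image quality.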

  6. Managing internode data communications for an uninitialized process in a parallel computer

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E


    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
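
    The spill-over scheme can be sketched as a hypothetical software analogue (the described MU is hardware, and the class and method names below are illustrative): a fixed-capacity buffer plus a temporary buffer established only when messages arrive before the receiving process initializes.

```python
from collections import deque

class EarlyMessageBuffer:
    """Hypothetical software analogue of an MU message buffer with a
    temporary spill-over buffer established by an application agent."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.primary = deque()   # fixed-size MU buffer analogue
        self.overflow = None     # temporary buffer in "main memory"

    def receive(self, msg):
        if len(self.primary) < self.capacity:
            self.primary.append(msg)
            return
        if self.overflow is None:
            self.overflow = []   # agent establishes the temporary buffer
        self.overflow.append(msg)

    def drain(self):
        """Deliver all buffered messages in arrival order once the
        process has initialized, then reset both buffers."""
        msgs = list(self.primary) + (self.overflow or [])
        self.primary.clear()
        self.overflow = None
        return msgs
```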

  7. Computer Use by School Teachers in Teaching-Learning Process

    Bhalla, Jyoti


    Developing countries have a responsibility not merely to provide computers for schools, but also to foster a habit of infusing a variety of ways in which computers can be integrated in teaching-learning amongst the end users of these tools. Earlier researches lacked a systematic study of the manner and the extent of computer-use by teachers. The…

  8. Securing the Data Storage and Processing in Cloud Computing Environment

    Owens, Rodney


    Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…

  9. CanOpen on RASTA: The Integration of the CanOpen IP Core in the Avionics Testbed

    Furano, Gianluca; Guettache, Farid; Magistrati, Giorgio; Tiotto, Gabriele; Ortega, Carlos Urbina; Valverde, Alberto


    This paper presents the work done within the ESA ESTEC Data Systems Division, targeting the integration of the CANopen IP core with the existing Reference Architecture Test-bed for Avionics (RASTA). RASTA is the reference testbed system of the ESA Avionics Lab, designed to integrate the main elements of a typical data handling system. It aims at simulating a scenario where a mission control center communicates with on-board computers and systems through a TM/TC link, thus providing data management through qualified processors and interfaces such as LEON2 core processors, CAN bus controllers, MIL-STD-1553 and SpaceWire. This activity aims at extending RASTA with two boards equipped with the HurriCANe controller, acting as CANopen slaves. CANopen software modules have been ported to the RASTA system I/O boards equipped with the Gaisler GR-CAN controller and act as the master communicating with the CCIPC boards. CANopen serves as the upper application layer for CAN-based systems, is defined within the CAN-in-Automation standard, and can be regarded as the definitive standard for the implementation of CAN-based system solutions. The development and integration of CCIPC, performed by SITAEL S.p.A., is the first application that aims to bring the CANopen standard to space applications. The definition of CANopen within the European Cooperation for Space Standardization (ECSS) is under development.

  10. Computational approach for a pair of bubble coalescence process

    Nurul Hasan; Zalinawati binti Zakaria


The coalescence of bubbles is of great value in mineral recovery and the oil industry. In this paper, two co-axial bubbles rising in a cylinder are modelled to study the coalescence of bubbles for four computational experimental test cases. The Reynolds number (Re) is chosen between 8.50 and 10, the Bond number (Bo) between about 4.25 and 50, and the Morton number (M) between 0.0125 and 14.7. The viscosity ratio (μr) and density ratio (ρr) of liquid to bubble are kept constant (100 and 850, respectively). It was found that the Bo number has a significant effect on the coalescence process for constant Re, μr and ρr. The bubble-bubble distance over time was validated against published experimental data. The results show that the VOF approach can be used to model these phenomena accurately. The surface tension was changed to alter Bo, and the density of the fluids to alter Re and M, keeping μr and ρr the same. It was found that for lower Bo, bubble coalescence is slower and the pocket at the lower part of the leading bubble is less concave (towards downward), which is supported by the experimental data.
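The dimensionless groups quoted above follow their standard definitions; a minimal sketch (the fluid properties below are illustrative, not the paper's test cases) is:

```python
def reynolds(rho_l, u, d, mu_l):
    # Reynolds number: ratio of inertial to viscous forces
    return rho_l * u * d / mu_l

def bond(rho_l, rho_b, g, d, sigma):
    # Bond number: gravitational vs. surface-tension forces
    return (rho_l - rho_b) * g * d**2 / sigma

def morton(rho_l, rho_b, g, mu_l, sigma):
    # Morton number: groups fluid properties only (no velocity or length scale)
    return g * mu_l**4 * (rho_l - rho_b) / (rho_l**2 * sigma**3)

# Illustrative water/air-like values: 1 cm bubble rising at 0.1 m/s
re = reynolds(1000.0, 0.1, 0.01, 0.001)
bo = bond(1000.0, 1.2, 9.81, 0.01, 0.072)
mo = morton(1000.0, 1.2, 9.81, 0.001, 0.072)
```

Changing sigma alone moves Bo (and M) while leaving Re fixed, which is exactly the parameter sweep the abstract describes.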

  11. Cholinergic modulation of cognitive processing: insights drawn from computational models

    Ehren L Newman


Acetylcholine plays an important role in cognitive function, as shown by pharmacological manipulations that impact working memory, attention, episodic memory and spatial memory function. Acetylcholine also shows striking modulatory influences on the cellular physiology of hippocampal and cortical neurons. Modeling of neural circuits provides a framework for understanding how these cognitive functions may arise from the influence of acetylcholine on neural and network dynamics. We review the influences of cholinergic manipulations on behavioral performance in working memory, attention, episodic memory and spatial memory tasks, the physiological effects of acetylcholine on neural and circuit dynamics, and the computational models that provide insight into the functional relationships between the physiology and behavior. Specifically, we discuss the important role of acetylcholine in governing mechanisms of active maintenance in working memory tasks and in regulating network dynamics important for effective processing of stimuli in attention and episodic memory tasks. We also propose that the theta rhythm plays a crucial role as an intermediary between the physiological influences of acetylcholine and behavior in episodic and spatial memory tasks. We conclude with a synthesis of the existing modeling work and highlight future directions that are likely to be rewarding, given the existing state of the literature, for both empiricists and modelers.

  12. Natural language processing tools for computer assisted language learning

    Vandeventer Faltin, Anne


This paper illustrates the usefulness of natural language processing (NLP) tools for computer assisted language learning (CALL) through the presentation of three NLP tools integrated within CALL software for French. These tools are (i) a sentence structure viewer; (ii) an error diagnosis system; and (iii) a conjugation tool. The sentence structure viewer helps language learners grasp the structure of a sentence by providing lexical and grammatical information. This information is derived from a deep syntactic analysis. Two different outputs are presented. The error diagnosis system is composed of a spell checker, a grammar checker, and a coherence checker. The spell checker makes use of alpha-codes, phonological reinterpretation, and some ad hoc rules to provide correction proposals. The grammar checker employs constraint relaxation and phonological reinterpretation as diagnosis techniques. The coherence checker compares the underlying "semantic" structures of a stored answer and of the learner's input to detect semantic discrepancies. The conjugation tool is a resource with enhanced capabilities when put in an electronic format, enabling searches from inflected and ambiguous verb forms.

  13. Computed tomography perfusion imaging denoising using Gaussian process regression

    Zhu Fan; Gonzalez, David Rodriguez; Atkinson, Malcolm; Carpenter, Trevor; Wardlaw, Joanna


Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation. Consequently, methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves, and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study. (note)
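A minimal sketch of the temporal-GPR idea, assuming a plain RBF-kernel GP applied to a single voxel's time-concentration curve (the kernel parameters and the synthetic bolus curve below are illustrative, not those of the paper):

```python
import numpy as np

def gpr_smooth(t, y, length_scale=3.0, signal_var=1.0, noise_var=0.1):
    # Posterior mean of a GP with an RBF kernel, evaluated at the
    # training points themselves -- a denoised version of y.
    d = t[:, None] - t[None, :]
    K = signal_var * np.exp(-0.5 * (d / length_scale) ** 2)
    alpha = np.linalg.solve(K + noise_var * np.eye(len(t)), y)
    return K @ alpha

rng = np.random.default_rng(0)
t = np.arange(40.0)                          # scan frames (time axis)
clean = np.exp(-0.5 * ((t - 20) / 5) ** 2)   # synthetic bolus-passage curve
noisy = clean + rng.normal(0, 0.3, t.size)   # low-CNR measurement
smooth = gpr_smooth(t, noisy)                # temporally regularized estimate
```

The same fit, run independently per voxel, is what stabilizes the baseline and suppresses oscillations before haemodynamic parameters are extracted.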

  14. Bioinformatics process management: information flow via a computational journal

    Lushington Gerald


This paper presents the Bioinformatics Computational Journal (BCJ), a framework for conducting and managing computational experiments in bioinformatics and computational biology. These experiments often involve series of computations, data searches, filters, and annotations which can benefit from a structured environment. Systems to manage computational experiments exist, ranging from libraries with standard data models to elaborate schemes to chain together input and output between applications. Yet, although such frameworks are available, their use is not widespread; ad hoc scripts are often required to bind applications together. The BCJ explores another solution to this problem through a computer-based environment suitable for on-site use, which builds on the traditional laboratory notebook paradigm. It provides an intuitive, extensible paradigm designed for expressive composition of applications. Extensive features facilitate sharing data, computational methods, and entire experiments. By focusing on the bioinformatics and computational biology domain, the scope of the computational framework was narrowed, permitting us to implement a capable set of features for this domain. This report discusses the features determined critical by our system and other projects, along with design issues. We illustrate the use of our implementation of the BCJ on two domain-specific examples.

  15. Integrating ISHM with Flight Avionics Architectures for Cyber-Physical Space Systems, Phase II

    National Aeronautics and Space Administration — Substantial progress has been made by NASA in integrating flight avionics and ISHM with well-defined caution and warning system, however, the scope of ACAW alerting...

  16. Rad-hard Smallsat / CubeSat Avionics Board, Phase I

    National Aeronautics and Space Administration — VORAGO will design a rad-hard Smallsat / CubeSat Avionics single board that has the necessary robustness needed for long duration missions in harsh mission...

  17. Estimation of Airline Benefits from Avionics Upgrade under Preferential Merge Re-sequence Scheduling

    Kotegawa, Tatsuya; Cayabyab, Charlene Anne; Almog, Noam


Modernization of airline fleet avionics is essential to fully enable future technologies and procedures for increasing national airspace system capacity. However, in the current national airspace system, the system-wide benefits gained by an avionics upgrade are not fully directed to the aircraft/airlines that upgrade, resulting in a slow fleet modernization rate. Preferential merge re-sequence scheduling is a best-equipped-best-served concept designed to incentivize avionics upgrades among airlines by allowing aircraft with new avionics (high-equipped) to be re-sequenced ahead of aircraft without the upgrades (low-equipped) at enroute merge waypoints. The goal of this study is to investigate the potential benefits gained or lost by airlines under a high- or low-equipped fleet scenario if preferential merge re-sequence scheduling is implemented.

  18. Use of electronic computers for processing of spectrometric data in instrument neutron activation analysis

    Vyropaev, V.Ya.; Zlokazov, V.B.; Kul'kina, L.I.; Maslov, O.D.; Fefilov, B.V.


    A computer program is described for processing gamma spectra in the instrumental activation analysis of multicomponent objects. Structural diagrams of various variants of connection with the computer are presented. The possibility of using a mini-computer as an analyser and for preliminary processing of gamma spectra is considered

  19. Computer-Mediated Collaborative Projects: Processes for Enhancing Group Development

    Dupin-Bryant, Pamela A.


    Groups are a fundamental part of the business world. Yet, as companies continue to expand internationally, a major challenge lies in promoting effective communication among employees who work in varying time zones. Global expansion often requires group collaboration through computer systems. Computer-mediated groups lead to different communicative…

  20. Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things.

    Klonoff, David C


The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The number of devices in IoT includes such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network, closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it near the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries where laws may limit use or permit unwanted governmental access, and (5) lower costs because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.
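The latency argument can be sketched as an edge-node loop (every name and threshold below is hypothetical, for illustration only): a reading is acted on locally in milliseconds, and only aggregate summaries travel to the cloud.

```python
from statistics import mean

ALERT_LOW, ALERT_HIGH = 70, 250   # mg/dL thresholds (illustrative values)

def process_reading_locally(reading_mg_dl, buffer, upload):
    # Edge-style handling: decide on-device, upload only summaries.
    if reading_mg_dl < ALERT_LOW or reading_mg_dl > ALERT_HIGH:
        return "ALERT"             # millisecond-latency local decision
    buffer.append(reading_mg_dl)
    if len(buffer) >= 12:          # e.g. one hour of 5-minute CGM samples
        upload({"mean_mg_dl": mean(buffer), "n": len(buffer)})
        buffer.clear()
    return "OK"
```

The design choice mirrors advantages (1), (2) and (5) above: the urgent path never leaves the device, and bandwidth carries one summary instead of twelve raw readings.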

  1. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce


Cloud Computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and in 2010 at GSFC. Nebula is an open source Cloud platform intended to: a) make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs; b) provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets; c) allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  2. A Generic Software Development Process Refined from Best Practices for Cloud Computing

    Soojin Park; Mansoo Hwang; Sangeun Lee; Young B. Park


    Cloud computing has emerged as more than just a piece of technology, it is rather a new IT paradigm. The philosophy behind cloud computing shares its view with green computing where computing environments and resources are not as subjects to own but as subjects of sustained use. However, converting currently used IT services to Software as a Service (SaaS) cloud computing environments introduces several new risks. To mitigate such risks, existing software development processes must undergo si...

  3. Process for computing geometric perturbations for probabilistic analysis

Fitch, Simeon H. K. (Charlottesville, VA); Riha, David S. (San Antonio, TX); Thacker, Ben H. (San Antonio, TX)


    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
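A highly simplified sketch of the displacement-vector step (the mean-value coordinate computation itself is not reproduced here; the node coordinates, directions, and magnitude below are invented for illustration):

```python
import numpy as np

def perturb_nodes(nominal_xyz, directions, magnitude):
    # Apply a scalar perturbation along a unit displacement vector at
    # each node of the region of interest, leaving the rest of the
    # nominal finite element geometry unchanged.
    unit = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    return nominal_xyz + magnitude * unit

nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # nominal geometry
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 2.0, 0.0]])    # normalized internally
perturbed = perturb_nodes(nodes, dirs, 0.05)           # one geometry sample
```

Sampling `magnitude` from a distribution and regenerating the mesh per sample is what turns the deterministic model into a probabilistic one.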

  4. Partial reflection data collection and processing using a small computer

    Birley, M. H.; Sechrist, C. F., Jr.


    Online data collection of the amplitudes of circularly polarized radio waves, partially reflected from the D region of the earth's ionosphere, has enabled the calculation of an electron-density profile in the height region 60-90 km. A PDP 15/30 digital computer with an analog to digital converter and magnetic tape as an intermediary storage device are used. The computer configuration, the software developed, and the preliminary results are described.

  5. Students’ needs of Computer Science: learning about image processing

    Juana Marlen Tellez Reinoso


Learning image processing, specifically in the application Adobe Photoshop, is one of the objectives of the Bachelor of Education in Computer Science programme, intended to guarantee the preparation of students as future professionals and to help every citizen of our country attain an integral general culture. To that end, a tutorial-type computer application entitled "Learning Image Processing" is proposed.

  6. Realization of the computation process in the M-6000 computer for physical process automatization systems basing on CAMAC system

    Antonichev, G.M.; Vesenev, V.A.; Volkov, A.S.; Maslov, V.V.; Shilkin, I.P.; Bespalova, T.V.; Golutvin, I.A.; Nevskaya, N.A.


Software for physical experiments using CAMAC devices and the M-6000 computer is further developed. The construction principles and operation of the data acquisition system and the system generator are described. Using the generator for the data acquisition system, the experimenter implements the logic for data exchange between the CAMAC devices and the computer.

  7. Modeling and characterization of VCSEL-based avionics full-duplex ethernet (AFDX) gigabit links

    Ly, Khadijetou S.; Rissons, A.; Gambardella, E.; Bajon, D.; Mollier, J.-C.


Low cost and intrinsic performance of 850 nm Vertical Cavity Surface Emitting Lasers (VCSELs) compared to Light Emitting Diodes make them very attractive for high speed and short distance data communication links through optical fibers. Weight-saving and electromagnetic interference withstanding requirements have led to the need for a reliable solution to improve existing avionics high speed buses (e.g. AFDX) up to 1 Gbps over 100 m. To predict and optimize the performance of the link, the physical behavior of the VCSEL must be well understood. First, a theoretical study is performed through the rate equations adapted to VCSELs in large signal modulation. Averaged turn-on delays and oscillation effects are analytically computed and analyzed for different values of the on- and off-state currents. This will affect the eye pattern, timing jitter and Bit Error Rate (BER) of the signal, which must remain within IEEE 802.3 standard limits. In particular, the off-state current is minimized below threshold to allow the highest possible Extinction Ratio. At this level, the spontaneous emission is dominating and leads to significant turn-on delay, turn-on jitter and bit pattern effects. Also, the transverse multimode behavior of VCSELs, caused by Spatial Hole Burning, leads to some dispersion in the fiber and degradation of the BER. A VCSEL to Multimode Fiber coupling model is provided for prediction and optimization of modal dispersion. Lastly, turn-on delay measurements are performed on a real mock-up and results are compared with calculations.
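For below-threshold biasing, the turn-on delay discussed above has a classic closed form from the carrier rate equation, t_d = tau * ln((I_on - I_off) / (I_on - I_th)); a sketch with invented bias values (not the paper's measurements):

```python
import math

def turn_on_delay(i_on, i_off, i_th, tau_carrier):
    # Classic laser turn-on delay for an off-state bias below threshold:
    # t_d = tau * ln((I_on - I_off) / (I_on - I_th))
    assert i_off < i_th < i_on, "formula assumes below-threshold off-state"
    return tau_carrier * math.log((i_on - i_off) / (i_on - i_th))

# Illustrative: 8 mA on-state, 1 mA threshold, 1 ns carrier lifetime
d_deep = turn_on_delay(8e-3, 0.1e-3, 1e-3, 1e-9)   # off-state far below threshold
d_near = turn_on_delay(8e-3, 0.9e-3, 1e-3, 1e-9)   # off-state just below threshold
```

The trade-off the abstract describes falls out directly: pushing the off-state current further below threshold improves the extinction ratio but lengthens the turn-on delay (d_deep > d_near), worsening jitter and pattern effects.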

  8. Computer simulation of energy use, greenhouse gas emissions, and process economics of the fluid milk process.

    Tomasula, P M; Yee, W C F; McAloon, A J; Nutter, D W; Bonnaillie, L M


Energy-savings measures have been implemented in fluid milk plants to lower energy costs and the energy-related carbon dioxide (CO2) emissions. Although these measures have resulted in reductions in steam, electricity, compressed air, and refrigeration use of up to 30%, a benchmarking framework is necessary to examine the implementation of process-specific measures that would lower energy use, costs, and CO2 emissions even further. In this study, using information provided by the dairy industry and equipment vendors, a customizable model of the fluid milk process was developed for use in process design software to benchmark the electrical and fuel energy consumption and CO2 emissions of current processes. It may also be used to test the feasibility of new processing concepts to lower energy and CO2 emissions with calculation of new capital and operating costs. The accuracy of the model in predicting total energy usage of the entire fluid milk process and the pasteurization step was validated using available literature and industry energy data. Computer simulation of small (40.0 million L/yr), medium (113.6 million L/yr), and large (227.1 million L/yr) processing plants predicted the carbon footprint of milk, defined as grams of CO2 equivalents (CO2e) per kilogram of packaged milk, to within 5% of the value of 96 g of CO2e/kg of packaged milk obtained in an industry-conducted life cycle assessment and also showed, in agreement with the same study, that plant size had no effect on the carbon footprint of milk but that larger plants were more cost effective in producing milk. Analysis of the pasteurization step showed that increasing the percentage regeneration of the pasteurizer from 90 to 96% would lower its thermal energy use by almost 60% and that implementation of partial homogenization would lower electrical energy use and CO2e emissions of homogenization by 82 and 5.4%, respectively. It was also demonstrated that implementation of steps to lower non-process
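The regeneration figure is easy to check: heater duty scales with the fraction of heat not recovered, so raising regeneration from 90% to 96% cuts the duty by (0.10 - 0.04) / 0.10 = 60%, consistent with the "almost 60%" quoted. As a one-liner:

```python
def regeneration_savings(regen_old, regen_new):
    # Heater duty is proportional to (1 - regeneration fraction), so the
    # fractional thermal-energy saving from raising regeneration is:
    return 1 - (1 - regen_new) / (1 - regen_old)

saving = regeneration_savings(0.90, 0.96)  # fraction of thermal energy saved
```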

  9. Principles of computer processing of Landsat data for geologic applications

    Taranik, James V.


The main objectives of computer processing of Landsat data for geologic applications are to improve the display of image data to the analyst and to facilitate evaluation of the multispectral characteristics of the data. Interpretations of the data are made from enhanced and classified data by an analyst trained in geology. Image enhancements involve adjustments of brightness values for individual picture elements. Image classification involves determination of the brightness values of picture elements for a particular cover type. Histograms are used to display the range and frequency of occurrence of brightness values. Landsat-1 and -2 data are preprocessed at Goddard Space Flight Center (GSFC) to adjust for the detector response of the multispectral scanner (MSS). Adjustments are applied to minimize the effects of striping and to correct for bad data lines, bad line segments, and lost individual pixel data. Because illumination conditions and landscape characteristics vary considerably and detector response changes with time, the radiometric adjustments applied at GSFC are seldom perfect and some detector striping remains in Landsat data. Rotation of the Earth under the satellite and movements of the satellite platform introduce geometric distortions in the data that must also be compensated for if image data are to be correctly displayed to the data analyst. Adjustments to Landsat data are made to compensate for variable solar illumination and for atmospheric effects. Geometric registration of Landsat data involves determination of the spatial location of a pixel in the output image and the determination of a new value for the pixel. The general objective of image enhancement is to optimize the display of the data to the analyst. Contrast enhancements are employed to expand the range of brightness values in Landsat data so that the data can be efficiently recorded in a manner desired by the analyst. Spatial frequency enhancements are designed to enhance boundaries between features
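A contrast enhancement of the kind described, here a simple percentile-based linear stretch that expands the occupied brightness range to the full display range (percentile cutoffs and the synthetic band are illustrative, not a documented Landsat procedure):

```python
import numpy as np

def contrast_stretch(band, lo_pct=2, hi_pct=98):
    # Linear stretch: map the [lo, hi] percentile range of brightness
    # values onto the full 0-255 display range, clipping the tails.
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    stretched = (band.astype(float) - lo) / (hi - lo)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# Synthetic MSS-like band whose values occupy only a narrow range (40-89)
band = np.arange(40, 90, dtype=np.uint8).repeat(20).reshape(50, 20)
out = contrast_stretch(band)
```

The histogram of `band` motivates the choice of `lo` and `hi`, which is exactly the role histograms play in the workflow above.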


    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...


    I. Fisk


    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...


    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  13. Enhancing Manufacturing Process Education via Computer Simulation and Visualization

    Manohar, Priyadarshan A.; Acharya, Sushil; Wu, Peter


Industrially significant metal manufacturing processes such as melting, casting, rolling, forging, machining, and forming are multi-stage, complex processes that are labor, time, and capital intensive. Academic research develops mathematical models of these processes that provide a theoretical framework for understanding the process variables…

  14. Plant process computer replacements - techniques to limit installation schedules and costs

    Baker, M.D.; Olson, J.L.


    Plant process computer systems, a standard fixture in all nuclear power plants, are used to monitor and display important plant process parameters. Scanning thousands of field sensors and alarming out-of-limit values, these computer systems are heavily relied on by control room operators. The original nuclear steam supply system (NSSS) vendor for the power plant often supplied the plant process computer. Designed using sixties and seventies technology, a plant's original process computer has been obsolete for some time. Driven by increased maintenance costs and new US Nuclear Regulatory Commission regulations such as NUREG-0737, Suppl. 1, many utilities have replaced their process computers with more modern computer systems. Given that computer systems are by their nature prone to rapid obsolescence, this replacement cycle will likely repeat. A process computer replacement project can be a significant capital expenditure and must be performed during a scheduled refueling outage. The object of the installation process is to install a working system on schedule. Experience gained by supervising several computer replacement installations has taught lessons that, if applied, will shorten the schedule and limit the risk of costly delays. Examples illustrating this technique are given. This paper and these examples deal only with the installation process and assume that the replacement computer system has been adequately designed, and development and factory tested

  15. Analysis of technology requirements and potential demand for general aviation avionics systems for operation in the 1980's

    Cohn, D. M.; Kayser, J. H.; Senko, G. M.; Glenn, D. R.


    Avionics systems are identified which promise to reduce economic constraints and provide significant improvements in performance, operational capability and utility for general aviation aircraft in the 1980's.

  16. Semiautonomous Avionics-and-Sensors System for a UAV

    Shams, Qamar


Unmanned Aerial Vehicles (UAVs), autonomous or remotely controlled pilotless aircraft, have recently been thrust into the spotlight for military applications, for homeland security, and as test beds for research. In addition to these functions, there are many space applications in which lightweight, inexpensive, small UAVs can be used, e.g., to determine the chemical composition and other qualities of the atmospheres of remote planets. Moreover, on Earth, such UAVs can be used to obtain information about weather in various regions; in particular, they can be used to analyze wide-band acoustic signals to aid in determining the complex dynamics of movement of hurricanes. The Advanced Sensors and Electronics group at Langley Research Center has developed an inexpensive, small, integrated avionics-and-sensors system to be installed in a UAV that serves two purposes. The first purpose is to provide flight data to an AI (Artificial Intelligence) controller as part of an autonomous flight-control system. The second purpose is to store data from a subsystem of distributed MEMS (microelectromechanical systems) sensors. Examples of these MEMS sensors include humidity, temperature, and acoustic sensors, plus chemical sensors for detecting various vapors and other gases in the environment. The critical sensors used for flight control are a differential-pressure sensor that is part of an apparatus for determining airspeed, an absolute-pressure sensor for determining altitude, three orthogonal accelerometers for determining tilt and acceleration, and three orthogonal angular-rate detectors (gyroscopes). By using these eight sensors, it is possible to determine the orientation, height, speed, and rates of roll, pitch, and yaw of the UAV. This avionics-and-sensors system is shown in the figure. During the last few years, there has been rapid growth and advancement in the technological disciplines of MEMS, of onboard artificial-intelligence systems, and of smaller, faster, and

  17. Autonomous safety and reliability features of the K-1 avionics system

Mueller, G.E.; Kohrs, D.; Bailey, R.; Lai, G. [Kistler Aerospace Corp., Kirkland, WA (United States)]


Kistler Aerospace Corporation is developing the K-1, a fully reusable, two-stage-to-orbit launch vehicle. Both stages return to the launch site using parachutes and airbags. Initial flight operations will occur from Woomera, Australia. K-1 guidance is performed autonomously. Each stage of the K-1 employs a triplex, fault-tolerant avionics architecture, including three fault-tolerant computers and three radiation-hardened embedded GPS/INS units with a hardware voter. The K-1 has an Integrated Vehicle Health Management (IVHM) system on each stage, residing in the three vehicle computers and based on similar systems in commercial aircraft. During first-stage ascent, the IVHM system performs an Instantaneous Impact Prediction (IIP) calculation 25 times per second, initiating an abort in the event the vehicle is outside a predetermined safety corridor for at least three consecutive calculations. In this event, commands are issued to terminate thrust, separate the stages, dump all propellant in the first stage, and initiate a normal landing sequence. The second-stage flight computer calculates its ability to reach orbit along its state vector, initiating an abort sequence similar to the first stage's if it cannot. On a nominal mission, following separation, the second stage also performs calculations to assure its impact point is within a safety corridor. The K-1's guidance and control design is being tested through simulation with hardware-in-the-loop at Draper Laboratory. Kistler's verification strategy assures reliable and safe operation of the K-1. (author)
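The abort rule described (outside the safety corridor for at least three consecutive 25 Hz checks) is a simple debounce; a sketch with a hypothetical corridor test, not Kistler's flight software:

```python
def iip_abort_monitor(iip_points, in_corridor, limit=3):
    # Trigger an abort only when the instantaneous impact point falls
    # outside the safety corridor for `limit` consecutive cycles,
    # so a single transient miscomputation cannot abort the flight.
    consecutive = 0
    for point in iip_points:
        consecutive = consecutive + 1 if not in_corridor(point) else 0
        if consecutive >= limit:
            return True
    return False
```

Requiring consecutive violations is the standard way to trade a few cycles of abort latency (here 3/25 s) for immunity to single-cycle sensor or computation glitches.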

  18. A Cellular Automata Approach to Computer Vision and Image Processing.



  19. Computer processing of microscopic images of bacteria : morphometry and fluorimetry

    Wilkinson, Michael H.F.; Jansen, Gijsbert J.; Waaij, Dirk van der


Several techniques that use computer analysis of microscopic images have been developed to study the complicated microbial flora in the human intestine, including measuring the shape and fluorescence intensity of bacteria. These techniques allow rapid assessment of changes in the intestinal flora.


    I. Fisk


Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  1. Cloud Computing and the Innovation Process of Technology Consulting

    Isse, Jordi


    Cloud Computing is heralded as the next big thing in enterprise IT. It will likely have a growing impact on IT and business activities in many organizations. It is changing the way IT departments work in order to gain competitive advantages and meet the needs of the global economy. Accenture currently has an advantage because it is developing innovation internally and also bringing innovation from outside into its current offering. However, what is of particular interest in this r...

  2. Computer Processing and Display of Positron Scintigrams and Dynamic Function Curves

    Wilensky, S.; Ashare, A. B.; Pizer, S. M.; Hoop, B. Jr.; Brownell, G. L. [Massachusetts General Hospital, Boston, MA (United States)


    A computer processing and display system for handling radioisotope data is described. The system has been used to upgrade and display brain scans and to process dynamic function curves. The hardware and software are described, and results are presented. (author)

  3. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    De Salvo, A.


    For some years now, the LHC experiments at CERN have been successfully using Grid Computing technologies for their distributed data processing activities on a global scale. Recently, the experience gained with the current systems has allowed the design of the future Computing Models, involving new technologies such as Cloud Computing, virtualization, and high-performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  4. Thinking processes used by high-performing students in a computer programming task

    Marietjie Havenga


    Full Text Available Computer programmers must be able to understand programming source code and write programs that execute complex tasks to solve real-world problems. This article is a transdisciplinary study at the intersection of computer programming, education and psychology. It outlines the role of mental processes in programming and indicates how successful thinking processes can support computer science students in writing correct and well-defined programs. A mixed-methods approach was used to better understand the thinking activities and programming processes of participating students. Data collection involved both computer programs and students’ reflective thinking processes recorded in their journals. This enabled analysis of psychological dimensions of participants’ thinking processes and their problem-solving activities as they considered a programming problem. Findings indicate that the cognitive, reflective and psychological processes used by high-performing programmers contributed to their success in solving a complex programming problem. Based on the thinking processes of high performers, we propose a model of integrated thinking processes, which can support computer programming students. Keywords: Computer programming, education, mixed methods research, thinking processes. Disciplines: Computer programming, education, psychology

  5. Computer Simulation of Bound Component Washing To Minimize Processing Costs

    Dagmar Janáčová


    Full Text Available In this paper we focus on the optimization of washing processes, because many technological processes are characterized by large consumption of water, electrical energy and auxiliary chemicals. For this reason it is very important to deal with them. For the optimization of the washing process it is possible to use an indirect modelling approach, based on mathematical models derived from study of the physical mechanism of the operation. The process is of a diffusion character and is characterized by the value of the effective diffusion coefficient and the so-called binding strength of the removed component to the solid phase. The mentioned parameters belong to the input data needed for automatic control of the washing process.

  6. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).


    ... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account XX-27-46). 1242.46 Section 1242.46 Transportation Other Regulations Relating to Transportation... RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46...

  7. 77 FR 38826 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...


    ... Devices, Portable Music and Data Processing Devices, Computers and Components Thereof, Commission Decision... importation of certain wireless communication devices, portable music and data processing devices, computers... further alleges the existence of a domestic industry. The Commission's notice of investigation named Apple...

  8. 77 FR 52759 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...


    ... Devices, Portable Music and Data Processing Devices, Computers and Components Thereof; Notice of... communication devices, portable music and data processing devices, computers and components thereof by reason of... complaint further alleges the existence of a domestic industry. The Commission's notice of investigation...

  9. 77 FR 58576 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...


    ... Devices, Portable Music and Data Processing Devices, Computers, and Components Thereof; Institution of... communication devices, portable music and data processing devices, computers, and components thereof by reason... alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The...

  10. 78 FR 12785 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...


    ... Devices, Portable Music and Data Processing Devices, Computers and Components Thereof; Commission Decision... importation of certain wireless communication devices, portable music and data processing devices, computers... further alleges the existence of a domestic industry. The Commission's notice of investigation named Apple...

  11. Expanding AirSTAR Capability for Flight Research in an Existing Avionics Design

    Laughter, Sean A.


    The NASA Airborne Subscale Transport Aircraft Research (AirSTAR) project is an Unmanned Aerial Systems (UAS) test bed for experimental flight control laws and vehicle dynamics research. During its development, the test bed has gone through a number of system permutations, each meant to add functionality to the concept of operations of the system. This enabled the build-up of not only the system itself, but also the support infrastructure and processes necessary to support flight operations. These permutations were grouped into project phases, and the move from Phase-III to Phase-IV was marked by a significant increase in research capability and necessary safety systems due to the integration of an Internal Pilot into the control system chain already established for the External Pilot. The major system changes in Phase-IV operations necessitated a new safety and failsafe system to properly integrate both the Internal and External Pilots and to meet acceptable project safety margins. This work involved retrofitting an existing data system into the evolved concept of operations. Moving from the first Phase-IV aircraft to the dynamically scaled aircraft further involved restructuring the system to better guard against electromagnetic interference (EMI), and the entire avionics wiring harness was redesigned in order to facilitate better maintenance and access to onboard electronics. This paper explores the retrofit and harness re-design and how they integrate with the evolved Phase-IV operations.

  12. A knowledge-based flight status monitor for real-time application in digital avionics systems

    Duke, E. L.; Disbrow, J. D.; Butler, G. F.


    The Dryden Flight Research Facility of the National Aeronautics and Space Administration (NASA) Ames Research Center (Ames-Dryden) is the principal NASA facility for the flight testing and evaluation of new and complex avionics systems. To aid in the interpretation of system health and status data, a knowledge-based flight status monitor was designed. The monitor was designed to use fault indicators from the onboard system which are telemetered to the ground and processed by a rule-based model of the aircraft failure management system to give timely advice and recommendations in the mission control room. One of the important constraints on the flight status monitor is the need to operate in real time, and to pursue this aspect, a joint research activity between NASA Ames-Dryden and the Royal Aerospace Establishment (RAE) on real-time knowledge-based systems was established. Under this agreement, the original LISP knowledge base for the flight status monitor was reimplemented using the intelligent knowledge-based system toolkit, MUSE, which was developed under RAE sponsorship. Details of the flight status monitor and the MUSE implementation are presented.
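A rule-based failure-management model of the kind described can be illustrated with a minimal sketch. The rules and advisory messages below are invented for illustration and do not reflect the actual LISP or MUSE knowledge base:

```python
# Each rule pairs a condition over telemetered fault flags with advice
# for the mission control room; rules are ordered most-specific first.
RULES = [
    (lambda f: f.get("hydraulic_a") and f.get("hydraulic_b"),
     "Dual hydraulic failure: recommend immediate recovery"),
    (lambda f: f.get("hydraulic_a") or f.get("hydraulic_b"),
     "Single hydraulic failure: monitor remaining system"),
]

def advise(fault_flags):
    """Return the advice of the first rule whose condition matches."""
    for condition, advice in RULES:
        if condition(fault_flags):
            return advice
    return "All monitored systems nominal"
```

Ordering rules from most to least specific is one simple way to give timely, unambiguous advice; a production system like MUSE manages far larger rule sets with real-time constraints.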

  13. Self-Contained Avionics Sensing and Flight Control System for Small Unmanned Aerial Vehicle

    Shams, Qamar A. (Inventor); Logan, Michael J. (Inventor); Fox, Robert L. (Inventor); Fox, legal representative, Christopher L. (Inventor); Fox, legal representative, Melanie L. (Inventor); Ingham, John C. (Inventor); Laughter, Sean A. (Inventor); Kuhn, III, Theodore R. (Inventor); Adams, James K. (Inventor); Babel, III, Walter C. (Inventor)


    A self-contained avionics sensing and flight control system is provided for an unmanned aerial vehicle (UAV). The system includes sensors for sensing flight control parameters and surveillance parameters, and a Global Positioning System (GPS) receiver. Flight control parameters and location signals are processed to generate flight control signals. A Field Programmable Gate Array (FPGA) is configured to provide a look-up table storing sets of values with each set being associated with a servo mechanism mounted on the UAV and with each value in each set indicating a unique duty cycle for the servo mechanism associated therewith. Each value in each set is further indexed to a bit position indicative of a unique percentage of a maximum duty cycle for the servo mechanism associated therewith. The FPGA is further configured to provide a plurality of pulse width modulation (PWM) generators coupled to the look-up table. Each PWM generator is associated with and adapted to be coupled to one of the servo mechanisms.
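The duty-cycle look-up scheme can be mimicked in software. The table contents and servo name below are invented, and the real design implements this in FPGA fabric rather than code:

```python
# Hypothetical per-servo table: entry i holds the duty cycle (percent of
# maximum) selected when bit position i is commanded.
SERVO_TABLE = {
    "elevator": [5 * (i + 1) for i in range(16)],  # 5%, 10%, ..., 80%
}

def pwm_high_time_us(servo, index, period_us=20000):
    """High time of one PWM frame (a 20 ms frame is typical for servos)."""
    duty_pct = SERVO_TABLE[servo][index]
    return period_us * duty_pct // 100
```

In the patented design, each PWM generator reads its servo's table entry directly, so commanding a servo reduces to selecting a bit position.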

  14. 3D data processing with advanced computer graphics tools

    Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott


    Often, the 3-D raw data coming from an optical profilometer contain spiky noise and an irregular grid, which make the data difficult to analyze and difficult to store because of their enormously large size. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data from an optical profilometer, and by rapidly re-sampling the raw data into regular grids at any pixel size and any orientation with advanced computer graphics tools. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
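The paper's spike-reduction method relies on graphics tools, but the basic idea of rejecting spiky outliers can be shown with a generic median filter; this is a stand-in sketch, not the authors' algorithm:

```python
from statistics import median

def despike(samples, window=3, threshold=5.0):
    """Replace any sample deviating from its local median by more than
    `threshold` with that median; a classic treatment for spiky noise."""
    half = window // 2
    out = list(samples)
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        m = median(samples[lo:hi])
        if abs(samples[i] - m) > threshold:
            out[i] = m
    return out
```

A median, unlike a mean, is insensitive to a single spike inside the window, which is why it is the usual choice for this kind of cleanup.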

  15. Radar Data Processing Using a Distributed Computational System


    objects to processors must reduce Toc(N) (i.e., the time to compute on N nodes) [Ref. 28]. Time spent communicating can represent a degradation ... Sistemas e Computação, s/ data. [9] Vilhena R., "Introdução aos Algoritmos para Processamento de Marcações e Distâncias", Escola Naval - Notas de Aula - Automação de Sistemas Navais, s/ data. [10] Averbuch A., Itzikowitz S., and Kapon T., "Parallel Implementation of Multiple Model Tracking"

  16. Dual-Energy Computed Tomography: Image Acquisition, Processing, and Workflow.

    Megibow, Alec J; Kambadakone, Avinash; Ananthakrishnan, Lakshmi


    Dual energy computed tomography has been available for more than 10 years; however, it is currently on the cusp of widespread clinical use. The way dual energy data are acquired and assembled must be appreciated at the clinical level so that the various reconstruction types can extend its diagnostic power. The type of scanner that is present in a given practice dictates the way in which the dual energy data can be presented and used. This article compares and contrasts how dual source, rapid kV switching, and spectral technologies acquire and present dual energy reconstructions to practicing radiologists. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Computational simulation of laser heat processing of materials

    Shankar, Vijaya; Gnanamuthu, Daniel


    A computational model simulating the laser heat treatment of AISI 4140 steel plates with a CW CO2 laser beam has been developed on the basis of the three-dimensional, time-dependent heat equation (subject to the appropriate boundary conditions). The solution method is based on Newton iteration applied to an approximately factorized (triple-factored) form of the equation. The method is implicit and time-accurate; maintaining time-accuracy in the numerical formulation is noted to be critical for the simulation of finite-length workpieces with a finite laser-beam dwell time.
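The study solves the full three-dimensional heat equation implicitly with Newton iteration; as a much-simplified illustration of the underlying finite-difference idea, a one-dimensional explicit step of u_t = alpha * u_xx looks like this (a teaching sketch, not the paper's scheme):

```python
def heat_step(u, alpha, dx, dt):
    """One explicit time step of the 1-D heat equation with fixed
    (Dirichlet) end temperatures; stable only when r <= 0.5."""
    r = alpha * dt / dx ** 2
    return [u[0]] + [
        u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]
```

An implicit, factorized scheme like the paper's removes this stability restriction on the time step, at the cost of solving a linear system each step.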

  18. Signal validation with control-room information-processing computers

    Belblidia, L.A.; Carlson, R.W.; Russell, J.L. Jr.


    One of the 'lessons learned' from the Three Mile Island accident focuses upon the need for a validated source of plant-status information in the control room. The utilization of computer-generated graphics to display the readings of the major plant instrumentation has introduced the capability of validating signals prior to their presentation to the reactor operations staff. The current operations philosophies allow the operator a quick look at the gauges to form an impression of the fraction of full scale as the basis for knowledge of the current plant conditions. After the introduction of a computer-based information-display system such as the Safety Parameter Display System (SPDS), operational decisions can be based upon precise knowledge of the parameters that define the operation of the reactor and auxiliary systems. The principal impact of this system on the operator will be to remove the continuing concern for the validity of the instruments which provide the information that governs the operator's decisions. (author)
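One common signal-validation technique for redundant instrument channels is mid-value selection with a disagreement flag; the abstract does not specify the SPDS internals, so the sketch below is generic:

```python
def validated_reading(channels, tolerance):
    """Mid-value select across redundant sensor channels.

    Returns (value, ok): the middle value of the sorted readings, and a
    flag that is False when any channel disagrees beyond `tolerance`.
    """
    ordered = sorted(channels)
    mid = ordered[len(ordered) // 2]
    ok = all(abs(x - mid) <= tolerance for x in channels)
    return mid, ok
```

With three channels, a single failed instrument can neither drag the displayed value off nor go unnoticed, which is the essence of presenting validated rather than raw readings.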

  19. Object oriented business process modelling in RFID applied computing environment

    Zhao, X.; Liu, Chengfei; Lin, T.; Ranasinghe, D.C.; Sheng, Q.Z.


    As a tracking technology, Radio Frequency Identification (RFID) is now widely applied to enhance the context awareness of enterprise information systems. Such awareness provides great opportunities to facilitate business process automation and thereby improve operation efficiency and accuracy. With

  20. Computer Aided Synthesis of Innovative Processes: Renewable Adipic Acid Production

    Rosengarta, Alessandro; Bertran, Maria-Ona; Manenti, Flavio


    A promising biotechnological route for the production of adipic acid from renewables has been evaluated, applying a systematic methodology for process network synthesis and optimization. The method allows organizing in a structured database the available knowledge from different sources (prelimin...

  1. CIPSS [computer-integrated process and safeguards system]: The integration of computer-integrated manufacturing and robotics with safeguards, security, and process operations

    Leonard, R.S.; Evans, J.C.


    This poster session describes the computer-integrated process and safeguards system (CIPSS). The CIPSS combines systems developed for factory automation and automated mechanical functions (robots) with varying degrees of intelligence (expert systems) to create an integrated system that would satisfy current and emerging security and safeguards requirements. Specifically, CIPSS is an extension of the automated physical security functions concepts. The CIPSS also incorporates the concepts of computer-integrated manufacturing (CIM) with integrated safeguards concepts, and draws upon the Defense Advance Research Project Agency's (DARPA's) strategic computing program

  2. PEAC: A Power-Efficient Adaptive Computing Technology for Enabling Swarm of Small Spacecraft and Deployable Mini-Payloads

    National Aeronautics and Space Administration — This task is to develop and demonstrate a path-to-flight and power-adaptive avionics technology PEAC (Power Efficient Adaptive Computing). PEAC will enable emerging...

  3. Further improvement in ABWR (part-4) open distributed plant process computer system

    Makino, Shigenori; Hatori, Yoshinori


    In the Japanese nuclear industry, the electric power companies have promoted plant process computer (PPC) technology for nuclear power plants (NPPs). When the PPC was first introduced to NPPs, large-scale customized computers were applied because of very tight requirements such as high reliability and high-speed processing. In recent years, the large computer market has driven remarkable progress in engineering workstation (EWS) and personal computer (PC) technology. Moreover, because data-transmission technology has progressed at the same time, worldwide computer networks have been established. Thanks to progress in both technologies, distributed computer systems have become available at a reasonable price, and Tokyo Electric Power Company (TEPCO) is trying to apply one to the PPC of an NPP. (author)

  4. Development of the computer-aided process planning (CAPP) system for polymer injection molds manufacturing

    J. Tepić


    Full Text Available The beginning of production and sale of polymer products largely depends on mold manufacturing. The costs of mold manufacturing have a significant share in the final price of a product. The best way to improve and rationalize the polymer injection mold production process is through mold design automation and manufacturing process planning automation. This paper reviews the development of a dedicated process planning system for manufacturing molds for injection molding, which integrates computer-aided design (CAD), computer-aided process planning (CAPP) and computer-aided manufacturing (CAM) technologies.

  5. The modernization of the process computer of the Trillo Nuclear Power Plant

    Martin Aparicio, J.; Atanasio, J.


    The paper describes the modernization of the process computer of the Trillo Nuclear Power Plant. The process computer functions have been incorporated into the non-safety I&C platform selected at Trillo NPP: the Siemens SPPA-T2000 OM690 (formerly known as Teleperm XP). The upgrade of the human-machine interface of the control room has been included in the project. The modernization project has followed the same development process used in the upgrade of the process computers of German PWR nuclear power plants. (Author)

  6. Avionics Systems Laboratory/Building 16. Historical Documentation

    Slovinac, Patricia; Deming, Joan


    As part of a nation-wide study, in September 2006, a historical survey and evaluation of NASA-owned and managed facilities was conducted by NASA's Lyndon B. Johnson Space Center (JSC) in Houston, Texas. The results of this study are presented in a report entitled, "Survey and Evaluation of NASA-owned Historic Facilities and Properties in the Context of the U.S. Space Shuttle Program, Lyndon B. Johnson Space Center, Houston, Texas," prepared in November 2007 by NASA JSC's contractor, Archaeological Consultants, Inc. As a result of this survey, the Avionics Systems Laboratory (Building 16) was determined eligible for listing in the NRHP, with concurrence by the Texas State Historic Preservation Officer (SHPO). The survey concluded that Building 16 is eligible for the NRHP under Criteria A and C in the context of the U.S. Space Shuttle program (1969-2010). Because it has achieved significance within the past 50 years, Criteria Consideration G applies. At the time of this documentation, Building 16 was still used to support the SSP as an engineering research facility, which is also sometimes used for astronaut training. This documentation package precedes any undertaking as defined by Section 106 of the NHPA, as amended, and implemented in 36 CFR Part 800, as NASA JSC has decided to proactively pursue efforts to mitigate the potential adverse effects of any future modifications to the facility. It includes a historical summary of the Space Shuttle program; the history of JSC in relation to the SSP; a narrative of the history of Building 16 and how it supported the SSP; and a physical description of the structure. In addition, photographs documenting the construction and historical use of Building 16 in support of the SSP, as well as photographs of the facility documenting the existing conditions, special technological features, and engineering details, are included. A contact sheet printed on archival paper and an electronic copy of the work product on CD are also included.

  7. Software Development Processes Applied to Computational Icing Simulation

    Levinson, Laurie H.; Potapczuk, Mark G.; Mellor, Pamela A.


    The development of computational icing simulation methods is making the transition from research to commonplace use in design and certification efforts. As such, standards of code management, design validation, and documentation must be adjusted to accommodate the increased expectations of the user community with respect to accuracy, reliability, capability, and usability. This paper discusses these concepts with regard to current and future icing simulation code development efforts as implemented by the Icing Branch of the NASA Lewis Research Center in collaboration with the NASA Lewis Engineering Design and Analysis Division. With the application of the techniques outlined in this paper, the LEWICE ice accretion code has become a more stable and reliable software product.

  8. Outsourcing Set Intersection Computation Based on Bloom Filter for Privacy Preservation in Multimedia Processing

    Hongliang Zhu


    Full Text Available With the development of cloud computing, the advantages of low cost and high computation ability meet the demands of the complicated computation of multimedia processing. Outsourcing computation to the cloud enables users with limited computing resources to store and process distributed multimedia application data without installing multimedia application software on local computer terminals, but the main problem is how to protect the security of user data in untrusted public cloud services. In recent years, privacy-preserving outsourcing computation has become one of the most common methods of solving the security problems of cloud computing. However, existing schemes cannot meet the needs of large numbers of nodes and dynamic topologies. In this paper, we introduce a novel privacy-preserving outsourcing computation method which combines the GM homomorphic encryption scheme and a Bloom filter to solve this problem, and we propose a new privacy-preserving outsourced set intersection computation protocol. Results show that the new protocol resolves the privacy-preserving outsourced set intersection computation problem without increasing the complexity or the false positive probability. Besides, the number of participants, the size of the input secret sets, and the online time of participants are not limited.
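The Bloom-filter half of such a protocol can be sketched as follows; the GM homomorphic-encryption layer that actually protects the filter in the paper is omitted, and the hash count and filter size here are arbitrary choices:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: membership tests may yield false
    positives but never false negatives."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for k in range(self.hashes):
            digest = hashlib.sha256(f"{k}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

def candidate_intersection(bloom, local_set):
    """One party keeps only the local items the other party's filter
    may contain; false positives are then resolved cryptographically."""
    return {x for x in local_set if x in bloom}
```

Because the filter is a fixed-size bit array, its cost does not grow with set size, which is what makes it attractive for outsourced set intersection.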

  9. ALSAN - A system for disturbance analysis by process computers

    Felkel, L.; Grumbach, R.


    The program system ALSAN has been developed to process the large number of signals arising from a disturbance in a complex technical process, to recognize the information important for settling the disturbance within a minimum amount of time, and to display it to the operators. Based on the results, clear decisions can be made on what counteractions have to be taken. The system works in an on-line, open-loop mode and analyses disturbances autonomously as well as in dialog with the operators. (orig.) [de

  10. The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method

    Dipakkumar Gohil


    Full Text Available The objective of this research work is to study the variation of various parameters such as stress, strain, temperature, and force during the closed-die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm used to simulate and analyze the closed-die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps, and the output values have been computed at each deformation step. The results of the simulation have been represented graphically, and suitable corrective measures are recommended if the simulation results do not agree with the theoretical values. This computer simulation approach can significantly improve productivity and reduce the energy consumption of the overall process for components manufactured by closed-die forging, contributing to efforts to reduce global warming.

  11. Use of personal computer image for processing a magnetic resonance image (MRI)

    Yamamoto, Tetsuo; Tanaka, Hitoshi


    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The image-processing software was written in Macro-Assembler under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskette. Image-processing operations, such as display of the image on the monitor, contrast enhancement, unsharp-mask contrast enhancement, various filters, edge detection, and the color histogram, were completed in 1.6 to 67 seconds, indicating that a commercial personal computer is adequate for routine clinical MRI processing. (author)
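Of the operations listed, unsharp-mask contrast enhancement is easy to illustrate. The pure-Python version below (with a 3x3 box blur, for illustration only) follows the standard formula sharpened = original + amount * (original - blurred):

```python
def unsharp_mask(img, amount=1.0):
    """Unsharp-mask enhancement of a 2-D grayscale image (list of rows)."""
    h, w = len(img), len(img[0])

    def blur(y, x):  # 3x3 box blur with edge clamping
        total = 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yy = min(max(y + dy, 0), h - 1)
                xx = min(max(x + dx, 0), w - 1)
                total += img[yy][xx]
        return total / 9

    return [[img[y][x] + amount * (img[y][x] - blur(y, x)) for x in range(w)]
            for y in range(h)]
```

Subtracting the blurred image isolates edges, and adding that difference back amplifies them, which is why the technique enhances the apparent contrast of structures in a scan.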

  12. Computer Simulation of Developmental Processes and Toxicities (SOT)

    Rationale: Recent progress in systems toxicology and synthetic biology have paved the way to new thinking about in vitro/in silico modeling of developmental processes and toxicities, both for embryological and reproductive impacts. Novel in vitro platforms such as 3D organotypic ...

  13. A Computational Evaluation of Sentence Processing Deficits in Aphasia

    Patil, Umesh; Hanne, Sandra; Burchert, Frank; De Bleser, Ria; Vasishth, Shravan


    Individuals with agrammatic Broca's aphasia experience difficulty when processing reversible non-canonical sentences. Different accounts have been proposed to explain this phenomenon. The Trace Deletion account (Grodzinsky, 1995, 2000, 2006) attributes this deficit to an impairment in syntactic representations, whereas others (e.g., Caplan,…

  14. Modeling and computational simulation of the osmotic evaporation process

    Freddy Forero Longas


    Conclusions: It was found that, for the conditions studied, the Knudsen diffusion model is the most suitable for describing the transfer of water vapor through the hydrophobic membrane. The simulations developed adequately describe the process of osmotic evaporation, providing a tool for the faster and more economical development of this technology.

  15. The Use Of Computer Intelligent Processing Technologies Among ...

    This paper assesses the awareness and usage of a novel approach to data and information processing among scientists, researchers and students in the field of environmental sciences. In depth and structured interview was conducted, targeting a population who are working in a variety of environmental issues. The data ...

  16. Use of NESTLE computer code for NPP transition process analysis

    Gal'chenko, V.V.


    A newly created model of the WWER-440 reactor using the NESTLE code is discussed, and results for 'fast' and 'slow' transition processes based on it are presented. The model was developed for the Rovno NPP reactor and can also be used for the WWER-1000 reactor at Zaporozhe NPP.

  17. The certification process of the LHCb distributed computing software

    CERN. Geneva


    DIRAC contains around 200 thousand lines of Python code, and LHCbDIRAC around 120 thousand. The testing process for each release consists of a number of steps that include static code analysis, unit tests, integration tests, regression tests, and system tests. We dubbed the full p...

  18. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    Gao, Xin


    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  19. Process-Oriented Parallel Programming with an Application to Data-Intensive Computing

    Givelberg, Edward


    We introduce process-oriented programming as a natural extension of object-oriented programming for parallel computing. It is based on the observation that every class of an object-oriented language can be instantiated as a process, accessible via a remote pointer. The introduction of process pointers requires no syntax extension, identifies processes with programming objects, and enables processes to exchange information simply by executing remote methods. Process-oriented programming is a h...


    I. Fisk


    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  1. Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment

    Davis, M. R.; Kamins, M.; Mooz, W. E.


    A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general aviation avionics equipment operating in the 1980s. Practical problems of predicting these factors are examined. The usefulness and shortcomings of different approaches for modeling cost and reliability estimates are discussed, together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing CRM implications in the absence of reliable generalized predictive models.

  2. Avionics system design requirements for the United States Coast Guard HH-65A Dolphin

    Young, D. A.


    Aerospatiale Helicopter Corporation (AHC) was awarded a contract by the United States Coast Guard for a new Short Range Recovery (SRR) Helicopter on 14 June 1979. The award was based upon an overall evaluation of performance, cost, and technical suitability. In this last respect, the SRR helicopter was required to meet a wide variety of mission needs for which the integrated avionics system has a high importance. This paper illustrates the rationale for the avionics system requirements, the system architecture, its capabilities and reliability and its adaptability to a wide variety of military and commercial purposes.

  3. Computed tomography: acquisition process, technology and current state

    Óscar Javier Espitia Mendoza


    Computed tomography is a noninvasive scanning technique widely applied in areas such as medicine, industry, and geology. This technique allows the three-dimensional reconstruction of the internal structure of an object that is illuminated with an X-ray source. The reconstruction is formed from two-dimensional cross-sectional images of the object. Each cross-section is obtained from measurements of physical phenomena, such as attenuation, dispersion, and diffraction of X-rays, resulting from their interaction with the object. In general, measurement acquisition is performed with methods based on any of these phenomena and according to various architectures classified into generations. Furthermore, in response to the need to simulate acquisition systems for CT, software dedicated to this task has been developed. The objective of this research is to determine the current state of CT techniques; to this end, a review of methods, the different architectures used for acquisition, and some of their applications is presented. Additionally, results of simulations are presented. The main contributions of this work are the detailed description of acquisition methods and the presentation of possible trends of the technique.
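    The attenuation measurements the abstract mentions follow the Beer-Lambert law, I = I0 · exp(-∫ μ ds). As a hedged illustration (the phantom, function names, and geometry below are invented for this sketch, not taken from the cited work), one parallel-beam projection of a 2-D attenuation map can be simulated and the line integrals recovered from the transmitted intensities:

    ```python
    import numpy as np

    # Hypothetical sketch: one parallel-beam projection of a 2-D attenuation
    # map ("phantom") via the Beer-Lambert law, I = I0 * exp(-sum(mu * ds)).
    def project_rows(phantom, I0=1.0, ds=1.0):
        """Line integrals along rows -> transmitted intensity per detector row."""
        line_integrals = phantom.sum(axis=1) * ds   # sum of mu along each ray
        return I0 * np.exp(-line_integrals)

    # A 4x4 phantom: uniform background with a denser inclusion.
    phantom = np.full((4, 4), 0.1)
    phantom[1:3, 1:3] = 0.5

    intensities = project_rows(phantom)
    # Recover the line integrals (the sinogram values that reconstruction
    # algorithms such as filtered back-projection consume): p = -ln(I / I0)
    p = -np.log(intensities)
    print(p)  # rows through the inclusion attenuate more
    ```

    The same -log(I/I0) step, repeated for many view angles, is what produces the sinogram that the reconstruction generations surveyed in the paper operate on.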

  4. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    Schryver, Jack C [ORNL]; Begoli, Edmon [ORNL]; Jose, Ajith [Missouri University of Science and Technology]; Griffin, Christopher [Pennsylvania State University]


    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.

  5. Computer simulation of damage processes during ion implantation

    Kang, H.J.; Shimizu, R.; Saito, T.; Yamakawa, H.


    A new version of the MARLOWE code, which enables dynamic simulation of damage processes during ion implantation to be performed, has been developed. This simulation code is based on the use of the Ziegler-Biersack-Littmark potential [in Proceedings of the International Engineering Congress on Ion Sources and Ion-Assisted Technology, edited by T. Takagi (Ionic Co., Tokyo, 1983), p. 1861] for elastic scattering and Firsov's equation [O. B. Firsov, Sov. Phys. JETP 61, 1453 (1971)] for electron stopping.

  6. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Wilson, J Adam; Williams, Justin C


    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
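    The first two steps of the signal chain described above can be sketched on the CPU for clarity. In this minimal sketch all sizes and names are illustrative, a common-average-reference matrix stands in for the study's spatial filter, and a simple periodogram replaces the auto-regressive spectral estimator; the GPU version would offload the same matrix multiply and per-channel transforms:

    ```python
    import numpy as np

    # Minimal CPU sketch of the first two BCI steps: (1) a spatial filter as a
    # matrix-matrix multiply, (2) per-channel band power. Illustrative only.
    rng = np.random.default_rng(0)
    n_channels, n_samples, fs = 8, 250, 1000       # 250 samples = 250 ms at 1 kHz
    raw = rng.standard_normal((n_channels, n_samples))

    # Common-average-reference spatial filter: subtract the mean of all channels.
    W = np.eye(n_channels) - np.ones((n_channels, n_channels)) / n_channels
    filtered = W @ raw                              # the matrix-matrix multiply

    # Band power per channel from the periodogram (e.g. a mu-band window).
    spectrum = np.abs(np.fft.rfft(filtered, axis=1)) ** 2 / n_samples
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    features = spectrum[:, band].mean(axis=1)       # one feature per channel
    print(features.shape)                           # (8,)
    ```

    Both stages are embarrassingly parallel across channels, which is why moving them to CUDA kernels yields the large speedups the study reports.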

  7. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Fei Yan


    A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI that are focused on the color information of the images. In addition, extensions and applications of the FRQI representation, such as multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers and a blueprint for quantum video encryption and decryption, have also been suggested. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.
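    The normalized state the review describes has the standard FRQI form |I⟩ = (1/2^n) Σᵢ (cos θᵢ|0⟩ + sin θᵢ|1⟩) ⊗ |i⟩, with one angle per pixel. A classical simulation of the encoding for a 2×2 image (the function name and gray-to-angle mapping here are illustrative, not from the review) can verify that the result is a valid unit-norm state:

    ```python
    import numpy as np

    # Classical simulation of an FRQI-style encoding for a 2x2 grayscale image:
    # |I> = (1/2^n) * sum_i (cos(theta_i)|0> + sin(theta_i)|1>) (x) |i>.
    def frqi_state(image):
        pixels = np.asarray(image, dtype=float).ravel()
        thetas = pixels * (np.pi / 2)              # gray in [0,1] -> angle in [0, pi/2]
        n_pos = pixels.size                        # 2^(2n) pixel positions
        state = np.zeros(2 * n_pos)
        for i, t in enumerate(thetas):
            color = np.array([np.cos(t), np.sin(t)])  # color-qubit amplitudes
            pos = np.zeros(n_pos); pos[i] = 1.0       # position basis state |i>
            state += np.kron(color, pos)              # (cos|0> + sin|1>) (x) |i>
        return state / np.sqrt(n_pos)              # overall 1/2^n normalization

    img = [[0.0, 0.5], [1.0, 0.25]]                # gray values in [0,1]
    psi = frqi_state(img)
    print(np.linalg.norm(psi))                     # ≈ 1.0 (a valid quantum state)
    ```

    Because each pixel contributes cos² θ + sin² θ = 1 and the position states are orthogonal, the 1/2^n prefactor is exactly what normalizes the state.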

  8. Computational spectrotemporal auditory model with applications to acoustical information processing

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectrotemporal modulation transfer functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional

  9. Computer-Aided Model Based Analysis for Design and Operation of a Copolymerization Process

    Lopez-Arenas, Maria Teresa; Sales-Cruz, Alfonso Mauricio; Gani, Rafiqul


    The advances in computer science and computational algorithms for process modelling, process simulation, numerical methods and design/synthesis algorithms make it advantageous and helpful to employ computer-aided modelling systems and tools for integrated process analysis. This is illustrated in this work, where, through the computer-aided modelling system ICAS-MoT, two first-principles models have been investigated with respect to design and operational issues for solution copolymerization reactors in general, and for the methyl methacrylate/vinyl acetate system in particular. Model 1 is taken from the literature and is commonly used for the low-conversion region, while Model 2 has... This will allow analysis of the process behaviour, contribute to a better understanding of the polymerization process, help to avoid unsafe conditions of operation, and to develop operational and optimizing control strategies.

  10. Computer algorithm for analyzing and processing borehole strainmeter data

    Langbein, John O.


    The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of simplifying assumptions that data are uncorrelated can lead to incorrect results and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent with these data. The technique described here uses least squares but employs data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected, tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
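    The core of the algorithm described above is least squares with a non-diagonal data covariance C that captures the temporal correlation, giving β = (XᵀC⁻¹X)⁻¹XᵀC⁻¹y and realistic standard errors from the diagonal of (XᵀC⁻¹X)⁻¹. A minimal sketch, assuming an illustrative exponential correlation model and a simplified offset-plus-rate design matrix (the real program estimates tides, pressure terms, offsets and postseismic curves simultaneously):

    ```python
    import numpy as np

    # Generalized least squares with a temporal-correlation covariance C.
    def gls(X, y, C):
        Ci = np.linalg.inv(C)
        cov_beta = np.linalg.inv(X.T @ Ci @ X)     # parameter covariance
        beta = cov_beta @ X.T @ Ci @ y
        return beta, np.sqrt(np.diag(cov_beta))    # estimates, standard errors

    t = np.arange(50, dtype=float)
    X = np.column_stack([np.ones_like(t), t])      # offset + rate (simplified)
    rng = np.random.default_rng(1)
    y = 2.0 + 0.3 * t + rng.standard_normal(t.size)

    # Illustrative correlated error model: C_ij = exp(-|t_i - t_j| / tau)
    tau = 5.0
    C = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)

    beta, se = gls(X, y, C)
    print(beta)   # roughly [2.0, 0.3]
    ```

    With an appropriate error model the standard errors in `se` are realistic, which is exactly what the abstract says is critical for judging the significance of a suspected tectonic signal.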

  11. Computer simulation of processes in the dead–end furnace

    Zavorin, A. S.; Khaustov, S. A.; Zaharushkin, N. A.


    We study turbulent combustion of natural gas in the reverse flame of a fire-tube boiler, simulated with the ANSYS Fluent 12.1.4 engineering simulation software. The aerodynamic structure and volumetric pressure fields of the flame were calculated, and the results are presented in graphical form. The effect of the twist parameter on the drag coefficient of the dead-end furnace was estimated. The finite element method was used to simulate the following processes: combustion of methane in air, radiant and convective heat transfer, and turbulence. A complete geometric model of the dead-end furnace, based on boiler drawings, was considered

  12. Digital image processing and analysis human and computer vision applications with CVIPtools

    Umbaugh, Scott E


    Section I, Introduction to Digital Image Processing and Analysis: Digital Image Processing and Analysis; Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II, Digital Image Analysis and Computer Vision: Introduction to Digital Image Analysis; Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read

  13. Modeling rainfall-runoff process using soft computing techniques

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa


    Rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measurements of independent variables of rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP) which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit for the model was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling rainfall-runoff process and is a viable alternative to other applied artificial intelligence and MLR time-series methods.
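    The goodness-of-fit measures listed above are standard and easy to state explicitly. In this sketch the toy observed/modelled series are invented for illustration; CE is the Nash-Sutcliffe coefficient of efficiency and SI the scatter index (RMSE over the observed mean):

    ```python
    import numpy as np

    # Explicit definitions of the evaluation metrics used in the study.
    def metrics(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        err = sim - obs
        rmse = np.sqrt(np.mean(err ** 2))
        mae = np.mean(np.abs(err))
        ce = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
        si = rmse / obs.mean()                                          # scatter index
        r2 = np.corrcoef(obs, sim)[0, 1] ** 2
        return {"RMSE": rmse, "MAE": mae, "CE": ce, "SI": si, "R2": r2}

    obs = [10.0, 20.0, 30.0, 40.0]   # observed runoff (e.g. l/s), toy values
    sim = [12.0, 18.0, 33.0, 39.0]   # modelled runoff, toy values
    print(metrics(obs, sim))
    ```

    CE = 1 means a perfect match and CE ≤ 0 means the model is no better than predicting the observed mean, which is why it complements RMSE and MAE in comparisons like the one above.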

  14. An investigation into the organisation and structural design of multi-computer process-control systems

    Gertenbach, W.P.


    A multi-computer system for the collection of data and control of distributed processes has been developed. The structure and organisation of this system, together with a study of the general theory of systems and of modularity, were used as a basis for an investigation into the organisation and structured design of multi-computer process-control systems. A multi-dimensional model of multi-computer process-control systems was developed. In this model a strict separation was made between organisational properties of multi-computer process-control systems and implementation-dependent properties. The model was based on the principles of hierarchical analysis and modularity. Several notions of hierarchy were found necessary to describe fully the organisation of multi-computer systems. A new concept, that of interconnection abstraction, was identified. This concept is an extrapolation of implementation techniques from the hardware implementation area to the software implementation area. A synthesis procedure which relies heavily on the above-described analysis of multi-computer process-control systems is proposed. The above-mentioned model, and a set of performance factors that depend on a set of identified design criteria, were used to constrain the set of possible solutions to the multi-computer process-control system synthesis procedure

  15. Computer-Aided Prototyping Systems (CAPS) within the software acquisition process: a case study

    Ellis, Mary Kay


    Approved for public release; distribution is unlimited. This thesis provides a case study that examines the benefits derived from the practice of computer-aided prototyping within the software acquisition process. An experimental prototyping system currently under research is the Computer Aided Prototyping System (CAPS), managed under the Computer Science department of the Naval Postgraduate School, Monterey, California. This thesis determines the qualitative value which may be realized by ...

  16. Monitoring Biological Modes in a Bioreactor Process by Computer Simulation

    Samia Semcheddine


    This paper deals with the general framework of fermentation system modeling and monitoring, focusing on the fermentation of Escherichia coli. Our main objective is to develop an algorithm for the online detection of acetate production during the culture of recombinant proteins. The analysis of the fermentation process shows that it behaves like a hybrid dynamic system with commutation (since it can be represented by 5 nonlinear models). We present a strategy of fault detection based on residual generation for detecting the different actual biological modes. The residual generation is based on nonlinear analytical redundancy relations. The simulation results show that the several modes that are hidden during the bacteria cultivation can be detected by residuals using a nonlinear dynamic model and a reduced instrumentation.
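    The residual-generation idea can be sketched with a bank of candidate models run in parallel: the active mode is the one whose prediction residual stays below a threshold. The scalar models and threshold below are invented placeholders, far simpler than the five nonlinear fermentation models in the paper:

    ```python
    # Toy sketch of mode detection by residual generation: compare each
    # candidate model's prediction against the measurement.
    def detect_mode(models, x, y_measured, threshold=0.1):
        """Return indices of models consistent with the measurement."""
        residuals = [abs(m(x) - y_measured) for m in models]
        return [i for i, r in enumerate(residuals) if r < threshold]

    models = [
        lambda x: 0.5 * x,          # mode 0: placeholder, e.g. growth on glucose
        lambda x: 0.5 * x + 1.0,    # mode 1: placeholder, e.g. acetate production
    ]
    x = 2.0
    y = 2.02                        # measurement close to mode 1's prediction
    print(detect_mode(models, x, y))  # -> [1]
    ```

    In the paper the residuals come from nonlinear analytical redundancy relations rather than direct model evaluation, but the decision logic, thresholding residuals to pick the active mode, is the same.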


    I. Fisk


    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...


    I. Fisk


      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  19. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    J. Adam Wilson


    The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally-intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  20. Computational integration of the phases and procedures of calibration processes for radioprotection

    Santos, Gleice R. dos; Thiago, Bibiana dos S.; Rocha, Felicia D.G.; Santos, Gelson P. dos; Potiens, Maria da Penha A.; Vivolo, Vitor


    This work carried out the computational integration of the process phases using only a single software package, from the arrival of the instrument at the Instrument Calibration Laboratory (LCI-IPEN) to the conclusion of the calibration procedures. Thus, the initial information, such as trademark, model, manufacturer, and owner, together with the calibration records, is entered only once, up to the emission of the calibration certificate

  1. Process-Based Development of Competence Models to Computer Science Education

    Zendler, Andreas; Seitz, Cornelia; Klaudt, Dieter


    A process model ("cpm.4.CSE") is introduced that allows the development of competence models in computer science education related to curricular requirements. It includes eight subprocesses: (a) determine competence concept, (b) determine competence areas, (c) identify computer science concepts, (d) assign competence dimensions to…

  2. A computational approach for fluid queues driven by truncated birth-death processes.

    Lenin, R.B.; Parthasarathy, P.R.


    In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the
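    The matrix at the heart of such computations is the tridiagonal generator Q of the truncated birth-death process; the fluid-queue analysis reduces to the eigenvalues and eigenvectors of a matrix built from Q and the fluid rates. A sketch of the generator construction (the rates below are arbitrary placeholders, not the paper's general rates):

    ```python
    import numpy as np

    # Generator matrix Q of a truncated birth-death process on states 0..N.
    def bd_generator(birth, death):
        N = len(birth)                 # birth[i]: i -> i+1, death[i]: i+1 -> i
        Q = np.zeros((N + 1, N + 1))
        for i in range(N):
            Q[i, i + 1] = birth[i]
            Q[i + 1, i] = death[i]
        np.fill_diagonal(Q, -Q.sum(axis=1))   # generator rows sum to zero
        return Q

    Q = bd_generator(birth=[1.0, 2.0, 3.0], death=[1.5, 1.0, 0.5])
    eigvals = np.linalg.eigvals(Q)
    print(sorted(eigvals.real))        # one eigenvalue is 0 (the stationary mode)
    ```

    Since every row of a generator sums to zero, Q always has eigenvalue 0 with the all-ones right eigenvector; the remaining spectrum drives the buffer-content distribution.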

  3. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela


    The first step in digital processing of two-dimensional computed tomography images is to identify the contour of component elements. This paper deals with the collective work of specialists in medicine and applied mathematics in computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.

  4. An Analysis of Creative Process Learning in Computer Game Activities through Player Experiences

    Inchamnan, Wilawan


    This research investigates the extent to which creative processes can be fostered through computer gaming. It focuses on creative components in games that have been specifically designed for educational purposes: Digital Game Based Learning (DGBL). A behavior analysis for measuring the creative potential of computer game activities and learning…

  5. Efficient Buffer Capacity and Scheduler Setting Computation for Soft Real-Time Stream Processing Applications

    Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef


    Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique

  6. In-line instrumentation and computer-controlled process supervision in reprocessing

    Mache, H.R.; Groll, P.

    Measuring equipment is needed for continuous monitoring of concentration in radioactive process solutions. A review is given of existing in-line apparatus and of computer-controlled data processing. A process control system is described for TAMARA, a model extraction facility for the U/HNO3/TBP system

  7. Designing scheduling concept and computer support in the food processing industries

    van Donk, DP; van Wezel, W; Gaalman, G; Bititci, US; Carrie, AS


    Food processing industries cope with a specific production process and a dynamic market. Scheduling the production process is thus important in being competitive. This paper proposes a hierarchical concept for structuring the scheduling and describes the (computer) support needed for this concept.

  8. Spatial Processing of Urban Acoustic Wave Fields from High-Performance Computations

    Ketcham, Stephen A; Wilson, D. K; Cudney, Harley H; Parker, Michael W


    .... The objective of this work is to develop spatial processing techniques for acoustic wave propagation data from three-dimensional high-performance computations to quantify scattering due to urban...

  9. Program software for the automated processing of gravity and magnetic survey data for the Mir computer

    Lyubimov, G.A.


    A presentation is made of the contents of the program software for the automated processing of gravity and magnetic survey data for the small Mir-1 and Mir-2 computers, as developed by the Voronezh geophysical expedition.

  10. An improved, computer-based, on-line gamma monitor for plutonium anion exchange process control

    Pope, N.G.; Marsh, S.F.


    An improved, low-cost, computer-based system has replaced a previously developed on-line gamma monitor. Both instruments continuously profile uranium, plutonium, and americium in the nitrate anion exchange process used to recover and purify plutonium at the Los Alamos Plutonium Facility. The latest system incorporates a personal computer that provides full-feature multichannel analyzer (MCA) capabilities by means of a single-slot, plug-in integrated circuit board. In addition to controlling all MCA functions, the computer program continuously corrects for gain shift and performs all other data processing functions. This Plutonium Recovery Operations Gamma Ray Energy Spectrometer System (PROGRESS) provides on-line process operational data essential for efficient operation. By identifying abnormal conditions in real time, it allows operators to take corrective actions promptly. The decision-making capability of the computer will be of increasing value as we implement automated process-control functions in the future. 4 refs., 6 figs
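    The continuous gain-shift correction mentioned above can be illustrated in miniature: if a known reference peak drifts from its nominal channel, the spectrum's channel axis is rescaled by the ratio nominal/observed. This is a hedged sketch, not the PROGRESS implementation; locating the peak by a plain argmax and resampling by linear interpolation are deliberate simplifications:

    ```python
    import numpy as np

    # Toy gain-shift correction: rescale the channel axis so a reference peak
    # returns to its nominal channel, then resample the counts.
    def correct_gain(spectrum, nominal_peak_channel):
        observed = int(np.argmax(spectrum))        # locate the reference peak
        gain = nominal_peak_channel / observed     # channel-axis scale factor
        channels = np.arange(len(spectrum))
        return np.interp(channels, channels * gain, spectrum)

    # Toy spectrum: a peak that has drifted from channel 100 to channel 110.
    spec = np.exp(-0.5 * ((np.arange(256) - 110) / 3.0) ** 2)
    corrected = correct_gain(spec, nominal_peak_channel=100)
    print(int(np.argmax(corrected)))   # -> 100
    ```

    A production MCA system would fit the peak centroid rather than take an argmax and would apply the correction continuously as the gain drifts, which is what lets the monitor report stable isotope peak positions over long runs.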

  11. A Generic Software Development Process Refined from Best Practices for Cloud Computing

    Soojin Park


    Cloud computing has emerged as more than just a piece of technology; it is rather a new IT paradigm. The philosophy behind cloud computing shares its view with green computing, where computing environments and resources are not subjects to own but subjects of sustained use. However, converting currently used IT services to Software as a Service (SaaS) cloud computing environments introduces several new risks. To mitigate such risks, existing software development processes must undergo significant remodeling. This study analyzes actual cases of SaaS cloud computing environment adoption as a way to derive four new best practices for software development and incorporates the identified best practices into currently-in-use processes. Furthermore, this study presents a design for generic software development processes that implement the proposed best practices. The design for the generic process has been applied to reinforce the weak points found in SaaS cloud service development practices used by eight enterprises currently developing or operating actual SaaS cloud computing services. Lastly, this study evaluates the applicability of the proposed SaaS cloud-oriented development process through analyzing the feedback data collected from actual application to the development of a SaaS cloud service, Astation.


    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...


    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...


    I. Fisk


    Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...


    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0 by processing a massive number of very large files at a high writing speed to tape. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1s: response time, data transfer rate and success rate were measured for tape-to-buffer staging of files kept exclusively on tape. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...


    I. Fisk


    Introduction The first data-taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and to make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed, and by mid-year the new tasks will have been assigned. CRB (Computing Resource Board) The Board has met twice since the last CMS week. In December it reviewed the experience of the November data-taking period and noted the positive improvements made in site readiness. It also reviewed the policy under which Tier-2s are associated with Physics Groups. Such associations are decided twice per ye...

  17. Multimedia information processing in the SWAN mobile networked computing system

    Agrawal, Prathima; Hyden, Eoin; Krzyzanowski, Paul; Srivastava, Mani B.; Trotter, John


    Anytime anywhere wireless access to databases, such as medical and inventory records, can simplify workflow management in a business, and reduce or even eliminate the cost of moving paper documents. Moreover, continual progress in wireless access technology promises to provide per-user bandwidths of the order of a few Mbps, at least in indoor environments. When combined with the emerging high-speed integrated service wired networks, it enables ubiquitous and tetherless access to and processing of multimedia information by mobile users. To leverage this synergy, an indoor wireless network based on room-sized cells and multimedia mobile end-points is being developed at AT&T Bell Laboratories. This research network, called SWAN (Seamless Wireless ATM Networking), allows users carrying multimedia end-points such as PDAs, laptops, and portable multimedia terminals to roam seamlessly while accessing multimedia data streams from the wired backbone network. A distinguishing feature of the SWAN network is its use of end-to-end ATM connectivity as opposed to the connectionless mobile-IP connectivity used by present-day wireless data LANs. This choice allows the wireless resource in a cell to be intelligently allocated amongst various ATM virtual circuits according to their quality of service requirements. But an efficient implementation of ATM in a wireless environment requires a proper mobile network architecture. In particular, the wireless link and medium-access layers need to be cognizant of the ATM traffic, while the ATM layers need to be cognizant of the mobility enabled by the wireless layers. This paper presents an overview of SWAN's network architecture, briefly discusses the issues in making ATM mobile and wireless, and describes initial multimedia applications for SWAN.
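    The per-cell allocation idea described above can be sketched as a simple weighted-share computation. This is an illustrative toy under assumed QoS weights, not SWAN's actual allocation algorithm; the circuit names and weights are hypothetical.

```python
def allocate_cell_bandwidth(capacity_kbps, circuits):
    """Split a cell's wireless capacity among ATM virtual circuits.

    circuits: dict mapping a VC id to its QoS weight (a higher weight
    means a larger share). A toy stand-in for per-cell allocation
    driven by quality-of-service requirements.
    """
    total_weight = sum(circuits.values())
    # Each virtual circuit receives capacity proportional to its weight.
    return {vc: capacity_kbps * weight / total_weight
            for vc, weight in circuits.items()}
```

    For example, with a 1000 kbps cell and weights 3/1/1 for video, audio and data, video receives 600 kbps and the other two 200 kbps each.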

  18. Processing of evaluated neutron data files in ENDF format on personal computers

    Vertes, P.


    A computer code package - FDMXPC - has been developed for processing evaluated data files in ENDF format. The earlier version of this package is supplemented with modules performing calculations using Reich-Moore and Adler-Adler resonance parameters. The processing of evaluated neutron data files by personal computers requires special programming considerations outlined in this report. The scope of the FDMXPC program system is demonstrated by means of numerical examples. (author). 5 refs, 4 figs, 4 tabs

  19. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    Uhr, Leonard


    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  20. The Impact Of Cloud Computing Technology On The Audit Process And The Audit Profession

    Yati Nurhajati


    In the future, cloud computing audits will become increasingly common. The use of this technology has influenced the audit process and poses a new challenge for both external and internal auditors: to understand IT, learn how to use cloud computing and the cloud services hired from a cloud service provider (CSP), consider the risks of cloud computing, and audit cloud computing with a risk-based audit approach. The wide range of unique risks depends on the type and model of the cloud soluti...

  1. Space shuttle program: Shuttle Avionics Integration Laboratory. Volume 7: Logistics management plan


    The logistics management plan for the shuttle avionics integration laboratory defines the organization, disciplines, and methodology for managing and controlling logistics support. Those elements requiring management include maintainability and reliability, maintenance planning, support and test equipment, supply support, transportation and handling, technical data, facilities, personnel and training, funding, and management data.

  2. Computational Modeling and High Performance Computing in Advanced Materials Processing, Synthesis, and Design


    crack growth in both the Ni and Ni-Al at higher maximum applied strain during cyclic loading. Plastic deformation was found to dominate crack...I interlaminar fracture of composite laminates incorporating with ultra-thin fibrous sheets, Journal of Reinforced Plastics and Composites, Article...process and applications of electrospun fibers. Journal of Electrostatics, 35, 151-160, (1995). [18] Taylor G.: Disintegration of water drops in an


    Contributions from I. Fisk


    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  4. A software package to process an INIS magnetic tape on the VAX computer

    Omar, A.A.; Mohamed, F.A.


    This paper presents a software package whose function is to process, on VAX computers, the magnetic tapes distributed by the Atomic Energy Agency. These tapes contain abstracts of papers in the different branches of the nuclear field and are supplied by the International Nuclear Information System (INIS). This paper has two goals. First, it gives a procedure for processing any foreign magnetic tape on VAX computers. Second, it solves the problem of reading the INIS tapes on a non-IBM computer, thus allowing specialists to benefit from the large amount of information contained on these tapes. 11 fig

  5. Using a process computer for the direct acquisition and processing of radiation protection data

    Barz, H.G.; Borchardt, K.D.; Hacke, J.; Kirschfeld, K.E.; Kluppak, B.


    A process computer will be used at the Hahn-Meitner-Institute to rationalize radiation protection measures. Approximately 150 transmitters are to be connected to this computer, in particular the radiation measuring devices of a nuclear reactor, of hot cells, and of a heavy-ion accelerator, as well as the emission and environment monitoring systems. The advantages of this method are described: central data acquisition, central alarm and stoppage information, data processing of certain measurement values, and the possibility of quick disturbance analysis. Furthermore, the authors report on the preparations already finished, particularly on the transmission of digital and analog values to the computer. (orig./HP) [de

  6. Microcomputers, desk calculators and process computers for use in radiation protection

    Burgkhardt, B.; Nolte, G.; Schollmeier, W.; Rau, G.


    The goals achievable, or to be pursued, in radiation protection measurement and evaluation by using computers are explained. As there is a large variety of computers available, offering a likewise large variety of performance levels, use of a computer is justified even for minor measuring and evaluation tasks. The subdivision into microcomputers as an installed part of measuring equipment, measuring and evaluation systems with desk calculators, and measuring and evaluation systems with process computers serves to explain the importance and extent of the measuring or evaluation tasks and the computing devices suitable for the various purposes. The special requirements to be met in order to fulfill the different tasks are discussed, both in terms of hardware and software and in terms of skill and knowledge of the personnel, and are illustrated by an example showing the usefulness of computers in radiation protection. (orig./HP) [de

  7. A model for understanding and learning of the game process of computer games

    Larsen, Lasse Juel; Majgaard, Gunver

    This abstract focuses on the computer game design process in the education of engineers at the university level. We present a model for understanding the different layers in the game design process, and an articulation of their intricate interconnectedness. Our motivation is propelled by our daily...... teaching practice of game design. We have observed a need for a design model that can quickly create an easily understandable overview of something as complex as the design process of computer games. This posed a problem: how do we present a broad overview of the game design process and at the same...... time make sure that the students learn to act and reflect like game designers? We feel our game design model manages to do just that. Our model entails a guideline for the computer game design process in its entirety, and at the same time distributes clear and easily understandable insight into a particular...

  8. Spacecraft Avionics Software Development Then and Now: Different but the Same

    Mangieri, Mark L.; Garman, John (Jack); Vice, Jason


    NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's historic Software Production Facility (SPF) was developed to serve complex avionics software solutions during an era dominated by mainframes, tape drives, and lower level programming languages. These systems have proven themselves resilient enough to serve the Shuttle Orbiter Avionics life cycle for decades. The SPF and its predecessor, the Software Development Lab (SDL) at NASA's Johnson Space Center (JSC), hosted flight software (FSW) engineering, development, simulation, and test. It was active from the beginning of Shuttle Orbiter development in 1972 through the end of the shuttle program in the summer of 2011, almost 40 years. NASA's Kedalion engineering analysis lab is on the forefront of validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture in avionics software engineering. Kedalion has validated many of the Orion project's HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics environment, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, COTS products, early rapid prototyping, in-house expertise and tools, and customer collaboration, NASA has adopted a cost effective paradigm that is currently serving Orion effectively. This paper will explore and contrast differences in technology employed over the years of NASA's space program, due largely to technological advances in hardware and software systems, while acknowledging that the basic software engineering and integration paradigms share many similarities.

  9. Software of the BESM-6 computer for automatic image processing from liquid-hydrogen bubble chambers

    Grebenikov, E.A.; Kiosa, M.N.; Kobzarev, K.K.; Kuznetsova, N.A.; Mironov, S.V.; Nasonova, L.P.


    A set of programs, which is used in "road guidance" mode on the BESM-6 computer to process picture information taken in liquid hydrogen bubble chambers, is discussed. This mode allows the system to process data from an automatic scanner (AS) taking into account the results of manual scanning. The system hardware includes: an automatic scanner, an M-6000 mini-controller and a BESM-6 computer. Software is functionally divided into the following units: computation of event mask parameters and generation of data files controlling the AS; front-end processing of data coming from the AS; filtering of track data; simulation of AS operation and gauging of the AS reference system. To speed up the overall performance, programs which receive and decode data, coming from the AS via the M-6000 controller and the data link to the BESM-6 computer, are written in machine language

  10. Formal Verification Method for Configuration of Integrated Modular Avionics System Using MARTE

    Lisong Wang


    Full Text Available The configuration information of an Integrated Modular Avionics (IMA) system includes almost all details of the whole system architecture, and is used to configure the hardware interfaces, operating system, and interactions among applications to make an IMA system work correctly and reliably. It is very important to ensure the correctness and integrity of the configuration in the IMA system design phase. In this paper, we focus on modelling and verification of the configuration information of an IMA/ARINC653 system based on MARTE (Modelling and Analysis of Real-Time and Embedded Systems). Firstly, we define a semantic mapping from key concepts of the configuration (such as modules, partitions, memory, processes, and communications) to components of the MARTE element and propose a method for model transformation between XML-formatted configuration information and MARTE models. Then we present a formal verification framework for ARINC653 system configuration based on theorem-proving techniques, including construction of corresponding REAL theorems according to the semantics of those key components of the configuration information and formal verification of theorems for the properties of IMA, such as time constraints, spatial isolation, and health monitoring. After that, the special issue of schedulability analysis of an ARINC653 system is studied. We design a hierarchical scheduling strategy with consideration of the characteristics of the ARINC653 system, and the scheduling analyzer MAST-2 is used to implement hierarchical schedule analysis. Lastly, we design a prototype tool, called Configuration Checker for ARINC653 (CC653), and two case studies show that the methods proposed in this paper are feasible and efficient.
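    One of the temporal properties being verified can be illustrated with a minimal schedulability check: ARINC 653 schedules partitions in fixed windows inside a repeating major time frame, so every window must fit within the frame and no two windows may overlap. This is a hedged sketch of that one check, not the paper's REAL/MAST-2 framework; the window values are hypothetical.

```python
def check_partition_schedule(major_frame_ms, windows):
    """Check an ARINC 653-style partition schedule.

    windows: list of (partition_name, offset_ms, duration_ms).
    Returns True when every window lies inside the major frame
    and no two windows overlap.
    """
    intervals = sorted((off, off + dur) for _, off, dur in windows)
    prev_end = 0
    for start, end in intervals:
        if start < prev_end or end > major_frame_ms:
            return False  # overlap, or window past end of frame
        prev_end = end
    return True
```

    A real configuration checker would read these windows from the XML configuration tables and check many more properties (memory isolation, port bindings, health-monitor tables) in the same spirit.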

  11. The Impact Of Cloud Computing Technology On The Audit Process And The Audit Profession

    Yati Nurhajati


    Full Text Available In the future, cloud computing audits will become increasingly common. The use of this technology has influenced the audit process and poses a new challenge for both external and internal auditors: to understand IT, learn how to use cloud computing and the cloud services hired from a cloud service provider (CSP), consider the risks of cloud computing, and audit cloud computing with a risk-based audit approach. The wide range of unique risks, which depend on the type and model of the cloud solution, the uniqueness of the client environment, and the specifics of the data or application, makes this a complicated subject. The internal audit function, through its role as an assurance function of the organization, is well positioned to assist management and the audit committee in identifying and considering the risks of using cloud computing technology, and can help determine whether those risks have been managed appropriately in a cloud computing environment. This paper assesses the current impact of cloud computing technology on the audit process and discusses the implications of future cloud computing technological trends for the auditing profession. More specifically, it provides a summary of how information technology has impacted the audit framework.

  12. Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing

    Park, Min Jae; Lee, Jae Sung; Kim, Soo Mee; Kang, Ji Yeon; Lee, Dong Soo; Park, Kwang Suk


    Conventional image reconstruction uses simplified physical models of projection. Realistic physics, for example full 3D reconstruction, takes too long to process all the data in the clinic and is infeasible on a common reconstruction machine because of the large memory required by complex physical models. We suggest a realistic distributed-memory model of fast reconstruction using parallel processing on personal computers to enable large-scale techniques. Preliminary tests of feasibility on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. The expectation-maximization algorithm was tested with a common 2D projection and with realistic 3D lines of response. Since processing slowed down (by up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Parallel processing of a program on multiple computers was available on Linux with MPICH and NFS. We verified that differences between the parallel-processed image and the single-processed image at the same iterations were below the significant digits of floating-point precision, about 6 bits. Two processors showed good parallel-computing efficiency (1.96 times). The delay phenomenon was solved by vectorization using SSE. Through this study, a realistic parallel computing system for the clinic was established, able to reconstruct with ample memory using realistic physical models that could not be simplified.
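    The expectation-maximization update at the core of such reconstructions can be sketched for a dense toy system matrix. This is a minimal single-process ML-EM sketch under assumed shapes; the distributed version described in the abstract would split the forward- and back-projection products across machines with MPI.

```python
import numpy as np

def mlem(system_matrix, measured, iterations=20):
    """ML-EM reconstruction with a dense toy system matrix.

    system_matrix: (n_detectors, n_voxels) projection model A
    measured:      (n_detectors,) measured counts y
    """
    n_voxels = system_matrix.shape[1]
    image = np.ones(n_voxels)                # positive initial image
    sensitivity = system_matrix.sum(axis=0)  # back-projection of ones
    for _ in range(iterations):
        projected = system_matrix @ image    # forward projection
        # Ratio of measured to modelled counts (guard against zeros).
        ratio = measured / np.maximum(projected, 1e-12)
        # Multiplicative update: back-project the ratio and normalize.
        image *= (system_matrix.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return image
```

    In clinical 3D reconstruction the system matrix is far too large to store densely, which is exactly the memory problem the distributed approach addresses.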

  13. Analyzing Team Based Engineering Design Process in Computer Supported Collaborative Learning

    Lee, Dong-Kuk; Lee, Eun-Sang


    The engineering design process has been largely implemented in a collaborative project format. Recently, technological advancement has helped collaborative problem solving processes such as engineering design to have efficient implementation using computers or online technology. In this study, we investigated college students' interaction and…


    The use of Computer-Aided Process Engineering (CAPE) and process simulation tools has become established industry practice to predict simulation software, new opportunities are available for the creation of a wide range of ancillary tools that can be used from within multiple sim...


    Computer-Aided Process Engineering has become established in industry as a design tool. With the establishment of the CAPE-OPEN software specifications for process simulation environments, CAPE-OPEN provides a set of "middleware" standards that enable software developers to acces...

  16. Data processing of X-ray fluorescence analysis using an electronic computer

    Yakubovich, A.L.; Przhiyalovskij, S.M.; Tsameryan, G.N.; Golubnichij, G.V.; Nikitin, S.A.


    Considered are problems of data processing for multi-element (17-element) X-ray fluorescence analysis of tungsten and molybdenum ores. The analysis was carried out using a silicon-lithium spectrometer with an energy resolution of about 300 eV and a 1024-channel analyzer. The characteristic radiation of the elements was excited with two 109Cd radioisotope sources with a total activity of 10 mCi. The measurement period was 400 s. The data obtained were processed on a computer using the "Proba-1" and "Proba-2" programs. Data-processing algorithms and computer calculation results are presented

  17. A low-cost vector processor boosting compute-intensive image processing operations

    Adorf, Hans-Martin


    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
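    The Richardson-Lucy iteration the abstract refers to can be sketched in one dimension. This is a minimal NumPy illustration of the standard multiplicative update, not the paper's i860 VP-board implementation.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Basic 1-D Richardson-Lucy deconvolution (illustrative)."""
    # Start from a flat, positive estimate of the true image.
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        # Forward model: blur the current estimate with the PSF.
        blurred = np.convolve(estimate, psf, mode="same")
        # Ratio of observed data to the model prediction.
        ratio = observed / np.maximum(blurred, 1e-12)
        # Multiplicative update with the mirrored PSF keeps positivity.
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

    The convolutions dominate the cost, which is why a vector processor (or FFT-based convolution) pays off for large images.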

  18. Computer Aided Methodology for Simultaneous Synthesis, Design & Analysis of Chemical Products-Processes

    d'Anterroches, Loïc; Gani, Rafiqul


    A new combined methodology for computer aided molecular design and process flowsheet design is presented. The methodology is based on the group contribution approach for prediction of molecular properties and design of molecules. Using the same principles, process groups have been developed together with their corresponding flowsheet property models. To represent the process flowsheets in the same way as molecules, a unique but simple notation system has been developed. The methodology has been converted into a prototype software, which has been tested with several case studies covering a wide range of problems. In this paper, only the computer aided flowsheet design related features are presented.


    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  20. Integration of distributed plant process computer systems to nuclear power generation facilities

    Bogard, T.; Finlay, K.


    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects with emphasis upon the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations of distributed system design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer and plant process instrumentation and control are evident from the variations of design features

  1. Mathematics of shape description a morphological approach to image processing and computer graphics

    Ghosh, Pijush K


    Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems using the morphological and set-theoretic approach to image processing and computer graphics by presenting a simple shape model using two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in Information Processing, Image Measurement, Shape Description, Shape Representation and Computer Graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer sciences, robotics and engineering will also benefit from this book. Key Features: Explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; Promotes interaction of image processing, geochronology and mathematics in the field of algebraic geometry; P...

  2. A computer-aided software-tool for sustainable process synthesis-intensification

    Kumar Tula, Anjan; Babi, Deenesh K.; Bottlaender, Jack


    ...operations as well as reported hybrid/intensified unit operations is large and can be difficult to manually navigate in order to determine the best process flowsheet for the production of a desired chemical product. Therefore, it is beneficial to utilize computer-aided methods and tools to enumerate, analyze and determine, within the design space, the more sustainable processes. In this paper, an integrated computer-aided software-tool that searches the design space for hybrid/intensified more sustainable process options is presented. Embedded within the software architecture are process synthesis...... constraints while also matching the design targets; they are therefore more sustainable than the base case. The application of the software-tool to the production of biodiesel is presented, highlighting the main features of the computer-aided, multi-stage, multi-scale methods that are able to determine more...


    I. Fisk


    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. A GlideInWMS installation and its components are now deployed at CERN, in addition to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  4. Application of a B&W-developed computer aided pictorial process planning system to CQMS for manufacturing process control

    Johanson, D.C.; VandeBogart, J.E.


    Babcock & Wilcox (B&W) will utilize its internally developed Computer Aided Pictorial Process Planning, or CAPPP (pronounced "cap cubed"), system to create a paperless manufacturing environment for the Collider Quadrupole Magnets (CQM). The CAPPP system consists of networked personal computer hardware and software used to: (1) generate and maintain the documents necessary for product fabrication, (2) communicate the information contained in these documents to the production floor, and (3) obtain quality assurance and manufacturing feedback information from the production floor. The purpose of this paper is to describe the various components of the CAPPP system and explain their applicability to product fabrication, specifically quality assurance functions

  5. Process control in conventional power plants. The use of computer systems

    Schievink, A; Woehrle, G


    To process information, man can rely on his knowledge and his experience. Both, however, permit only slow flows of information (about 25 bit/s) to be processed. The flow of information in a modern 700-MW coal power station that the staff has to face is about 5000 bits per second, i.e. 200 times as much as a single human brain can process. Modern computer-based process control systems are therefore needed to help the operating staff recognize and handle these complicated and rapid processes efficiently. The man-computer interface is ergonomically improved by visual display units.

  6. A computer-aided approach for achieving sustainable process design by process intensification

    Anantasarn, Nateetorn; Suriyapraphadilok, Uthaiporn; Babi, Deenesh Kavi


    Process intensification can be applied to achieve sustainable process design. In this paper, a systematic, 3-stage synthesis-intensification framework is applied to achieve more sustainable design. In stage 1, the synthesis stage, an objective function and design constraints are defined and a base case is synthesized. In stage 2, the design and analysis stage, the base case is analyzed using economic and environmental analyses to identify process hot-spots that are translated into design targets. In stage 3, the innovation design stage, phenomena-based process intensification is performed to generate flowsheet alternatives that satisfy the design targets, thereby minimizing and/or eliminating the process hot-spots. The application of the framework is highlighted through the production of para-xylene via toluene methylation, where more sustainable flowsheet alternatives that consist of hybrid...

  7. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Peng Wang


    Full Text Available In recent years, the integrated modular avionics (IMA) concept has been introduced to replace traditional federated avionics. Different avionics functions are hosted on a shared IMA platform, and IMA adopts partitioning technologies to provide logical isolation among different functions. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, the failure propagation patterns in IMA are more complex. The resource sharing introduces unintended interconnections among different functions, which makes the failure propagation modes more complex. Therefore, this paper proposes an Architecture Analysis and Design Language (AADL) based method to establish the reliability model of an IMA platform. The error behavior of individual software and hardware components in the IMA system is modeled, and the corresponding AADL error model of failure propagation among components, and between software and hardware, is given. Finally, the display function of the IMA platform is taken as an example to illustrate the effectiveness of the proposed method.
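    The failure-propagation analysis described above can be illustrated with a toy reachability computation over a component dependency graph: given which components a failure can propagate to (for instance via shared IMA resources), compute everything an initial fault can reach. This is a hedged sketch, not the paper's AADL error model, and the component names are hypothetical.

```python
from collections import deque

def propagate_failure(propagates_to, failed_component):
    """Return all components reachable from an initial failure.

    propagates_to: dict mapping a component to the components its
    failure can propagate to (edges of the propagation graph).
    """
    affected = {failed_component}
    queue = deque([failed_component])
    while queue:
        comp = queue.popleft()
        # Breadth-first walk over the propagation edges.
        for nxt in propagates_to.get(comp, ()):
            if nxt not in affected:
                affected.add(nxt)
                queue.append(nxt)
    return affected
```

    In an IMA setting the interesting edges are the unintended ones introduced by shared modules: a shared processor module failing affects every partition hosted on it, while a single partition's failure should stay contained.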

  8. An Analysis of Creative Process Learning in Computer Game Activities Through Player Experiences

    Wilawan Inchamnan


    This research investigates the extent to which creative processes can be fostered through computer gaming. It focuses on creative components in games that have been specifically designed for educational purposes: Digital Game Based Learning (DGBL). A behavior analysis for measuring the creative potential of computer game activities and learning outcomes is described. Creative components were measured by examining task motivation and domain-relevant and creativity-relevant skill factors. The r...

  9. Statistical test data selection for reliability evaluation of process computer software

    Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.


    The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases when testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined, one referring to a purely probabilistic method and the other to the mathematics of stratified sampling. (orig.)
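
    The stratified-sampling approach can be sketched as follows (a minimal illustration; the strata, variable ranges and demand probabilities are invented for the example): the input space is partitioned into strata with known probabilities of demand, and test cases are allocated to strata proportionally.

```python
import random

def stratified_test_selection(strata, n_total, seed=0):
    """Draw test cases from input-space strata, allocating draws
    proportionally to each stratum's probability of demand.

    strata: list of (probability, (low, high)) pairs; probabilities sum to 1.
    Returns sampled input values for a single variable, for illustration.
    """
    rng = random.Random(seed)
    cases = []
    for prob, (low, high) in strata:
        n = round(prob * n_total)  # proportional allocation
        cases.extend(rng.uniform(low, high) for _ in range(n))
    return cases

# Hypothetical strata for one process variable (e.g. a temperature input):
strata = [(0.7, (280.0, 300.0)),   # normal operation, most demands
          (0.2, (300.0, 320.0)),   # elevated range
          (0.1, (320.0, 350.0))]   # rare upset conditions
tests = stratified_test_selection(strata, n_total=100)
```

    With a fixed seed the selection is reproducible, which matters when a test suite must be re-run for reliability evaluation.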

  10. A practical link between medical and computer groups in image data processing

    Ollivier, J Y


    An acquisition and processing system for scintigraphic images should not be designed exclusively for computer specialists; primarily, it should be easy and quick for a nurse or a doctor to handle, and programmable by either the doctor or the computer specialist. This consideration led Intertechnique to build the CINE 200 system. The CINE 200 includes a computer and thus offers the programming facilities that are the tools of the computer specialist; moreover, it was conceived especially for clinical use and offers some functions that cannot be carried out by a classical computer with standard peripherals. In addition, the CINE 200 allows the doctor who is not a computer specialist to become familiar with programming through progressive levels of language: the first level links simple processing operations on images or curves, and the second is an interpretive language similar to BASIC and very easy to learn. Before showing the facilities the CINE 200 offers the doctor and the computer specialist, its characteristics are briefly reviewed.

  11. Large Data at Small Universities: Astronomical processing using a computer classroom

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen


    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn highly specialized programming methods. We demonstrate this concept applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray lightcurve analysis. In these scenarios we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.

  12. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek


    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: (1) critical information can be provided faster, and (2) more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs, and (2) a CUDA enabled GPU workstation. The reference platform is a dual-CPU quad-core workstation, and the total computing time of the PANTEX workflow is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
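
    The GLCM statistics underpinning PANTEX can be illustrated with a minimal sketch (illustrative only, not the PANTEX implementation): a gray-level co-occurrence matrix for a single displacement, plus the common contrast statistic derived from it. In a moving-window workflow this computation is repeated for every window position and several displacements.

```python
import numpy as np

def glcm(image, dx, dy, levels):
    """Gray-level co-occurrence matrix for one displacement (dx, dy):
    counts how often gray level i occurs at offset (dx, dy) from level j."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise counts to joint probabilities

def contrast(p):
    """GLCM contrast: sum_ij (i - j)^2 p(i, j) -- one common texture measure."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# A tiny 4-level test image; offset (1, 0) pairs each pixel with its
# right-hand neighbour.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, dx=1, dy=0, levels=4)
```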

  13. Computer-based system for processing geophysical data obtained from boreholes

    Richter, J.M.


    A diverse set of computer programs has been developed at the Lawrence Livermore National Laboratory (LLNL) to process geophysical data obtained from boreholes. These programs support such services as digitizing analog records, reading and processing raw data, cataloging and storing processed data, retrieving selected data for analysis, and generating data plots on several different devices. A variety of geophysical data types are accommodated, including both wireline logs and laboratory analyses of downhole samples. Many processing tasks are handled by means of a single, flexible, general-purpose data-manipulation program. Separate programs are available for processing data from density, gravity, velocity, and epithermal neutron logs

  14. Stream computing for biomedical signal processing: A QRS complex detection case-study.

    Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P


    Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of streams processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
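
    As a toy illustration of the streaming idea (not the authors' detector; the threshold and refractory values are invented), the sketch below processes ECG samples one at a time, as a stream processor would, and flags QRS candidates with a simple amplitude threshold plus a refractory period:

```python
import math

def detect_qrs_stream(samples, fs, threshold, refractory_s=0.2):
    """Process samples one at a time (streaming) and emit peak indices.

    A sample is flagged as a QRS candidate when it crosses `threshold`
    while outside the refractory window after the previous detection.
    """
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i, x in enumerate(samples):
        if x > threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# Synthetic "ECG": low-amplitude background with a spike every second at 250 Hz.
fs = 250
signal = [1.0 if i % fs == 0 else 0.1 * math.sin(i / 5.0) for i in range(5 * fs)]
beats = detect_qrs_stream(signal, fs, threshold=0.5)
```

    A production streams platform distributes exactly this kind of per-sample operator across many channels; the per-sample logic stays the same.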

  15. Optimal nonlinear information processing capacity in delay-based reservoir computers

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo


    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular class of time-delay-based reservoir computers that have been physically implemented using optical and electronic systems and have shown unprecedented data processing rates. Reservoir computing is well known for the ease of the associated training scheme, but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
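
    A minimal discrete-time sketch of such a delay-based reservoir (all parameter values are illustrative, not taken from the article): a single nonlinear node is time-multiplexed into N virtual nodes along the delay line, and only a linear ridge-regression readout is trained, here on a one-step memory task.

```python
import numpy as np

rng = np.random.default_rng(42)

N, gamma, eta = 20, 0.8, 0.3          # virtual nodes, feedback gain, input scaling
mask = rng.uniform(-1.0, 1.0, N)      # random input mask over virtual nodes

def run_reservoir(u):
    """Time-multiplexed delay reservoir: virtual node i is driven by the
    masked input and by the previous virtual node along the delay line."""
    X = np.zeros((len(u), N))
    prev_last = 0.0                    # x_{N-1} from the previous time step
    for t, ut in enumerate(u):
        x = np.empty(N)
        feedback = prev_last
        for i in range(N):
            x[i] = np.tanh(eta * mask[i] * ut + gamma * feedback)
            feedback = x[i]
        prev_last = x[-1]
        X[t] = x
    return X

u = rng.uniform(-0.5, 0.5, 600)
X = run_reservoir(u)
target = np.roll(u, 1); target[0] = 0.0   # task: recall the previous input

# Ridge-regression readout: the only trained part of a reservoir computer.
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
mse = float(np.mean((X @ W - target) ** 2))
```

    The sensitivity the article studies shows up here directly: changing `gamma` or `eta` changes the memory/nonlinearity trade-off, and hence `mse`, which is why a functional link between parameters and performance is preferable to blind parameter scans.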

  16. Integration of adaptive process control with computational simulation for spin-forming

    Raboin, P. J. LLNL


    Improvements in spin-forming capabilities through upgrades to a metrology and machine control system and advances in numerical simulation techniques were studied in a two year project funded by Laboratory Directed Research and Development (LDRD) at Lawrence Livermore National Laboratory. Numerical analyses were benchmarked with spin-forming experiments and computational speeds increased sufficiently to now permit actual part forming simulations. Extensive modeling activities examined the simulation speeds and capabilities of several metal forming computer codes for modeling flat plate and cylindrical spin-forming geometries. Shape memory research created the first numerical model to describe this highly unusual deformation behavior in Uranium alloys. A spin-forming metrology assessment led to sensor and data acquisition improvements that will facilitate future process accuracy enhancements, such as a metrology frame. Finally, software improvements (SmartCAM) to the manufacturing process numerically integrate the part models to the spin-forming process and to computational simulations

  17. A learnable parallel processing architecture towards unity of memory and computing.

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J


    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with a 700-fold reduction in circuit area.

  19. Importance of Cognitive and Affective Processes when Working with a Computer

    Blaž Trbižan


    Full Text Available Research Question (RQ): Why and how should human emotions be measured when working and learning with a computer? Are machines (computers, robots) implementing such binary records merely simulating cognitive phenomena and their processes, or do they actually reflect, and are therefore able to think? Purpose: To show the importance of cognitive and affective processes in computer and ICT usage, both in learning and in daily work tasks. Method: A comparative method, in which scientific findings were compared and conclusions drawn from them. Results: An individual has an active role, and the use of ICT enables, through processes of reflection and exchanges of views, an individual to resolve problems and consequently achieve excellent results at both the personal (educational) level and in business. In learning and working with computers, individuals need internal motivation. Internal motivation can be increased with positive affective processes, which also positively influence cognitive processes. Organization: Knowledge of generational characteristics is currently becoming a competitive advantage of organizations. Younger generations are growing up with computers, and both teachers and managers have to be aware of this and accommodate their teaching and business processes to the requirements of ICT. Society: In the 21st century we live in a knowledge society that is unconditionally connected to and dependent on the development of information technology. Digital literacy is an everyday concept that society is aware of, and training programmes on computer literacy are being offered for all generations. Originality: The paper presents a concise synthesis of research and authors' points of view recorded over the last 25 years, combined with our own conclusions based on observations. Limitations/Future Research: The fundamental limitation is that this is a comparative research study that compares the views and conclusions of different authors

  20. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Sergio Nesmachnow


    Full Text Available This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitalizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving accurate speedup values.

  1. An application of the process computer and CRT display system in BWR nuclear power station

    Goto, Seiichiro; Aoki, Retsu; Kawahara, Haruo; Sato, Takahisa


    A color CRT display system was combined with a process computer in some BWR nuclear power plants in Japan. Although the present control system uses the CRT display system only as an output device of the process computer, it has various advantages over a conventional control panel as an efficient plant-operator interface. The various graphic displays are classified into four categories. The first is operational guidance, which includes the display of the control rod worth minimizer and that of the rod block monitor. The second is the display of the results of core performance calculations, which include axial and radial distributions of power output, exit quality, channel flow rate, CHFR (critical heat flux ratio), FLPD (fraction of linear power density), etc. The third is the display of process variables and corresponding computational values; the readings of LPRMs, control rod positions and the process data concerning the turbines and feed water system are included in this category. The fourth category includes the differential axial power distribution between the base power distribution (obtained from TIP) and the reading of each LPRM detector, and the display of various input parameters used by the process computer. Many photographs are presented to show examples of these applications. (Aoki, K.)

  2. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Arbanas, G.; Dunn, M.E.; Wiarda, D. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)]


    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The ²³⁵U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
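
    The source of the speed-up is replacing a triple-nested loop with the optimized BLAS matrix-multiplication routine (xGEMM). A minimal CPU illustration using NumPy, whose `@` operator dispatches to a vendor BLAS such as MKL or OpenBLAS (tiny matrices here, just to verify that the two paths agree):

```python
import numpy as np

def matmul_naive(A, B):
    """Triple-nested loop, the pattern the optimized BLAS call replaces."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += A[i, p] * B[p, j]
            C[i, j] = s
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 30))
B = rng.standard_normal((30, 20))

C_loop = matmul_naive(A, B)
C_blas = A @ B              # dispatches to the BLAS *GEMM routine
```

    At the 16,000×20,000 scale quoted in the abstract, the loop version pays for interpreted indexing and poor cache use on every element, which is why the library call wins by orders of magnitude.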


  4. Spacecraft guidance, navigation, and control requirements for an intelligent plug-n-play avionics (PAPA) architecture

    Kulkarni, Nilesh; Krishnakumar, Kalmaje


    The objective of this research is to design an intelligent plug-n-play avionics system that provides a reconfigurable platform for supporting the guidance, navigation and control (GN&C) requirements of different elements of the space exploration mission. The focus of this study is the specific requirements for a spacecraft that needs to go from the Earth to the Moon and back. In this regard we identify the different GN&C problems in various phases of flight that need to be addressed in designing such a plug-n-play avionics system. The Apollo and the Space Shuttle programs provide rich literature in terms of understanding some of the general GN&C requirements for a space vehicle. The relevant literature is reviewed, which helps in narrowing down the different GN&C algorithms that need to be supported along with their individual requirements.

  5. Data processing with PC-9801 micro-computer for HCN laser scattering experiments

    Iwasaki, T.; Okajima, S.; Kawahata, K.; Tetsuka, T.; Fujita, J.


    In order to process the data of HCN laser scattering experiments, micro-computer software has been developed and applied to the measurements of density fluctuations in the JIPP T-IIU tokamak plasma. The data processing system consists of a spectrum analyzer, an SM-2100A Signal Analyzer (IWATSU ELECTRIC CO., LTD.), a PC-9801m3 micro-computer, a CRT-display and a dot-printer. The output signals from the spectrum analyzer are A/D converted and stored on a mini-floppy-disk in the signal analyzer. The software to process the data is composed of system programs and several user programs. Real time data processing is carried out for every plasma shot at 4-minute intervals by the micro-computer connected to the signal analyzer through a GP-IB interface. The time evolutions of the frequency spectrum of the density fluctuations are displayed on the CRT attached to the micro-computer and printed out on a printer sheet. In the case of data processing after experiments, the data stored on the floppy-disk of the signal analyzer are read out by using a floppy-disk unit attached to the micro-computer. After computation with the user programs, the results, such as monitored signals, frequency spectra, wave number spectra and the time evolutions of the spectrum, are displayed and printed out. In this technical report, the system, the software and the directions for use are described. (author)


    Svitlana G. Lytvynova


    Full Text Available The article analyzes the historical aspect of the formation of computer modeling as one of the promising directions of educational process development. The notion of a “system of computer modeling”, a conceptual model of a system of computer modeling (SCMod), its components (mathematical, animation, graphic, strategic), functions, principles and purposes of use are grounded. The features of the organization of students' work using SCMod, individual and group work, and the formation of subject competencies are described; the aspect of students' motivation to learn is considered. It is established that educational institutions can use SCMod at different levels and stages of training and in different contexts, which consist of interrelated physical, social, cultural and technological aspects. It is determined that the use of SCMod in general secondary schools would increase the capacity of teachers to improve the training of students in natural and mathematical subjects and contribute to the individualization of the learning process, in order to meet the pace, educational interests and capabilities of each particular student. It is substantiated that the use of SCMod in the study of natural-mathematical subjects contributes to the formation of subject competencies, develops the skills of analysis and decision-making, increases the level of digital communication, develops vigilance, raises the level of knowledge, and increases the duration of students' attention. Further research requires the justification of the process of forming students' competencies in natural-mathematical subjects and designing cognitive tasks using SCMod.

  7. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN. Geneva


    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase of data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which must be processed to select the interesting proton-proton collisions for later storage. The architecture of such a computing farm, which can process this amount of data as efficiently as possible, is a challenging task and several compute accelerator technologies are being considered.    In the high performance computing sector more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  8. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Nicholas Hinitt


    Full Text Available Next generation Graphics Processing Units (GPUs) are being considered for non-graphics applications. Millimeter wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using their general-purpose computing platform called the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the available computational capability. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally, a new algorithm to compute the FFT is proposed, which reduces interprocessor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps and achieving a processing time for a 512-point FFT of less than 200 ns using a two-GPU solution.
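
    The computational kernel here is the FFT itself. As a hedged illustration (NumPy on CPU standing in for the CUDA implementation; the function names are NumPy's, not the paper's), the following compares the O(N²) DFT definition against the O(N log N) fast transform for one 512-point block, the transform size quoted in the abstract:

```python
import numpy as np

def dft_direct(x):
    """O(N^2) matrix form of the DFT -- the computation the FFT accelerates."""
    n = len(x)
    k = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)  # DFT matrix
    return W @ x

rng = np.random.default_rng(7)
x = rng.standard_normal(512) + 1j * rng.standard_normal(512)  # one 512-point block
X_fast = np.fft.fft(x)       # O(N log N); a GPU FFT performs the same math
X_slow = dft_direct(x)
```

    The GPU work described in the abstract is about mapping exactly this computation onto many cores while minimizing inter-processor communication and memory-access cost.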

  9. Neural and Computational Mechanisms of Action Processing: Interaction between Visual and Motor Representations.

    Giese, Martin A; Rizzolatti, Giacomo


    Action recognition has received enormous interest in the field of neuroscience over the last two decades. In spite of this interest, the knowledge in terms of fundamental neural mechanisms that provide constraints for underlying computations remains rather limited. This fact stands in contrast with a wide variety of speculative theories about how action recognition might work. This review focuses on new fundamental electrophysiological results in monkeys, which provide constraints for the detailed underlying computations. In addition, we review models for action recognition and processing that have concrete mathematical implementations, as opposed to conceptual models. We think that only such implemented models can be meaningfully linked quantitatively to physiological data and have a potential to narrow down the many possible computational explanations for action recognition. In addition, only concrete implementations allow judging whether postulated computational concepts have a feasible implementation in terms of realistic neural circuits.

  10. Role of computed tomography in the integral diagnostic process of paranasal cavities tumors

    Lazarova, I.


    Results are reported of computed tomographic examination of 129 patients from 3 to 74 years of age, suspected on clinical grounds of having, or with histologically verified, tumors of the paranasal cavities. Axial and/or coronal scanning (depending on the case) was performed on a Tomoscan-310 computed tomograph, according to previously selected programs. Computed tomography was evaluated with regard to its possibilities for diagnosing tumors of the paranasal sinuses and its role in furnishing additional information in these diseases. The clear-cut differentiation on the computed tomograms both of the bone structures and of the soft tissues - muscles, vessels, connective tissue and fatty tissue spaces - is emphasized. The clinical significance of this special X-ray examination method in the preoperative period, by demonstrating the different directions in which the tumors spread, and the possibility for adequate planning of the radiotherapy field and post-therapeutic follow-up of the pathologic process, are pointed out. 5 figs., 5 refs

  11. Personal computer interface for temperature measurement in the cutting process with turning

    Trajchevski, Neven; Filipovski, Velimir; Kuzinonovski, Mikolaj


    The development of computer-aided research systems for investigating the characteristics of the surface layer creates conditions for decreasing measurement uncertainty. Especially important is the fact that the use of open, self-made measuring systems meets the demand for total control of the research process. This paper describes an original personal computer interface used in a newly built computer-aided research system for temperature measurement in machining with turning. The interface consists of an optically-coupled linear isolation amplifier and an analog-to-digital (A/D) converter. It is designed to measure the thermo-voltage generated by the natural thermocouple formed by the workpiece and the cutting tool. This is achieved by digitizing the thermo-voltage into data that are transmitted to the personal computer. The interface realization is a result of the research activity of the Faculty of Mechanical Engineering and the Faculty of Electrical Engineering in Skopje.

  12. Digital avionics systems - Overview of FAA/NASA/industry-wide briefing

    Larsen, William E.; Carro, Anthony


    The effects of incorporating digital technology into the design of aircraft on the airworthiness criteria and certification procedures for aircraft are investigated. FAA research programs aimed at providing data for the functional assessment of aircraft which use digital systems for avionics and flight control functions are discussed. The need to establish testing, assurance assessment, and configuration management technologies to insure the reliability of digital systems is discussed; consideration is given to design verification, system performance/robustness, and validation technology.

  13. Digital Systems Validation Handbook. Volume 2. Chapter 18. Avionic Data Bus Integration Technology


    ...interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion software, which make up digital... In 1984, the Sperry Corporation developed a fault-tolerant system which employed multiversion programming, voting, and monitoring for error detection and... formulate all the significant behavior of a system. MULTIVERSION PROGRAMMING: N-version programming. N-VERSION PROGRAMMING: the independent coding of a...

  14. NI-Based System for SEU Testing of Memory Chips for Avionics

    Boruzdina Anna


    This paper presents the results of integrating a National Instruments based system for Single Event Upset (SEU) testing of memory chips into a neutron generator experimental facility used for avionics SEU tests. A basic SEU testing algorithm with error correction and detection of constant (stuck) errors is presented. The issues of radiation shielding of the NI-based system are discussed and solved. Experimental examples show the applicability of the presented system for SEU memory testing under neutron irradiation.
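    The basic test algorithm mentioned above reduces to a write/read/compare loop: write a known pattern, read back under irradiation, count mismatches, and flag addresses that fail repeatedly as constant (stuck) errors. This is a hedged sketch of that idea, with memory modelled as a plain list; the names are illustrative and not the NI hardware API.

```python
from collections import Counter

PATTERN = 0x55  # alternating-bit test pattern

def count_upsets(memory, pattern=PATTERN):
    """Return (upset_count, failing_addresses) for one read pass."""
    fails = [addr for addr, word in enumerate(memory) if word != pattern]
    return len(fails), fails

def classify_constant_errors(fail_history, min_repeats=3):
    """Addresses failing in >= min_repeats passes count as constant errors."""
    counts = Counter(a for passes in fail_history for a in passes)
    return {a for a, n in counts.items() if n >= min_repeats}

mem = [PATTERN] * 8
mem[2] ^= 0x01                 # simulate a single event upset at address 2
n_upsets, fails = count_upsets(mem)
```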

  15. Honeywell Modular Automation System Computer Software Documentation for the Magnesium Hydroxide Precipitation Process

    STUBBS, A.M.


    The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP) for the Magnesium Hydroxide Precipitation Process in Rm 230C/234-5Z. The magnesium hydroxide process control software Rev 0 is being updated to include control programming for a second hot plate. The process control programming was performed by the system administrator. Software testing for the additional hot plate was performed per PFP Job Control Work Package 2Z-00-1703. The software testing was verified by Quality Control to comply with OSD-Z-184-00044, Magnesium Hydroxide Precipitation Process.

  16. Memory device sensitivity trends in aircraft's environment; Evolution de la sensibilite de composants memoires en altitude avion

    Bouchet, T.; Fourtine, S. [Aerospatiale-Matra Airbus, 31 - Toulouse (France); Calvet, M.C. [Aerospatiale-Matra Lanceur, 78 - Les Mureaux (France)


    The authors present the SEU (single event upset) sensitivity of 31 SRAMs (static random access memory) and 8 DRAMs (dynamic random access memory) according to their technologies. Two methods have been used to compute the SEU rate: the NCS (neutron cross-section) method and the BGR (burst generation rate) method; the physical data required by both methods have either been found in the scientific literature or measured directly. The use of new technologies implies a quicker time response through a dramatic reduction of chip size and of the amount of energy representing one bit. The reduction in size means that fewer particles are likely to interact with the chip, but the reduction of the critical charge implies that those interactions that do occur are more likely to upset the chip. The SEU sensitivity is thus split between these two opposing trends. Results show that for technologies beyond 0.18 µm the two trends roughly balance. Nevertheless, operational feedback shows that the number of errors is increasing: avionics requires more and more memory to perform numerical functions, so the number of bits is growing, and with it the risk of errors. As far as SEU is concerned, RAM devices are becoming less and less sensitive per bit, and DRAMs appear to be less sensitive than SRAMs. (A.C.)
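    In its simplest form, the NCS method mentioned above multiplies the neutron flux by the measured per-bit upset cross-section and the device bit count. The sketch below shows only that arithmetic; the flux and cross-section values are order-of-magnitude placeholders, not measured device data from the paper.

```python
# Simplest form of the NCS (neutron cross-section) SEU rate estimate.
# Inputs: neutron flux (n/cm^2/h), per-bit cross-section (cm^2/bit),
# and device capacity in bits. All numbers below are placeholders.

def seu_rate_per_hour(flux_n_cm2_h, sigma_cm2_per_bit, n_bits):
    """Expected upsets per device-hour under the simple NCS model."""
    return flux_n_cm2_h * sigma_cm2_per_bit * n_bits

# Example: avionics-altitude flux of order 2.2e3 n/cm^2/h, hypothetical
# cross-section 1e-14 cm^2/bit, 4 Mbit SRAM:
rate = seu_rate_per_hour(2.2e3, 1e-14, 4 * 2**20)
```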

  17. Development of COMPAS, computer aided process flowsheet design and analysis system of nuclear fuel reprocessing

    Homma, Shunji; Sakamoto, Susumu; Takanashi, Mitsuhiro; Nammo, Akihiko; Satoh, Yoshihiro; Soejima, Takayuki; Koga, Jiro; Matsumoto, Shiro


    A computer-aided process flowsheet design and analysis system, COMPAS, has been developed in order to carry out flowsheet calculations on the process flow diagram of nuclear fuel reprocessing. All equipment items in the process flowsheet diagram, such as the dissolver, mixer-settlers, and so on, are graphically visualized as icons on the bitmap display of a UNIX workstation. A flowsheet can be drawn easily with the mouse. Not only published numerical simulation codes but also a user's own codes can be used within COMPAS. Equipment specifications and the concentrations of components in each stream, displayed as tables, can be edited by the user, and calculation results can also be displayed graphically. Two examples show that COMPAS is applicable to deciding the operating conditions of the Purex process and to analyzing extraction behavior in a mixer-settler extractor. (author)


    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and Job Robot submission tools have been instrumental in site commissioning, increasing the number of sites available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a fourfold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  19. The transfer of computer processed pictures for nuclear medicine to cassette VTR

    Komaya, Akio; Takahashi, Kazue; Suzuki, Toshi


    With the increasing clinical importance of data-processing computers in nuclear medicine, their applications are now widely established. For the output of data, processed pictures, and animated pictures, some contrivance is necessary so that the information obtained can be easily appreciated and utilized. For the cine-mode display of heart wall motion in particular, it is desirable to conveniently reproduce the output images as animations for image reading at any time or place. An apparatus for this purpose has been completed using an ordinary home-use cassette VTR and a video monitor. The computer output pictures of nuclear medicine data are recorded on the VTR; recording and reproduction are possible with only a few additional components and some adjustments. Animated pictures such as the cine-mode display of heart wall motion can then be conveniently reproduced for image reading, away from the computer. (J.P.N.)

  20. Transfer of computer processed pictures for nuclear medicine to cassette VTR

    Komaya, A; Takahashi, K; Suzuki, T [Yamagata Univ. (Japan)


    With the increasing clinical importance of data-processing computers in nuclear medicine, their applications are now widely established. For the output of data, processed pictures, and animated pictures, some contrivance is necessary so that the information obtained can be easily appreciated and utilized. For the cine-mode display of heart wall motion in particular, it is desirable to conveniently reproduce the output images as animations for image reading at any time or place. An apparatus for this purpose has been completed using an ordinary home-use cassette VTR and a video monitor. The computer output pictures of nuclear medicine data are recorded on the VTR; recording and reproduction are possible with only a few additional components and some adjustments. Animated pictures such as the cine-mode display of heart wall motion can then be conveniently reproduced for image reading, away from the computer.

  1. Seismic proving test of process computer systems with a seismic floor isolation system

    Fujimoto, S.; Niwa, H.; Kondo, H.


    The authors have carried out seismic proving tests for process computer systems as a Nuclear Power Engineering Corporation (NUPEC) project sponsored by the Ministry of International Trade and Industry (MITI). This paper presents the seismic test results evaluating the functional capabilities of process computer systems with a seismic floor isolation system. The seismic floor isolation system, which isolates horizontal motion, was composed of a floor frame (13 m x 13 m), ball bearing units, and spring-damper units. A series of seismic excitation tests was carried out using a large-scale shaking table at NUPEC. The test results verified the functional capabilities of computer systems with a seismic floor isolation system during large earthquakes.

  2. Genomic signal processing methods for computation of alignment-free distances from DNA sequences.

    Borrayo, Ernesto; Mendizabal-Ruiz, E Gerardo; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Mendizabal, Adriana P; Morales, J Alejandro


    Genomic signal processing (GSP) refers to the use of digital signal processing (DSP) tools for analyzing genomic data such as DNA sequences. A possible application of GSP that has not been fully explored is the computation of the distance between a pair of sequences. In this work we present GAFD, a novel GSP alignment-free distance computation method. We introduce a DNA sequence-to-signal mapping function based on the employment of doublet values, which increases the number of possible amplitude values for the generated signal. Additionally, we explore the use of three DSP distance metrics as descriptors for categorizing DNA signal fragments. Our results indicate the feasibility of employing GAFD for computing sequence distances and the use of descriptors for characterizing DNA fragments.
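    The mapping idea described above can be illustrated with a toy version: each overlapping doublet (pair of bases) gets a distinct amplitude, yielding a 16-level signal, and two signals are compared with an ordinary DSP metric (Euclidean here). This encoding and metric are illustrative assumptions; the paper's actual GAFD mapping and descriptors may differ.

```python
# Toy doublet-value mapping: 16 possible base pairs -> amplitudes 0..15,
# giving a richer signal than single-base (4-level) encodings.

BASES = "ACGT"
DOUBLET = {a + b: i for i, (a, b) in
           enumerate((a, b) for a in BASES for b in BASES)}

def to_signal(seq):
    """Map a DNA string to its list of overlapping doublet amplitudes."""
    return [DOUBLET[seq[i:i + 2]] for i in range(len(seq) - 1)]

def euclidean(x, y):
    """Plain Euclidean distance between two equal-length signals."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

d = euclidean(to_signal("ACGTAC"), to_signal("ACGTTC"))
```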

  3. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.


    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  4. Dynamic Computation of Change Operations in Version Management of Business Process Models

    Küster, Jochen Malte; Gerth, Christian; Engels, Gregor

    Version management of business process models requires that changes can be resolved by applying change operations. In order to give a user maximal freedom concerning the application order of change operations, position parameters of change operations must be computed dynamically during change resolution. In such an approach, change operations with computed position parameters must be applicable on the model and dependencies and conflicts of change operations must be taken into account because otherwise invalid models can be constructed. In this paper, we study the concept of partially specified change operations where parameters are computed dynamically. We provide a formalization for partially specified change operations using graph transformation and provide a concept for their applicability. Based on this, we study potential dependencies and conflicts of change operations and show how these can be taken into account within change resolution. Using our approach, a user can resolve changes of business process models without being unnecessarily restricted to a certain order.

  5. All-optical quantum computing with a hybrid solid-state processing unit

    Pei Pei; Zhang Fengyang; Li Chong; Song Heshan


    We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have a prominent advantage of the insensitivity to dissipation process benefiting from the virtual excitation of subsystems. Moreover, the quantum nondemolition measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in a broader sense that different solid-state systems can merge and be integrated into one quantum processor afterward.

  6. 77 FR 65580 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...


    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-856] Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers, and Components Thereof AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International...

  7. An intercomparison of computer assisted data processing and display methods in radioisotope scintigraphy using mathematical tumours

    Houston, A.S.; Macleod, M.A.


    Several computer assisted processing and display methods are evaluated using a series of 100 normal brain scintigrams, 50 of which have had single 'mathematical tumours' superimposed. Using a standard rating system, or in some cases quantitative estimation, LROC curves are generated for each method and compared. (author)

  8. Using the calculational simulating complexes when making the computer process control systems for NPP

    Zimakov, V.N.; Chernykh, V.P.


    The problems of creating calculational-simulating complexes (CSC) and of applying them in developing software and software-hardware systems for computer-based process control at NPPs are considered. The complex is based on an all-mode, real-time mathematical model running on a dedicated computerized facility.

  9. The computer-based process information system for the 5 MW THR

    Zhang Liangju; Zhang Youhua; Liu Xu; An Zhencai; Li Baoxiang


    The computer-based process information system has effectively improved the interface between the operators and the reactor, and has been successfully used in the reactor operating environment. This article presents the design strategy, the functions realized in the system, and some advanced techniques used in system construction and software development.

  10. Measuring the impact of computer resource quality on the software development process and product

    Mcgarry, Frank; Valett, Jon; Hall, Dana


    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  11. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela


    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…

  12. Computer Use and Its Effect on the Memory Process in Young and Adults

    Alliprandini, Paula Mariza Zedu; Straub, Sandra Luzia Wrobel; Brugnera, Elisangela; de Oliveira, Tânia Pitombo; Souza, Isabela Augusta Andrade


    This work investigates the effect of computer use in the memory process in young and adults under the Perceptual and Memory experimental conditions. The memory condition involved the phases acquisition of information and recovery, on time intervals (2 min, 24 hours and 1 week) on situations of pre and post-test (before and after the participants…

  13. High Performance Parallel Processing Project: Industrial computing initiative. Progress reports for fiscal year 1995

    Koniges, A.


    This project is a package of 11 individual CRADAs plus hardware. This innovative project established a three-year multi-party collaboration that is significantly accelerating the availability of commercial massively parallel processing computing software technology to U.S. government, academic, and industrial end-users. This report contains individual presentations from nine principal investigators along with overall program information.

  14. Global optimization for integrated design and control of computationally expensive process models

    Egea, J.A.; Vries, D.; Alonso, A.A.; Banga, J.R.


    The problem of integrated design and control optimization of process plants is discussed in this paper. We consider it as a nonlinear programming problem subject to differential-algebraic constraints. This class of problems is frequently multimodal and "costly" (i.e., computationally expensive to

  15. Pipeline leak detection and location by on-line-correlation with a process computer

    Siebert, H.; Isermann, R.


    A method for leak detection in pipelines using a correlation technique is described. Recursive estimation algorithms are used for leak detection as well as for leak localisation and estimation of the leak flow. The efficiency of the methods is demonstrated with a process computer and a pipeline model operating on-line. It is shown that very small leaks can be detected. (orig.)
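    The correlation idea above in miniature: a leak's noise reaches the two ends of the pipe with a delay difference; the lag that maximizes the cross-correlation of the two sensor records gives that delay, and with the known propagation speed, the leak position. The signals here are synthetic impulses and the sign convention is a stated assumption; real systems, as in the paper, use recursive estimators on noisy records.

```python
# Leak localisation from the delay difference between two end sensors.
# Convention assumed: tau = t_A - t_B, leak distance d measured from
# sensor A, so d/v - (L - d)/v = tau  =>  d = (L + v*tau) / 2.

def best_lag(x, y, max_lag):
    """Lag of y relative to x maximizing the raw cross-correlation."""
    def corr(lag):
        return sum(x[i] * y[i + lag] for i in range(len(x))
                   if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=corr)

def leak_position(lag, dt, wave_speed, pipe_length):
    """Leak distance from sensor A, given the estimated lag in samples."""
    tau = lag * dt
    return (pipe_length + wave_speed * tau) / 2
```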

  16. A computational approach for a fluid queue driven by a truncated birth-death process

    Lenin, R.B.; Parthasarathy, P.R.


    In this paper, we consider a fluid queue driven by a truncated birth-death process with general birth and death rates. We find the equilibrium distribution of the content of the fluid buffer by computing the eigenvalues and eigenvectors of an associated real tridiagonal matrix. We provide efficient
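    A small, self-contained piece of the machinery above is the stationary distribution of the truncated birth-death process itself, obtained from detailed balance with general rates. (The paper goes further, extracting the buffer-content distribution from the eigensystem of the associated tridiagonal matrix; the rates below are arbitrary examples.)

```python
# Stationary distribution of a truncated birth-death process via
# detailed balance: pi[i+1] = pi[i] * birth[i] / death[i], normalized.

def stationary_distribution(birth, death):
    """birth[i]: rate i -> i+1 (i = 0..N-1); death[i]: rate i+1 -> i."""
    pi = [1.0]
    for lam, mu in zip(birth, death):
        pi.append(pi[-1] * lam / mu)   # detailed balance step
    total = sum(pi)
    return [p / total for p in pi]

pi = stationary_distribution([1.0, 1.0], [2.0, 2.0])  # N = 2, example rates
```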

  17. Computational models of music perception and cognition II: Domain-specific music processing

    Purwins, Hendrik; Grachten, Maarten; Herrera, Perfecto; Hazan, Amaury; Marxer, Ricard; Serra, Xavier


    In Part I [Purwins H, Herrera P, Grachten M, Hazan A, Marxer R, Serra X. Computational models of music perception and cognition I: The perceptual and cognitive processing chain. Physics of Life Reviews 2008, in press, doi:10.1016/j.plrev.2008.03.004], we addressed the study of cognitive processes that underlie auditory perception of music, and their neural correlates. The aim of the present paper is to summarize empirical findings from music cognition research that are relevant to three prominent music theoretic domains: rhythm, melody, and tonality. Attention is paid to how cognitive processes like category formation, stimulus grouping, and expectation can account for the music theoretic key concepts in these domains, such as beat, meter, voice, consonance. We give an overview of computational models that have been proposed in the literature for a variety of music processing tasks related to rhythm, melody, and tonality. Although the present state-of-the-art in computational modeling of music cognition definitely provides valuable resources for testing specific hypotheses and theories, we observe the need for models that integrate the various aspects of music perception and cognition into a single framework. Such models should be able to account for aspects that until now have only rarely been addressed in computational models of music cognition, like the active nature of perception and the development of cognitive capacities from infancy to adulthood.

  18. Fast covariance estimation for innovations computed from a spatial Gibbs point process

    Coeurjolly, Jean-Francois; Rubak, Ege

    In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...

  19. The Strategy Blueprint : A Strategy Process Computer-Aided Design Tool

    Aldea, Adina Ioana; Febriani, Tania Rizki; Daneva, Maya; Iacob, Maria Eugenia


    Strategy has always been a main concern of organizations because it dictates their direction, and therefore determines their success. Thus, organizations need to have adequate support to guide them through their strategy formulation process. The goal of this research is to develop a computer-based

  20. Map Design for Computer Processing: Literature Review and DMA Product Critique.


    outcome. ... Use only a narrow border of layer tint on each side of the contour line. ... Gridded elevation data is processed by a program called "Seurat" (Dutton, Geoffrey (1981b), The Seurat Program). ... Society of University Cartographers 6, pp. 40-45. ... French, Robert J. (1954). Pattern ...

  1. Computer Processing 10-20-30. Teacher's Manual. Senior High School Teacher Resource Manual.

    Fisher, Mel; Lautt, Ray

    Designed to help teachers meet the program objectives for the computer processing curriculum for senior high schools in the province of Alberta, Canada, this resource manual includes the following sections: (1) program objectives; (2) a flowchart of curriculum modules; (3) suggestions for short- and long-range planning; (4) sample lesson plans;…

  2. Computing the correlation between catalyst composition and its performance in the catalysed process

    Holeňa, Martin; Steinfeldt, N.; Baerns, M.; Štefka, David


    Roč. 43, 10 August (2012), s. 55-67 ISSN 0098-1354 R&D Projects: GA ČR GA201/08/0802 Institutional support: RVO:67985807 Keywords : catalysed process * catalyst performance * correlation measures * estimating correlation value * analysis of variance * regression trees Subject RIV: IN - Informatics, Computer Science Impact factor: 2.091, year: 2012

  3. Visual analysis of inter-process communication for large-scale parallel computing.

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu


    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  4. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A


    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with differing hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  5. Software designs of image processing tasks with incremental refinement of computation.

    Anastasia, Davide; Andreopoulos, Yiannis


    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycles budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
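    The bitplane idea above can be shown in miniature: split an 8-bit input into bitplanes, process the most significant plane first, and accumulate partial results so the output refines monotonically and can be truncated at any point. Here the "processing" is a simple weighted sum standing in for a transform or convolution, and all names are illustrative.

```python
# Incremental computation over bitplanes: each processed plane adds a
# correctly-weighted contribution, so stopping early yields a coarser
# but valid approximation of the full result.

def bitplanes(pixels, bits=8):
    """Yield (weight, plane) pairs from MSB to LSB."""
    for b in reversed(range(bits)):
        yield 1 << b, [(p >> b) & 1 for p in pixels]

def incremental_dot(pixels, kernel, planes_used=8):
    """Approximate dot(pixels, kernel) using only the top bitplanes."""
    acc = 0
    for i, (w, plane) in enumerate(bitplanes(pixels)):
        if i >= planes_used:
            break  # clock-cycle budget exhausted: return partial result
        acc += w * sum(p * k for p, k in zip(plane, kernel))
    return acc
```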

  6. Application of parallel computing to seismic damage process simulation of an arch dam

    Zhong Hong; Lin Gao; Li Jianbo


    The simulation of the damage process of a high arch dam subjected to strong earthquake shocks is significant for the evaluation of its performance and seismic safety, considering the catastrophic effect of dam failure. However, such numerical simulation requires rigorous computational capacity. Conventional serial computing falls short of that, and parallel computing is a fairly promising solution to this problem. The parallel finite element code PDPAD was developed for the damage prediction of arch dams, utilizing a damage model that accounts for the heterogeneity of concrete. Developed in Fortran, the code uses a master/slave programming mode, the domain decomposition method for allocation of tasks, MPI (Message Passing Interface) for communication, and solvers from the AZTEC library for the solution of large-scale equations. A speedup test showed that the performance of PDPAD was quite satisfactory. The code was employed to study the damage process of an arch dam under construction on a 4-node PC cluster, with more than one million degrees of freedom considered. The obtained damage mode was quite similar to that of a shaking table test, indicating that the proposed procedure and parallel code PDPAD have good potential for simulating the seismic damage mode of arch dams. With the rapidly growing need for massive computation emerging from engineering problems, parallel computing will find more and more applications in pertinent areas.

  7. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    Lavoie-Courchesne, S; Chouinard-Decorte, F; Doyon, J; Bellec, P; Rioux, P; Sherif, T; Rousseau, M-E; Das, S; Adalat, R; Evans, A C; Craddock, C; Margulies, D; Chu, C; Lyttelton, O


    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  8. Domain Immersion Technique And Free Surface Computations Applied To Extrusion And Mixing Processes

    Valette, Rudy; Vergnes, Bruno; Basset, Olivier; Coupez, Thierry


    This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on nodes located inside the so-called immersed domain, each subdomain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing computation of a fill factor usable as in the VOF methodology. This technique, combined with parallel computing, allows computation of the time-dependent flow of generalized Newtonian fluids, including yield stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a level set and Hamilton-Jacobi method.

  9. Computer system for the beam line data processing at JT-60 prototype neutral beam injector

    Horiike, Hiroshi; Kawai, Mikito; Ohara, Yoshihiro


    The present report describes the hardware and software of the data acquisition computer system for the prototype neutral beam injector unit for JT-60. To operate the unit, several hundred signals from the beam line components have to be measured. These are mainly differential thermometers for the coolant water and thermocouples for the beam dump components, but do not include those for the cryosystem. Since the unit operates in a series of pulses, the measurements must be made very quickly to ensure the simultaneity of the large number of measured data. The present system achieves fast data acquisition using a small computer with 128 kB of memory and measuring instruments connected through the bus. Because the data volume is too large to be processed completely by the small computer, the system is connected to the JAERI computer center: the measured data can be transferred there for calculation and the results received back. After the system was completed, the computer quickly printed out the power flow data, which had previously required much manual calculation. The system proved very useful: it enhanced the experiments at the unit, reduced labor, and made it possible to demonstrate rated operation of the unit early and to accurately estimate JT-60 NBI operation data such as the injection power. (author)

  10. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    Liao, Wen-Hwa; Qiu, Wan-Li


    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.
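    The AHP machinery the study applies (pairwise comparisons on Saaty's 1-9 scale, principal-eigenvector priority weights, and a consistency check) can be sketched as follows. The comparison values and factor names are invented for illustration; they are not the study's survey data.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three decision factors
# (cost effectiveness, software design, system architecture), Saaty's 1-9 scale
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Priority weights: principal eigenvector, normalized to sum to 1
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio (random index RI = 0.58 for a 3 x 3 matrix, from Saaty's table);
# judgments are usually accepted when CR < 0.1
n = A.shape[0]
CI = (vals[k].real - n) / (n - 1)
CR = CI / 0.58
print("weights:", np.round(w, 3), "CR:", round(CR, 3))
```

    With these illustrative judgments the first factor dominates, mirroring the study's finding that cost effectiveness ranks first.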

  11. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Noor M. Khan


    In this paper, a novel processing-efficient architecture is proposed in which a group of inexpensive, computationally limited small platforms carries out a parallelly distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively, and operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm applied to MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. The PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm and the Kalman filter. Moreover, at a low Doppler rate the proposed architecture reduces processing time by 95.83% and 82.29% relative to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively; at a high Doppler rate, the corresponding reductions are 94.12% and 77.28%.
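    As a reference for the building block being distributed, here is a minimal sequential, single-platform exponentially weighted RLS identifying an FIR channel. The channel taps, forgetting factor, and signal lengths are illustrative; this sketches the standard RLS recursion, not the paper's PDASP partitioning.

```python
import numpy as np

def rls_identify(x, d, order=4, lam=0.99, delta=1e2):
    """Exponentially weighted RLS estimating FIR channel taps from input x and output d."""
    w = np.zeros(order)
    P = delta * np.eye(order)                   # inverse correlation matrix estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]        # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)           # gain vector
        e = d[n] - w @ u                        # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])             # "true" channel taps (illustrative)
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w = rls_identify(x, d)
print(np.round(w, 3))
```

    The PDASP idea is to spread the O(order^2) update work of this recursion across several such platforms; the sequential version above is the baseline against which those gains are measured.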

  12. The Effects of Computer-Assisted Instruction of Simple Circuits on Experimental Process Skills

    Şeyma ULUKÖK


    The experimental and control groups were composed of 30 sophomores majoring in Classroom Teaching in this study, which investigated the effects of computer-assisted instruction on simple circuits on the development of experimental process skills. The instruction includes experiments and studies on simple circuits and their elements (serial, parallel, and mixed connections of resistors) covered in the Science and Technology Laboratory II course curriculum. In this study, where quantitative and qualitative methods were used together, a control list developed by the researchers was used to collect data. Results showed that the experimental process skills of sophomores in the experimental group were more developed than those of the control group. Thus, it can be said that computer-assisted instruction has a positive impact on the development of students' experimental process skills.

  13. Post-processing computational fluid dynamic simulations of gas turbine combustor

    Sturgess, G.J.; Inko-Tariah, W.P.C.; James, R.H.


    The flowfield in combustors for gas turbine engines is extremely complex. Numerical simulation of such flowfields using computational fluid dynamics techniques has much to offer the design and development engineer. It is a difficult task, but one now being attempted routinely in the industry. Such simulations yield enormous amounts of information, from which the responsible engineer must synthesize a comprehensive understanding of the complete flowfield and the processes contained therein. The complex picture so constructed must be distilled down to the essential information upon which rational development decisions can be made. The only way this can be accomplished successfully is by extensive post-processing of the calculation, which relies heavily on computer graphics and requires the enhancement provided by color. The application of one such post-processor is presented, and the strengths and weaknesses of various display techniques are illustrated.

  14. New FORTRAN computer programs to acquire and process isotopic mass-spectrometric data

    Smith, D.H.


    The computer programs described in New Computer Programs to Acquire and Process Isotopic Mass Spectrometric Data have been revised. This report describes in some detail the operation of these programs, which acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. Organization of the file structure, which is central to the entire concept, is extensively discussed with the help of numerous tables. Appendices contain flow charts and outline the file structure to help a programmer unfamiliar with the programs alter them with a minimum of lost time.

  15. Off-line data processing and display for computed tomographic images (EMI brain)

    Takizawa, Masaomi; Maruyama, Kiyoshi; Yano, Kesato; Takenaka, Eiichi.


    Processing and multi-format display of CT (EMI) scan data have been tried using an off-line small computer and an analog memory. Four or six processed CT images are displayed on the CRT by a small computer with a 16-kiloword core memory and an analog memory. The multi-format display of CT images can be selected as follows: multi-slice display, continuative multi-window display, separate multi-window display, and multi-window-level display. Electronic zooming for real-size viewing can magnify any one of the displayed images if necessary. Image subtraction, edge enhancement, smoothing, non-linear gray-scale display, and synthesized images for plane tomography reconstructed from normal CT scan data have been tried in off-line data processing. These trials demonstrated the possibility of an effective application of a database of CT images. (auth.)
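    The multi-window-level display described above maps CT numbers to gray levels through a window level and width. A minimal sketch of that mapping follows; the pixel values and window settings are illustrative, not taken from the paper.

```python
import numpy as np

def window_level(img, level, width, out_max=255):
    """Map CT numbers to display gray levels for a given window level/width:
    values below level - width/2 go black, above level + width/2 go white."""
    lo, hi = level - width / 2.0, level + width / 2.0
    g = (np.clip(img, lo, hi) - lo) / (hi - lo) * out_max
    return g.astype(np.uint8)

ct = np.array([[-1000, 0], [40, 400]])           # air, water, soft tissue, dense (HU-like)
brain = window_level(ct, level=40, width=80)     # a narrow soft-tissue window
print(brain)
```

    Selecting several (level, width) pairs and displaying the results side by side gives exactly the kind of multi-window-level format the abstract describes.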

  17. Computer Aided Design and Analysis of Separation Processes with Electrolyte Systems

    Takano, Kiyoteru; Gani, Rafiqul; Kolar, P.


    A methodology for computer aided design and analysis of separation processes involving electrolyte systems is presented. The methodology consists of three main parts. The thermodynamic part 'creates' the problem specific property model package, which is a collection of pure component and mixture property models. The design and analysis part generates process (flowsheet) alternatives, evaluates/analyses feasibility of separation and provides a visual operation path for the desired separation. The simulation part consists of a simulation/calculation engine that allows the screening and validation of process alternatives. For the simulation part, a general multi-purpose, multi-phase separation model has been developed and integrated to an existing computer aided system. Application of the design and analysis methodology is highlighted through two illustrative case studies. (C) 2000 Elsevier Science...

  18. Two-parametric model of electron beam in computational dosimetry for radiation processing

    Lazurik, V.M.; Lazurik, V.T.; Popov, G.; Zimek, Z.


    Computer simulation of the electron beam (EB) irradiation of various materials can be applied to correct and control the performance of radiation processing installations. Electron beam energy measurement methods are described in the international standards, and the measurement results can be extended by computational dosimetry. The authors have developed a computational method for determining EB energy based on a two-parametric fit of a semi-empirical model for the depth-dose distribution initiated by a mono-energetic electron beam. Analysis of a number of experiments shows that the described method can effectively account for random displacements arising from the use of an aluminum wedge with a continuous strip of dosimetric film, and can minimize the uncertainty of the electron energy evaluated from the experimental data. The two-parametric fitting method determines the electron beam model parameters: E0, the energy of the mono-energetic and mono-directional electron source, and X0, the thickness of an aluminum layer located in front of the irradiated object. This yields baseline data on the characteristics of the electron beam that can later be applied to computer modeling of the irradiation process. Model parameters defined in the international standards (such as Ep, the most probable energy, and Rp, the practical range) can be linked with the characteristics of the two-parametric model (E0, X0), which allows the electron irradiation process to be simulated. The results obtained from the semi-empirical model were checked against a set of experimental results. The proposed two-parametric model for electron beam energy evaluation and the estimation of accuracy for computational dosimetry methods based on the developed model are discussed. - Highlights: • Experimental and computational methods of electron energy evaluation. • Development
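    The two-parameter fitting machinery can be sketched as follows. The abstract does not give the semi-empirical depth-dose model, so a stand-in Gaussian-shaped profile whose range scales with E0 and is shifted by X0 is assumed here purely to illustrate recovering (E0, X0) by least squares; the scaling constants are invented.

```python
import numpy as np

def depth_dose(z, E0, X0):
    """Stand-in depth-dose shape (illustrative only, NOT the authors' semi-empirical
    model): a Gaussian whose peak position and width scale with beam energy E0,
    shifted upstream by an aluminum-equivalent thickness X0 (all lengths in cm)."""
    R = 0.5 * E0                        # crude practical-range scaling, cm per MeV
    return np.exp(-((z + X0 - 0.6 * R) / (0.35 * R)) ** 2)

# Synthetic "wedge measurement" for E0 = 10 MeV, X0 = 0.3 cm, with noise
rng = np.random.default_rng(1)
z = np.linspace(0.0, 6.0, 60)
meas = depth_dose(z, 10.0, 0.3) + 0.01 * rng.standard_normal(z.size)

# Two-parameter least-squares fit by brute-force grid search over (E0, X0)
E_grid = np.linspace(5.0, 15.0, 201)
X_grid = np.linspace(0.0, 1.0, 101)
sse = [[np.sum((meas - depth_dose(z, E, X)) ** 2) for X in X_grid] for E in E_grid]
i, j = np.unravel_index(np.argmin(sse), (len(E_grid), len(X_grid)))
print(f"fitted E0 = {E_grid[i]:.2f} MeV, X0 = {X_grid[j]:.2f} cm")
```

    In this toy model E0 is identified by the curve width and X0 by its shift, which is why the two parameters do not trade off completely; the real method fits the published semi-empirical profile instead.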

  19. A Web-based computer system supporting information access, exchange and management during building processes

    Sørensen, Lars Schiøtt


    During the last two decades, a number of research efforts have been made in the field of computing systems related to the building construction industry. Most of the projects have focused on a part of the entire design process and have typically been limited to a specific domain. This paper presents a newly developed computer system based on the World Wide Web on the Internet. The focus is on the simplicity of the system's structure and on an intuitive and user-friendly interface...

  20. Stochastic approach for round-off error analysis in computing application to signal processing algorithms

    Vignes, J.


    Any result of an algorithm executed on a computer contains an error resulting from floating-point round-off error propagation. Furthermore, signal processing algorithms are generally performed on data that themselves contain errors. The permutation-perturbation method, also known under the name CESTAC (controle et estimation stochastique d'arrondi de calcul), is a very efficient practical method for evaluating these errors and consequently for estimating the exact significant decimal figures of any result computed by an algorithm. The stochastic approach of this method, its probabilistic proof, and the close agreement between its theoretical and practical aspects are described in this paper. [fr]
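    A simplified sketch of the stochastic idea (not the production CESTAC implementation, which perturbs the rounding mode inside the arithmetic): perturb the data by about one unit in the last place, rerun the computation several times, and estimate the number of exact significant decimal digits from the scatter of the results.

```python
import numpy as np

def cestac_digits(f, data, trials=32, seed=0):
    """Estimate the significant decimal digits of f(data) from random relative
    perturbations of roughly one ulp (a simplified sketch of the CESTAC idea)."""
    rng = np.random.default_rng(seed)
    eps = np.finfo(float).eps
    results = np.array([f(data * (1.0 + eps * rng.uniform(-1, 1, data.shape)))
                        for _ in range(trials)])
    mean, std = results.mean(), results.std(ddof=1)
    if std == 0.0:
        return 15.0                     # all samples agree to full double precision
    return float(np.log10(abs(mean) / std))

good = cestac_digits(np.sum, np.array([1.0, 2.0, 3.0]))
bad = cestac_digits(lambda a: (a[0] + a[1]) - a[2],
                    np.array([1.0, 1e-12, 1.0]))    # catastrophic cancellation
print(f"well-conditioned sum: ~{good:.0f} digits; cancellation: ~{bad:.0f} digits")
```

    The well-conditioned sum keeps nearly all of its decimal digits, while the cancellation example retains only a few, which is exactly the diagnosis the method is designed to deliver.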

  1. Computer aided process control equipment at the Karlsruhe reprocessing pilot plant, WAK

    Winter, R.; Finsterwalder, L.; Gutzeit, G.; Reif, J.; Stollenwerk, A.H.; Weinbrecht, E.; Weishaupt, M.


    A computer-aided process control system has been installed at the Karlsruhe Spent Fuel Reprocessing Plant, WAK. All necessary process control data of the first extraction cycle are collected via a data collection system and displayed in suitable ways on a screen for the operator in charge of the unit. To aid verification of the displayed data, various measurements are associated with each other using balance-type process modeling. Thus, deviations from flowsheet conditions and malfunctions of measuring equipment are easily detected. (orig.) [de]

  2. The process monitoring computer system an integrated operations and safeguards surveillance system

    Liester, N.A.


    The use of the Process Monitoring Computer System (PMCS) at the Idaho Chemical Processing Plant (ICPP) relating to Operations and Safeguards concerns is discussed. Measures taken to assure the reliability of the system data are outlined, along with the measures taken to assure the continuous availability of that data for use within the ICPP. The integration of process and safeguards information for use by the differing organizations is discussed. The PMCS successfully demonstrates the idea of remote Safeguards surveillance and the need for sharing of common information between different support organizations in an operating plant.

  3. Computational Fluid Dynamics Modelling of Hydraulics and Sedimentation in Process Reactors During Aeration Tank Settling

    Dam Jensen, Mette; Ingildsen, Pernille; Rasmussen, Michael R.


    Aeration tank settling is a patented control method allowing settling in the process tank during high hydraulic load. It has been applied at several wastewater treatment plants using the present design of the process tanks. Some process tank designs have proven more effective than others. To improve the design of less effective plants, computational fluid dynamics (CFD) modelling of hydraulics and sedimentation has been applied. The paper discusses the results at one particular plant experiencing problems with partial short-circuiting of the inlet...

  4. Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms.

    Royer, Audrey S; He, Bin


    In a brain-computer interface (BCI) utilizing a process control strategy, the signal from the cortex is used to control the fine motor details normally handled by other parts of the brain. In a BCI utilizing a goal selection strategy, the signal from the cortex is used to determine the overall end goal of the user, and the BCI controls the fine motor details. A BCI based on goal selection may be an easier and more natural system than one based on process control. Although goal selection in theory may surpass process control, the two have never been directly compared, as we are reporting here. Eight young healthy human subjects participated in the present study, three trained and five naïve in BCI usage. Scalp-recorded electroencephalograms (EEG) were used to control a computer cursor during five different paradigms. The paradigms were similar in their underlying signal processing and used the same control signal. However, three were based on goal selection, and two on process control. For both the trained and naïve populations, goal selection had more hits per run, was faster, more accurate (for seven out of eight subjects) and had a higher information transfer rate than process control. Goal selection outperformed process control in every measure studied in the present investigation.
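    The information transfer rates compared in such BCI studies are conventionally computed with Wolpaw's formula, which treats each selection as a symmetric channel over the N targets. A sketch follows; the target count and accuracy are illustrative, not the study's measured values.

```python
import math

def wolpaw_itr_bits(n_targets, accuracy):
    """Wolpaw information transfer rate in bits per selection for N targets at
    accuracy P (assumes 0 < P <= 1; errors uniform over the N-1 wrong targets)."""
    N, P = n_targets, accuracy
    if P >= 1.0:
        return math.log2(N)
    return (math.log2(N) + P * math.log2(P)
            + (1.0 - P) * math.log2((1.0 - P) / (N - 1)))

# e.g. a 4-goal selection paradigm at 90% accuracy
print(f"{wolpaw_itr_bits(4, 0.90):.3f} bits/selection")
```

    Dividing bits per selection by the time per selection gives the bits-per-minute figures usually reported; this is how goal selection's higher accuracy and speed both feed into its higher ITR.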

  5. Anatomic evaluation of the xiphoid process with 64-row multidetector computed tomography

    Akin, Kayihan; Kosehan, Dilek; Topcu, Adem; Koktener, Asli


    The aim of this study was to evaluate the interindividual variations of the xiphoid process in a wide adult group using 64-row multidetector computed tomography (MDCT). Included in the study were 500 consecutive patients who underwent coronary computed tomography angiography. Multiplanar reconstruction (MPR), maximum intensity projection (MIP) images on coronal and sagittal planes, and three-dimensional volume rendering (VR) reconstruction images were obtained and used for the evaluation of the anatomic features of the xiphoid process. The xiphoid process was present in all patients. The xiphoid process was deviated ventrally in 327 patients (65.4%). In 11 of these 327 patients (2.2%), ventral curving at the end of the xiphoid process resembled a hook. The xiphoid process was aligned in the same axis as the sternal corpus in 166 patients (33.2%). The tip of the xiphoid process was curved dorsally like a hook in three patients (0.6%). In four patients (0.8%), the xiphoid process exhibited a reverse S shape. Xiphoidal endings were single in 313 (62.6%) patients, double in 164 (32.8%), or triple in 23 (4.6%). Ossification of the cartilaginous xiphoid process was fully completed in 254 patients (50.8 %). In total, 171 patients (34.2%) had only one xiphoidal foramen and 45 patients (9%) had two or more foramina. Sternoxiphoidal fusion was present in 214 of the patients (42.8%). Significant interindividual variations were detected in the xiphoid process. Excellent anatomic evaluation capacity of MDCT facilitates the detection of variations of the xiphoid process as well as the whole ribcage. (orig.)

  6. Study on 'Safety qualification of process computers used in safety systems of nuclear power plants'

    Bertsche, K.; Hoermann, E.


    The study aims at developing safety standards for hardware and software of computer systems which are increasingly used also for important safety systems in nuclear power plants. The survey of the present state-of-the-art of safety requirements and specifications for safety-relevant systems and, additionally, for process computer systems has been compiled from national and foreign rules. In the Federal Republic of Germany the KTA safety guides and the BMI/BMU safety criteria have to be observed. For the design of future computer-aided systems in nuclear power plants it will be necessary to apply the guidelines in [DIN-880] and [DKE-714] together with [DIN-192]. With the aid of a risk graph the various functions of a system, or of a subsystem, can be evaluated with regard to their significance for safety engineering. (orig./HP) [de]

  7. Computer-aided modeling for efficient and innovative product-process engineering

    Heitzig, Martina

    Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy and water. This trend is set to continue due to the substantial benefits computer-aided methods provide. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms and application modes. The development of the models required for the systems under investigation tends to be a challenging, time-consuming and therefore costly task. Several case studies in chemical and biochemical engineering have been solved to illustrate the application of the generic modelling methodology, the computer-aided modelling framework and the developed software tool.

  8. Application of computational fluid dynamics for the optimization of homogenization processes in wine tanks

    Müller Jonas


    Mixing processes in modern wine-making occur repeatedly during fermentation (e.g. yeast addition, wine fermentation additives) as well as after fermentation (e.g. blending, dosage, sulfur additions). In large fermentation vessels, or when mixing fluids of different viscosities, an inadequate mixing process can lead to considerable costs and problems (inhomogeneous product, development of layers in the tank, waste of energy, clogging of filters). Considering advancements in computational fluid dynamics (CFD) in the last few years and the computational power of today's computers, most large-scale wineries would be able to conduct mixing simulations using their own tank and agitator configurations in order to evaluate their efficiency and the necessary power input based on mathematical modeling. Regardless, most companies still rely on estimations and empirical values which are neither validated nor optimized. The free open-source CFD software OpenFOAM (v.2.3.1) is used to simulate flows in wine tanks. Different agitator types, propeller geometries and rotational speeds can be modeled and compared with each other in the process. Moreover, the fluid properties of different wine additives can be modeled. During optical post-processing with the open-source software ParaView (v.4.3), the progression of homogenization can be visualized and poorly mixed regions of the tank are revealed.

  9. Radiometric installations for automatic control of industrial processes and some possibilities of the specialized computers application

    Kuzino, S.; Shandru, P.


    It is noted that the application of radioisotope devices in circuits for the automation of industrial processes makes it possible to obtain on-line information about parameters of those processes. This information, passed to the computer controlling the process, makes it possible to attain and maintain optimum technological parameters. Some elements of the design of the automation system are presented from the point of view of the radiometric devices: tuning and calibration of the radiometric devices so as to obtain a digital answer on-line with the preset accuracy and trustworthiness levels for delivery to the controlling computer; determination of the system's reaction on the basis of preset statistical criteria; and development, on the basis of data obtained from the computer, of an algorithm for functional checking of the radiometric devices' characteristics (stability and reproducibility of readings in the operating regime), as well as determination of the threshold value of an answer, depending on the measured parameter. [ru]

  10. Computationally based methodology for reengineering the high-level waste planning process at SRS

    Paul, P.K.; Gregory, M.V.; Wells, M.N.


    The Savannah River Site (SRS) has started processing its legacy of 34 million gallons of high-level radioactive waste into its final disposable form. The SRS high-level waste (HLW) complex consists of 51 waste storage tanks, 3 evaporators, 6 waste treatment operations, and 2 waste disposal facilities. It is estimated that processing wastes to clean up all tanks will take 30+ yr of operation. Integrating all the highly interactive facility operations through the entire life cycle in an optimal fashion, while meeting all the budgetary, regulatory, and operational constraints and priorities, is a complex and challenging planning task. The waste complex operating plan for the entire time span is periodically published as an SRS report. A computationally based integrated methodology has been developed that has streamlined the planning process while showing how to run the operations at economically and operationally optimal conditions. The integrated computational model replaced a host of disconnected spreadsheet calculations and the analysts' trial-and-error solutions using various scenario choices. This paper presents the important features of the integrated computational methodology and highlights the parameters that are core components of the planning process.


    M. Kasemann

    Overview: During the past three months, activities focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  12. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N


    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of hybrid GPU/central processing unit (CPU) and full GPU implementations of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
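    The SP2 recursion itself is compact: rescale H so the spectrum lies in [0, 1] with occupied states near 1, then repeatedly apply X² or 2X - X², choosing the branch that drives the trace toward the electron count. A dense-numpy sketch follows (a toy serial version of the published algorithm, not the paper's GPU code; the Hamiltonian is random for illustration).

```python
import numpy as np

def sp2_density_matrix(H, n_occ, iterations=60):
    """Second-order spectral projection (SP2) purification: returns a density
    matrix P with trace(P) = n_occ using only matrix-matrix multiplications."""
    # Estimate spectral bounds with Gershgorin circles
    r = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
    emin, emax = np.min(np.diag(H) - r), np.max(np.diag(H) + r)
    X = (emax * np.eye(len(H)) - H) / (emax - emin)   # occupied states map toward 1
    for _ in range(iterations):
        X2 = X @ X
        t, t2 = np.trace(X), np.trace(X2)
        if abs(t2 - n_occ) < abs(2 * t - t2 - n_occ):
            X = X2                # trace too high: contract eigenvalues toward 0
        else:
            X = 2 * X - X2        # trace too low: push eigenvalues toward 1
        if np.linalg.norm(X @ X - X) < 1e-12:
            break                 # converged: X is numerically idempotent
    return X

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                 # toy symmetric "Hamiltonian"
P = sp2_density_matrix(H, n_occ=3)
print("trace =", round(np.trace(P), 6))
```

    Because each step is a generalized matrix-matrix multiply, the whole recursion maps directly onto DGEMM/SGEMM calls, which is what makes the GPU implementation in the paper so effective.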

  13. Visual perception can account for the close relation between numerosity processing and computational fluency.

    Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng


    Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty four third- to fifth-grade children (220 boys and 204 girls, 8.0-11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More important, hierarchical multiple regression showed that figure matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance.

  14. Accuracy of detecting stenotic changes on coronary cineangiograms using computer image processing

    Sugahara, Tetsuo; Kimura, Koji; Maeda, Hirofumi.


    To accurately interpret stenotic changes on coronary cineangiograms, an automatic method of detecting stenotic lesions using computer image processing was developed. First, the artery was traced. The vessel edges were then determined by unilateral Gaussian fitting, and stenotic changes were detected on the basis of a reference diameter estimated by the Hough transform. This method was evaluated in 132 segments of 27 arteries in 18 patients. Three observers carried out visual interpretation and computer-aided interpretation. The rate of detection by visual interpretation was 6.1, 28.8 and 20.5%, and by computer-aided interpretation, 39.4, 39.4 and 45.5%. With computer-aided interpretation, the agreement between any two observers on lesions and non-lesions was 40.2% and 59.8%, respectively. Visual interpretation therefore tended to underestimate stenotic changes on coronary cineangiograms, and we think that computer-aided interpretation increases the reliability of diagnosis of coronary cineangiograms. (author)
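    Once the fitted vessel edges yield a diameter profile and a reference diameter, the percent-stenosis calculation itself is simple. A sketch with invented numbers (the paper estimates the reference via the Hough transform; here a median of the unobstructed ends stands in):

```python
import numpy as np

def percent_stenosis(diameters, reference):
    """Percent diameter stenosis from a vessel diameter profile (mm) and a
    reference diameter for the unobstructed vessel."""
    return (1.0 - np.min(diameters) / reference) * 100.0

profile = np.array([3.1, 3.0, 2.9, 1.5, 1.2, 1.6, 2.9, 3.0])   # mm, illustrative
ref = np.median(np.concatenate([profile[:3], profile[-2:]]))   # proximal/distal segments
print(f"{percent_stenosis(profile, ref):.0f}% stenosis")
```

    Automating both the edge detection and this ratio is what removes the observer-dependent underestimation reported in the study.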

  15. NADAC and MERGE: computer codes for processing neutron activation analysis data

    Heft, R.E.; Martin, W.E.


    Absolute disintegration rates of specific radioactive products induced by neutron irradiation of a sample are determined by spectrometric analysis of gamma-ray emissions. Nuclide identification and quantification are carried out by the complex computer code GAMANAL (described elsewhere). The output of GAMANAL is processed by NADAC, a computer code that converts the data on observed disintegration rates to data on the elemental composition of the original sample. Computations by NADAC are on an absolute basis in that stored nuclear parameters are used, rather than comparison of the observed disintegration rate with the rate obtained by concurrent irradiation of elemental standards. The NADAC code provides for the computation of complex cases, including interrupted irradiations, parent-daughter decay situations where the daughter may also be produced independently, nuclides with half-lives very short compared with the counting interval, and interference by competing neutron-induced reactions. The NADAC output consists of a printed report, which summarizes analytical results, and a card-image file, which can be used as input to another computer code, MERGE. The purpose of MERGE is to combine the results of multiple analyses and produce a single final answer, based on all available information, for each element found.
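    For the simplest case (a single uninterrupted irradiation, no decay chains or interferences), the absolute computation follows the standard activation equation. The function below is a hedged sketch of that textbook relation, not NADAC's code, and the parameter values in the usage are invented.

```python
import math

N_A = 6.02214076e23  # Avogadro constant, atoms/mol

def element_mass(A_obs, t_irr, t_decay, half_life,
                 sigma_barn, flux, molar_mass, abundance):
    """Mass (g) of a target element inferred from an observed
    disintegration rate A_obs (Bq), using stored nuclear parameters
    on an absolute basis (no co-irradiated elemental standard).
    Times in seconds, cross-section in barns, flux in n/(cm^2 s)."""
    lam = math.log(2.0) / half_life
    sigma = sigma_barn * 1e-24                     # barn -> cm^2
    per_atom = sigma * flux * (1.0 - math.exp(-lam * t_irr))
    n_target = A_obs / (per_atom * math.exp(-lam * t_decay))
    return n_target * molar_mass / (N_A * abundance)
```

    For interrupted irradiations or parent-daughter production, the saturation factor above is replaced by the appropriate sums, which is exactly the bookkeeping NADAC automates.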

  16. Computational simulation of the biomass gasification process in a fluidized bed reactor

    Rojas Mazaira, Leorlen Y.; Gamez Rodriguez, Abel; Andrade Gregori, Maria Dolores; Armas Cardona, Raul


    In an agro-industrial country such as Cuba, large quantities of crop residues, such as rice husks and sugar-cane bagasse, are produced, in addition to forest residues from wooded areas. Gasification is an attractive application for all this biomass because of its high efficiency and positive environmental impact. Computer simulation is a useful tool in research on gasifier operating parameters, because it reduces both the number of experiments required and the cost of the research. This work emphasizes the importance of computer simulation for predicting the hydrodynamic behavior of the fluidized bed and the biomass combustion process for different residues and different operating conditions. A CFD model of the combustion process in a fluidized-bed biomass gasifier is presented, and the hydrodynamic parameters of the multiphase flow are characterized by means of a simulator that allows the reactor geometry to be configured and varied, as well as the influence of quantities such as velocity, sand particle diameter, and equivalence ratio to be studied. Experimental results in cylindrical channels are presented to complement the 2D computer simulation study. (author)

  17. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    Kawasaki, Shoji; Nakamura, Kazuo; Nakamura, Yukio; Hiraki, Naoharu; Toi, Kazuo


    A data processing system was designed and constructed for analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to store the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer can be connected as an I/O device. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown on one of the I/O devices. The results of a test run showed good performance. (Kato, T.)

  18. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    Sobieszczanski-Sobieski, Jaroslaw


    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
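    The cellular-automata idea invoked above, complex behaviour emerging from many very simple models updated in parallel, can be illustrated with an elementary one-dimensional automaton. This is a generic sketch, not code from the paper; each cell's next state depends only on its local neighbourhood, so the update is intrinsically parallel and scales with processor count.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton
    (periodic boundary). Each cell's next state depends only on its
    3-cell neighbourhood, so all cells can be updated in parallel."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 16 + [1] + [0] * 15   # single seed cell
for _ in range(8):
    row = step(row)
```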

  1. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Dao, Tien Tuan


    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new, fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under body loading conditions. Finally, the cell-level model includes the bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function at multiple length scales, and provides new informative data for clinical decision support and industrial applications.
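    At the tissue/cell interface, strain-adaptive remodelling rules of the classic Huiskes type are commonly used in this kind of workflow. The sketch below illustrates that general idea under invented parameter values; it is not the model of this paper.

```python
def remodel_step(density, stimulus, setpoint=0.0036, rate=1.0,
                 dt=1.0, lazy_zone=0.1, rho_min=0.01):
    """One explicit time step of a strain-adaptive remodelling rule:
    bone density grows under overload, resorbs under disuse, and is
    left unchanged inside a 'lazy zone' around the setpoint stimulus.
    All parameter values here are illustrative."""
    error = stimulus - setpoint
    if abs(error) <= lazy_zone * setpoint:
        return density                     # within the lazy zone
    return max(rho_min, density + rate * error * dt)
```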

  2. Optimal Selection Method of Process Patents for Technology Transfer Using Fuzzy Linguistic Computing

    Gangfeng Wang


    Under the open innovation paradigm, technology transfer of process patents is one of the most important mechanisms for manufacturing companies to implement process innovation and enhance their competitive edge. To achieve promising technology transfers, the feasibility of process patents must be evaluated and the most appropriate patent selected according to the actual manufacturing situation. Hence, this paper proposes an optimal selection method for process patents using multiple-criteria decision making and 2-tuple fuzzy linguistic computing to avoid information loss during the evaluation-integration process. An evaluation index system for the technology transfer feasibility of process patents is designed first. Then, the fuzzy linguistic computing approach is applied to aggregate the weight evaluations for each criterion and its corresponding subcriteria. Furthermore, performance ratings for the subcriteria and fuzzy aggregated ratings of the criteria are calculated to obtain the overall technology-transfer feasibility of the patent alternatives. Finally, a case study of aeroengine turbine manufacturing is presented to demonstrate the applicability of the proposed method.
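    The 2-tuple representation that prevents information loss during aggregation can be sketched as follows. The five-term linguistic scale, the ratings, and the weights are invented for illustration.

```python
SCALE = ("VL", "L", "M", "H", "VH")   # hypothetical 5-term scale

def to_2tuple(beta, scale=SCALE):
    """Delta: a value beta in [0, g] (g = len(scale) - 1) becomes
    (s_i, alpha) with alpha in [-0.5, 0.5), so aggregation results
    keep their fractional part instead of being rounded away."""
    i = int(round(beta))
    return scale[i], beta - i

def from_2tuple(label, alpha, scale=SCALE):
    """Delta^-1: back to a numerical value."""
    return scale.index(label) + alpha

# Weighted aggregation of linguistic ratings (indices into SCALE).
ratings = [3.0, 4.0, 2.0]             # H, VH, M
weights = [0.5, 0.3, 0.2]
beta = sum(r * w for r, w in zip(ratings, weights))
label, alpha = to_2tuple(beta)
```

    Here beta is approximately 3.1, so the aggregate is ("H", +0.1) rather than a rounded-off "H".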

  3. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information.

    Aimone, James Bradley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Betty, Rita [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, generating substantial impact in the neuroscience and neural-computing communities. This work could benefit applications in machine learning and other analysis activities.

  4. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.


    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
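    The wait/awaken pattern claimed above maps onto a standard condition-variable idiom. The sketch below is a generic illustration using Python threads, not the PAMI implementation; the class and event names are invented.

```python
import threading

class Context:
    """Toy context: an 'advance' thread sleeps when no events are
    pending and is awakened when one is posted."""
    def __init__(self):
        self.pending = []
        self.processed = []
        self.cond = threading.Condition()

    def post(self, event):
        with self.cond:
            self.pending.append(event)
            self.cond.notify()            # awaken the waiting thread

    def advance(self):
        with self.cond:
            while not self.pending:       # no actionable events: wait
                self.cond.wait()
            self.processed.append(self.pending.pop(0))

ctx = Context()
worker = threading.Thread(target=ctx.advance)
worker.start()
ctx.post("recv-done")
worker.join()
print(ctx.processed)                      # prints ['recv-done']
```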

  5. Integration of a browser based operator manual in the system environment of a process computer system

    Weber, Andreas; Erfle, Robert; Feinkohl, Dirk


    The integration of a browser-based operator manual into the system environment of a process computer system optimizes operating procedures in the control room and enhances safety through faster, error-free access to the manual contents. Several requirements of the authorities have to be fulfilled: the operating manual has to be available as a hard copy, the format has to be true to the original, protection against manipulation has to be provided, the content of the browser-based version and the hard copy have to be identical, and the display presentation has to be consistent with ergonomic principles. The integration of the on-line manual into the surveillance process computer system provides the operator with the comments relevant to each surveillance signal. The described integration of the on-line manual optimizes the operator's everyday work with respect to ergonomics and safety (human performance).

  6. Efficient Processing of Continuous Skyline Query over Smarter Traffic Data Stream for Cloud Computing

    Wang Hanning


    The analysis and processing of multisource real-time transportation data streams lay the foundation for a smart transportation system's sensing, interconnection, integration, and real-time decision making. The strong computing power and effective mass-data management offered by cloud computing make it feasible to handle continuous Skyline queries over massive, distributed, uncertain transportation data streams. In this paper, we present a layered architecture for smart transportation data processing and formalize the continuous Skyline query over smart transportation data. In addition, we propose the mMR-SUDS algorithm (a Skyline query algorithm for uncertain transportation stream data based on micro-batch MapReduce), built on sliding-window division and this architecture.
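    The core Skyline operation, retaining only the readings not dominated in every dimension, over a sliding window can be sketched as follows. The window size and the two "smaller is better" attributes are invented for illustration.

```python
from collections import deque

def skyline(points):
    """Points not dominated by any other: q dominates p if q is <= p
    in every dimension and strictly < in at least one (min is better)."""
    def dominates(q, p):
        return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points)]

window = deque(maxlen=4)                       # sliding window over the stream
stream = [(3, 9), (5, 4), (4, 8), (9, 1), (2, 7), (6, 3)]
for reading in stream:                         # e.g. (travel_time, congestion)
    window.append(reading)
result = skyline(list(window))
```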

  7. A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing

    Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.


    Cloud computing is establishing itself worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We substituted the demanding user-side hardware and software requirements by remote access to high-performance grid-computing facilities. As a result, data processing can be done quasi in real time, ubiquitously controlled via the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective of allowing the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options, as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time such as manual picking in prestack volumes or velocity spectra. Due to its high level of automation, CRS stacking

  8. PCM- data processing - description of the program TAPEDUMP for the computer HP-2100 S

    Ziegler, G.


    The assembler program TAPEDUMP preprocesses PCM data for further processing on a large computer and writes them, via an output routine, to magnetic tape with a measurement-specific header. During preprocessing the data can be reduced by selection and averaging. In the case of certain reading errors the data are discarded, but synchronization is reestablished. (orig.)

  9. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    McLaughlin, P.K.


    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  10. Transaction processing in the common node of a distributed function laboratory computer system

    Stubblefield, F.W.; Dimmler, D.G.


    A computer network architecture consisting of a common node processor for managing peripherals and files and a number of private node processors for laboratory experiment control is briefly reviewed. Central to the problem of private node-common node communication is the concept of a transaction. The collection of procedures and the data structure associated with a transaction are described. The common node properties assigned to a transaction and procedures required for its complete processing are discussed. (U.S.)

  11. A computer interface for processing multi-parameter data of multiple event types

    Katayama, I.; Ogata, H.


    A logic circuit called a 'Raw Data Processor' (RDP), which functions as an interface between ADCs and a PDP-11 computer, has been developed at RCNP, Osaka University for general use. It enables simultaneous data processing for up to 16 event types, and an arbitrary combination of up to 14 ADCs can be assigned to each event type by means of a pinboard matrix. The details of the RDP and its application are described. (orig.)

  12. An integrated computer aided system for integrated design of chemical processes

    Gani, Rafiqul; Hytoft, Glen; Jaksland, Cecilia


    In this paper, an Integrated Computer Aided System (ICAS), which is particularly suitable for solving problems related to integrated design of chemical processes, is presented. ICAS features include a model generator (generation of problem specific models including model simplification and model ...... form the basis for the toolboxes. The available features of ICAS are highlighted through a case study involving the separation of binary azeotropic mixtures. (C) 1997 Elsevier Science Ltd....

  13. Computer processing of nuclear material data in the German Democratic Republic - as of August 1980

    Burmester, M.; Helming, M.


    A description is given of the computer-based processing of safeguards information within the framework of the State System of Accounting for and Control of Nuclear Material. The software includes the programmes ICR, PILMBR, LISTE, POL, DELE and SIP, which produce the required reports to the IAEA on magnetic tape and in the form of printouts, and provide a range of relevant information and data that essentially facilitate the fulfilment of national obligations in the field of nuclear material control. (author)

  14. Possibilities and importance of using computer games and simulations in educational process

    Danilović Mirčeta S.


    The paper discusses whether it is possible and appropriate to use simulations (simulation games) and traditional games in the process of education. It is stressed that the terms "game" and "simulation" can and should be taken in a broader sense, although they are chiefly investigated herein as video/computer games and simulations. Any activity combining the properties of a game (competition, rules, players) and the properties of a simulation (i.e. operational presentation of reality) should be underst...

  15. The Strategy Blueprint: A Strategy Process Computer-Aided Design Tool

    Aldea, Adina Ioana; Febriani, Tania Rizki; Daneva, Maya; Iacob, Maria Eugenia


    Strategy has always been a main concern of organizations because it dictates their direction, and therefore determines their success. Thus, organizations need to have adequate support to guide them through their strategy formulation process. The goal of this research is to develop a computer-based tool, known as ‘the Strategy Blueprint’, consisting of a combination of nine strategy techniques, which can help organizations define the most suitable strategy, based on the internal and external f...

  16. Automation of a cryogenic facility by commercial process-control computer

    Sondericker, J.H.; Campbell, D.; Zantopp, D.


    To ensure that Brookhaven's superconducting magnets are reliable and their field quality meets accelerator requirements, each magnet is pre-tested under operating conditions after construction. MAGCOOL, the production magnet test facility, was designed to perform these tests, with the capacity to test ten magnets per five-day week. This paper describes the control aspects of MAGCOOL and the advantages afforded the designers by the implementation of a commercial process control computer system.

  17. Birth/birth-death processes and their computable transition probabilities with biological applications.

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A


    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
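    For context, the brute-force baseline the authors contrast, evaluating finite-time transition probabilities on a truncated state space, can be sketched for an ordinary linear birth-death process by integrating the Kolmogorov forward equations numerically. The rates, truncation level, and step count below are illustrative assumptions; this is emphatically not the paper's continued-fraction algorithm.

```python
import numpy as np

def bd_transition_matrix(lam, mu, t, n_max=60, steps=4000):
    """Finite-time transition probabilities of a linear birth-death
    process, by explicit Euler integration of P'(t) = P Q on a state
    space truncated at n_max -- the kind of expensive baseline that
    the continued-fraction method is designed to replace."""
    Q = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            Q[n, n + 1] = lam * n          # birth rate from state n
        if n > 0:
            Q[n, n - 1] = mu * n           # death rate from state n
        Q[n, n] = -Q[n].sum()              # rows of a generator sum to 0
    P = np.eye(n_max + 1)
    dt = t / steps
    for _ in range(steps):
        P = P + dt * (P @ Q)
    return P

P = bd_transition_matrix(lam=0.5, mu=0.3, t=1.0)
```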

  18. Mission Management Computer and Sequencing Hardware for RLV-TD HEX-01 Mission

    Gupta, Sukrat; Raj, Remya; Mathew, Asha Mary; Koshy, Anna Priya; Paramasivam, R.; Mookiah, T.


    The Reusable Launch Vehicle-Technology Demonstrator Hypersonic Experiment (RLV-TD HEX-01) mission posed some unique challenges in the design and development of avionics hardware. This work presents the details of mission-critical avionics hardware, mainly the Mission Management Computer (MMC) and the sequencing hardware. The Navigation, Guidance and Control (NGC) chain for RLV-TD is dual redundant with cross-strapped Remote Terminals (RTs) interfaced through a MIL-STD-1553B bus. MMC is the Bus Controller on the 1553 bus and performs the functions of GPS-aided navigation, guidance, digital autopilot and sequencing for the RLV-TD launch vehicle at different periodicities (10, 20, 500 ms). Digital autopilot execution in MMC with a periodicity of 10 ms (in the ascent phase) was introduced for the first time and successfully demonstrated in the flight. MMC is built around the Intel i960 processor and has inbuilt fault-tolerance features like ECC for memories. Fault Detection and Isolation schemes are implemented to isolate a failed MMC. The sequencing hardware comprises the Stage Processing System (SPS) and the Command Execution Module (CEM). SPS is an RT on the 1553 bus which receives sequencing and control related commands from the MMCs and posts them to downstream modules, after proper error handling, for final execution. SPS is designed as a high-reliability system by incorporating various fault-tolerance and fault-detection features. CEM is a relay-based module for sequence command execution.
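    The multi-rate execution described (10, 20 and 500 ms periodicities driven from a common base tick) follows the usual rate-group scheduling pattern. The sketch below is a generic illustration; the task names and the 10 ms base tick are assumptions for the example, not flight software.

```python
def due_tasks(tick_ms, periods=None):
    """Rate-group dispatch: which tasks run on a given scheduler tick
    (ms), assuming every period is a multiple of the base tick."""
    if periods is None:
        # Hypothetical rate groups echoing the 10/20/500 ms periodicities.
        periods = {"autopilot": 10, "guidance_nav": 20, "sequencing": 500}
    return [name for name, p in sorted(periods.items(), key=lambda kv: kv[1])
            if tick_ms % p == 0]

print(due_tasks(10))    # prints ['autopilot']
print(due_tasks(500))   # all three rate groups align on this tick
```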

  19. A review of combined experimental and computational procedures for assessing biopolymer structure-process-property relationships.

    Gronau, Greta; Krishnaji, Sreevidhya T; Kinahan, Michelle E; Giesa, Tristan; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J


    Tailored biomaterials with tunable functional properties are desirable for many applications ranging from drug delivery to regenerative medicine. To improve the predictability of biopolymer materials functionality, multiple design parameters need to be considered, along with appropriate models. In this article we review the state of the art of synthesis and processing related to the design of biopolymers, with an emphasis on the integration of bottom-up computational modeling in the design process. We consider three prominent examples of well-studied biopolymer materials - elastin, silk, and collagen - and assess their hierarchical structure, intriguing functional properties and categorize existing approaches to study these materials. We find that an integrated design approach in which both experiments and computational modeling are used has rarely been applied for these materials due to difficulties in relating insights gained on different length- and time-scales. In this context, multiscale engineering offers a powerful means to accelerate the biomaterials design process for the development of tailored materials that suit the needs posed by the various applications. The combined use of experimental and computational tools has a very broad applicability not only in the field of biopolymers, but can be exploited to tailor the properties of other polymers and composite materials in general. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Cobit system in the audit processes of the systems of computer systems

    Julio Jhovany Santacruz Espinoza


    This research was carried out to show the benefits of using the COBIT framework in the auditing of computer systems. The problem addressed is: how does the use of COBIT affect the audit process in institutions? The main objective is to identify the impact of using COBIT on the auditing process for computer systems within both public and private organizations. To achieve the stated objectives, the research first develops the conceptualization of key terms for an easy understanding of the subject. In conclusion, COBIT provides a methodology that uses information from IT departments to identify the Information Technology (IT) resources specified in the framework, such as files, programs, computer networks, and the personnel who use or manipulate the information, with the purpose of providing the information that the organization or company requires to achieve its objectives.