Sample records for netcentric computing systems

  1. A WiMAX Networked UAV Telemetry System for Net-Centric Remote Sensing and Range Surveillance Project (United States)

    National Aeronautics and Space Administration — A WiMAX Networked UAV Telemetry System (WNUTS) is designed for net-centric remote sensing and launch range surveillance applications. WNUTS integrates a MIMO powered...

  2. Autonomous Information Unit for Fine-Grain Data Access Control and Information Protection in a Net-Centric System (United States)

    Chow, Edward T.; Woo, Simon S.; James, Mark; Paloulian, George K.


    As communication and networking technologies advance, networks will become highly complex and heterogeneous, interconnecting different network domains. There is a need to provide user authentication and data protection in order to further facilitate critical mission operations, especially in tactical and mission-critical net-centric networking environments. The Autonomous Information Unit (AIU) technology was designed to provide fine-grain data access and user control in a net-centric system-testing environment to meet these objectives. The AIU is a fundamental capability designed to enable fine-grain data access and user control in cross-domain networking environments, where an AIU is composed of the mission data, metadata, and policy. An AIU provides a mechanism to establish trust among deployed AIUs based on recombining shared secrets, and it authenticates and verifies users with a username, X.509 certificate, enclave information, and classification level. The AIU achieves data protection by (1) splitting data into multiple information pieces using Shamir's secret sharing algorithm, (2) encrypting each individual information piece using military-grade AES-256 encryption, and (3) randomizing the position of the encrypted data using the unbiased, memory-efficient, in-place Fisher-Yates shuffle. It therefore becomes virtually impossible for attackers to compromise the data, since they would need to obtain all distributed information pieces as well as the encryption key and the random seeds to properly arrange the data. In addition, since policy can be associated with data in the AIU, different user access and data control strategies can be included. The AIU technology can greatly enhance information assurance and security management in bandwidth-limited and ad hoc net-centric environments. In addition, AIU technology is applicable to general complex network domains and applications where distributed user authentication and data protection are
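    The split-encrypt-shuffle pipeline described in this abstract can be sketched as follows. This is a simplified illustration, not the AIU implementation: n-of-n XOR splitting stands in for Shamir's k-of-n secret sharing, the AES-256 encryption step is omitted, and all names are invented.

```python
import secrets
import random

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_shares(data: bytes, n: int) -> list:
    """n-of-n XOR splitting: a simplified stand-in for Shamir's secret sharing."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:
        last = xor_bytes(last, s)       # XOR of all shares recovers the data
    return shares + [last]

def permute(items: list, seed: int) -> list:
    """Unbiased in-place Fisher-Yates shuffle driven by a shared random seed."""
    items = list(items)
    rng = random.Random(seed)
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)
        items[i], items[j] = items[j], items[i]
    return items

def recombine(shuffled: list, seed: int) -> bytes:
    """Recompute the permutation from the seed, invert it, XOR the shares."""
    perm = permute(list(range(len(shuffled))), seed)
    ordered = [None] * len(shuffled)
    for pos, original in enumerate(perm):
        ordered[original] = shuffled[pos]
    out = ordered[0]
    for s in ordered[1:]:
        out = xor_bytes(out, s)
    return out

secret = b"mission data"
shares = split_shares(secret, 4)
scattered = permute(shares, seed=42)    # attacker needs all pieces AND the seed
assert recombine(scattered, seed=42) == secret
```

    Without the seed, the share positions cannot be restored; without every share, the XOR cannot be completed — mirroring the abstract's point that all pieces, the key, and the seeds are needed.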

  3. Net-Centric Warfare 2.0: Cloud Computing and the New Age of War (United States)


    program that is as pervasive as the Microsoft equivalents for the PC. As of 2009, there is no way to know who will win the cloud computing...the Employees (2002), 80; Andrew Wenger, "Data Protection with SaaS," Communications News 45, no. 9...accessed. Wenger, Andrew. 2008. "Data Protection with SaaS." Communications News 45 (9): 30. Wilson, Greg. "EC2: Commoditized Computing." January 5

  4. A Reference Framework of Netcentric Principles for NEC Concept Development and Experimentation

    NARCIS (Netherlands)

    Keus, H.E.; Jense, G.J.


    The starting point for the development of future Network Enabled Capabilities (NEC) is a well-founded description of the operational environment, net-centric concepts of operations, and net-centric systems. We have started to define a reference framework for NEC concept development to help guide

  5. AN-CASE NET-CENTRIC modeling and simulation (United States)

    Baskinger, Patricia J.; Chruscicki, Mary Carol; Turck, Kurt


    The objective of mission training exercises is to immerse trainees in an environment that enables them to train like they would fight. The integration of modeling and simulation environments that can seamlessly leverage Live systems and Virtual or Constructive models (LVC) as they become available offers a flexible and cost-effective solution for extending the "war-gaming" environment to a realistic mission experience while evolving the development of the net-centric enterprise. From concept to full production, the impact of new capabilities on the infrastructure and concept of operations can be assessed in the context of the enterprise, while also exposing those capabilities to the warfighter. Training is extended to tomorrow's tools, processes, and Tactics, Techniques, and Procedures (TTPs). This paper addresses the challenges of a net-centric modeling and simulation environment that is capable of representing a net-centric enterprise. An overview of the Air Force Research Laboratory's (AFRL) Airborne Networking Component Architecture Simulation Environment (AN-CASE) is provided, as well as a discussion of how it is being used to assess technologies for the purpose of experimenting with new infrastructure mechanisms that enhance the scalability and reliability of the distributed mission operations environment.

  6. The JSpOC Mission System (JMS) Common Data Model: Foundation for Net-Centric Interoperability for Space Situational Awareness (United States)

    Hutchison, M.; Kolarik, K.; Waters, J.


    The space situational awareness (SSA) data we access and use through existing SSA systems is largely provided in formats which cannot be readily understood by other systems (SSA or otherwise) without translation. As a result, while the data is useful for some known set of users, for other users it is not discoverable (there is no way to know it is there), accessible (even if you knew it existed, there is no way to electronically obtain it), or machine-understandable (even with access, the data exists in a format which cannot be readily ingested by your existing systems). Much of this existing data is unstructured, stored in non-standard formats which feed legacy systems. Data terms are not always unique, and calculations performed using legacy functions plugged into a service-oriented backbone can produce inconsistent results. The promise of data which is interoperable across systems and applications depends on a common data model as an underlying foundation for sharing information on a machine-to-machine (M2M) basis. M2M interoperability is fundamental to performance, reducing or eliminating time-consuming translation and accelerating delivery to end users for final expert human analysis in support of mission fulfillment. A data model is common when it can be used by multiple programs and projects within a domain (e.g., C2 SSA). Model construction begins with known requirements and includes the development of conceptual and logical representations of the data. The final piece of the model is an implementable physical representation (e.g., an XML schema) which can be used by developers to build working software components and systems. The JMS Common Data Model v1.0 was derived over six years from the National SSA Mission Threads under the direction of AFSPC/A5CN. The subsequent model became the A5CN-approved JMS Requirements Model. The resulting logical and physical models have been registered in the DoD Metadata Registry under the C2 SSA Namespace and will be made available
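    The conceptual-to-physical progression the abstract describes can be illustrated with a toy logical record rendered into a physical XML representation. The field names below are invented; the actual JMS Common Data Model schema is not reproduced here.

```python
from dataclasses import dataclass, asdict
import xml.etree.ElementTree as ET

# Hypothetical logical model for one SSA observation; the real JMS
# model's fields are not public, so these names are illustrative only.
@dataclass
class Observation:
    object_id: str
    epoch: str
    sensor: str

def to_physical(obs: Observation) -> str:
    """Render the logical record as a physical representation (here, XML)."""
    root = ET.Element("Observation")
    for name, value in asdict(obs).items():
        ET.SubElement(root, name).text = value
    return ET.tostring(root, encoding="unicode")

xml_doc = to_physical(Observation("25544", "2012-01-01T00:00:00Z", "radar-1"))
```

    Because any producer and consumer share the one model, the record can be exchanged machine-to-machine without the per-system translation the abstract warns about.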

  7. Pedigree management and assessment in a net-centric environment (United States)

    Gioioso, Marisa M.; McCullough, S. Daryl; Cormier, Jennifer P.; Marceau, Carla; Joyce, Robert A.


    Modern Defense strategy and execution is increasingly net-centric, making more information available more quickly. In this environment, the intelligence agent or warfighter must distinguish decision-quality information from potentially inaccurate, or even conflicting, pieces of information from multiple sources - often in time-critical situations. The Pedigree Management and Assessment Framework (PMAF) enables the publisher of information to record standard provenance metadata about the source, manner of collection, and chain of modification of information as it passes through processing and/or assessment. In addition, the publisher can define and include other metadata relevant to quality assessment, such as domain-specific metadata about sensor accuracy or the organizational structure of agencies. PMAF stores this potentially enormous amount of metadata compactly and presents it to the user in an intuitive graphical format, together with PMAF-generated assessments that enable the user to quickly estimate information quality. PMAF has been created for a net-centric information management system; it can access pedigree information across communities of interest (COIs) and across network boundaries and will also be implemented in a Web Services environment.
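    The kind of provenance roll-up PMAF performs can be sketched with a minimal pedigree chain. The scoring rule below (quality bounded by the weakest upstream link) is a hypothetical stand-in for PMAF's actual assessment logic, and all field names are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pedigree:
    """One node of provenance metadata: source, manner of collection, inputs."""
    source: str
    collection: str        # e.g. "automated collection", "analyst assessment"
    reliability: float     # 0.0-1.0; assumed domain-specific quality metadata
    inputs: List["Pedigree"] = field(default_factory=list)

def assess(node: Pedigree) -> float:
    """Hypothetical roll-up: quality is capped by the weakest upstream link."""
    if not node.inputs:
        return node.reliability
    return node.reliability * min(assess(p) for p in node.inputs)

raw = Pedigree("sensor-12", "automated collection", 0.9)
fused = Pedigree("fusion-cell", "correlation", 1.0, inputs=[raw])
report = Pedigree("analyst-A", "assessment", 0.8, inputs=[fused])
score = assess(report)    # 0.8 * 1.0 * 0.9
```

    A chain like this stays compact (each node stores only its own metadata plus links), yet a single traversal yields the quick quality estimate the abstract describes.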

  8. Need-to-know vs. need-to-share: the net-centric dilemma (United States)

    Levy, Renato; Lyell, Margaret


    In net-centric operations, the timely flow of the correct information to mission partners is fundamental to the success of the endeavor. Yet, as we strive to work in multi-agency and multi-national coalitions, it is important to control the flow of information. This is the net-centric information assurance dilemma: how to speed the flow of information while keeping the necessary access boundaries? Current multi-level security and role-based access strategies and their derivatives control the flow of data, but fail to implement higher levels of information policy. We propose an architecture capable of supporting the solution of the net-centric dilemma. This architecture, distributed and scalable, is compatible with the Air Force's Metadata Environment (MDE) initiative. In the proposed architecture, metadata-tagged data items are used to construct a semantic map of how the information items are associated. Using this map, policy can be applied to information items. Provided the policy is logically based, reasoners can be used to identify not only whether the person soliciting a data item has the right to receive it, but also what kind of information can be derived from this data in combination with information retrieved previously. The full architecture includes the determination of which information can be relayed at any given time, as well as all the required enforcement mechanisms, including identification of potential intentional fraudulent actions. The proposed architecture is extensible and does not require any specific policy language or reasoner to be effective. Multiple approaches can be simultaneously present in the system.
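    The reasoning step the authors describe, checking what a requester could derive from previously released items, can be sketched as a closure computation over derivation rules. The policy, labels, and rule format here are illustrative only, not the paper's architecture.

```python
def closure(items, rules):
    """Everything derivable from `items` under inference rules.
    Each rule is (frozenset_of_premises, conclusion)."""
    known = set(items)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def may_release(item, released, rules, level, clearance):
    """Release only if nothing inferable from released + item exceeds clearance."""
    inferable = closure(released | {item}, rules)
    return all(level[x] <= clearance for x in inferable)

# Illustrative policy: knowing both the convoy route and its timing
# lets a reader derive the (more sensitive) rendezvous point.
level = {"route": 1, "timing": 1, "rendezvous": 3}
rules = [(frozenset({"route", "timing"}), "rendezvous")]

may_release("route", set(), rules, level, clearance=2)       # True
may_release("timing", {"route"}, rules, level, clearance=2)  # False
```

    The second request is refused even though "timing" alone is low-sensitivity: combined with the earlier release it would yield information above the requester's clearance, which is exactly the need-to-know vs. need-to-share tension.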

  9. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony


    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, a general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  10. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić


    Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance, and other vital infrastructure depend on computer systems that operate at local, national, or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  11. Resilient computer system design

    CERN Document Server

    Castano, Victor


    This book presents a paradigm for designing new-generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real-time, military, banking, and wearable health care systems. It describes design solutions for a new computer system, the evolving reconfigurable architecture (ERA), that is free from drawbacks inherent in current ICT and related engineering models, and pursues simplicity, reliability, and scalability principles of design implemented through redundancy and reconfigurability; targeted for energy-,...

  12. Wearable computer technology for dismounted applications (United States)

    Daniels, Reginald


    Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.

  13. Computer network defense system (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb


    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of virtual machines from an operating network in a deception network, forming a group of cloned virtual machines, when the group of virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of cloned virtual machines as if the cloned virtual machines were in the operating network. The computer system then moves the network connections used by the adversary from the group of virtual machines in the operating network to the group of cloned virtual machines, enabling protection of the virtual machines from actions performed by the adversary.
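    The patented mechanism can be caricatured in a few lines: clone the accessed VM group into the deception network and re-point the adversary's connections at the clones. This toy sketch ignores the real machinery (hypervisor snapshots, SDN rewiring), and every name in it is invented.

```python
class DeceptionController:
    """Toy sketch of the clone-and-redirect idea; a real system would do
    this at the hypervisor and network layers, not with dictionaries."""

    def __init__(self, operating_vms):
        self.operating = dict(operating_vms)  # name -> VM state
        self.deception = {}                   # cloned VMs
        self.connections = {}                 # adversary connection -> target

    def on_adversary_access(self, conn_id, vm_names):
        # 1. Copy the accessed VM group into the deception network.
        for name in vm_names:
            self.deception[name] = dict(self.operating[name])
        # 2. Re-point the adversary's connection at the cloned group,
        #    leaving the operating network untouched.
        self.connections[conn_id] = ("deception", vm_names[0])

ctl = DeceptionController({"web-1": {"ip": "10.0.0.5"}})
ctl.on_adversary_access("conn-77", ["web-1"])
```

    After the move, the adversary continues interacting with an environment that looks identical, while the production VMs are out of reach.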

  14. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)


    This report describes the operation and troubleshooting of the main computer and KAERINet. The results of the project are as follows: 1. Operation and troubleshooting of the main computer system (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. Operation and troubleshooting of KAERINet (PC-to-host connection, host-to-host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. Development of applications: an Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  15. Computational systems chemical biology. (United States)

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander


    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole-body physiologically based pharmacokinetics (PBPK) continue to evolve. We have called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering, and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  16. Computer security for the computer systems manager


    Helling, William D.


    Approved for public release; distribution is unlimited. This thesis is a primer on the subject of computer security. It is written for the use of computer systems managers and addresses basic concepts of computer security and risk analysis. An example of the techniques employed by a typical military data processing center is included in the form of the written results of an actual on-site survey. Computer security is defined in the context of its scope, and an analysis is made of those ...

  17. Computer memory management system (United States)

    Kirk, III, Whitson John


    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory-management behavior, by use of a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality: through a simple function call, it can undo all of the changes made to a data model since the previous "valid state" was noted.
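    The strong-link/weak-link relationship behavior described in this patent abstract resembles what weak references provide in garbage-collected languages. A minimal sketch, assuming CPython's reference counting rather than the patented pointer system:

```python
import weakref

class Node:
    """Strong links (children) keep objects alive; the weak parent link
    breaks automatically, letting the collector reclaim the parent."""

    def __init__(self, name):
        self.name = name
        self.children = []     # strong relationship: "maintain"
        self._parent = None    # weak relationship: "break when unreferenced"

    def add(self, child):
        child._parent = weakref.ref(self)
        self.children.append(child)

    @property
    def parent(self):
        return self._parent() if self._parent is not None else None

root = Node("model")
leaf = Node("item")
root.add(leaf)
assert leaf.parent is root
del root                     # no strong references to the parent remain...
assert leaf.parent is None   # ...so the weak link reports it as collected
```

    Encoding "maintain" vs. "break" into the link itself is what lets collection happen without an explicit free call, which is the essence of the abstract's coding-protocol idea.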

  18. The Computational Sensorimotor Systems Laboratory (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  19. Secure computing on reconfigurable systems

    NARCIS (Netherlands)

    Fernandes Chaves, R.J.


    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment, where data security and protection against malicious attacks on the system are assured. The SCM is strongly based on encryption algorithms and on the

  20. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E


    Computer Systems: A Programmer's Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer's perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field, from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute x86-64 machine code, and recommends th...

  1. Threats to Computer Systems (United States)


    subjects and objects of attacks contribute to the uniqueness of computer-related crime. For example, as the cashless, checkless society approaches...advancing computer technology and security methods, and proliferation of computers in bringing about the paperless society. The universal use of...organizations do to society. Jerry Schneider, one of the known perpetrators, said that he was motivated to perform his acts to make money, for the

  2. Resource Management in Computing Systems


    Amani, Payam


    Resource management is an essential building block of any modern computer and communication network. In this thesis, the results of our research in the following two tracks are summarized in four papers. The first track includes three papers and covers modeling, prediction and control for multi-tier computing systems. In the first paper, a NARX-based multi-step-ahead response time predictor for single server queuing systems is presented which can be applied to CPU-constrained computing system...
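    A multi-step-ahead response-time predictor of the kind the first paper describes can be sketched by feeding each prediction back as a lagged input. The linear form and the coefficients below are purely illustrative; the paper's NARX model is nonlinear and its parameters are estimated from data.

```python
def narx_multistep(y_hist, u_future, a, b):
    """Multi-step-ahead prediction with a linear ARX sketch:
    y[t+1] = a[0]*y[t] + a[1]*y[t-1] + ... + b*u[t],
    feeding each prediction back as input for the next step."""
    y = list(y_hist)
    predictions = []
    for u in u_future:
        y_next = sum(coef * y[-(i + 1)] for i, coef in enumerate(a)) + b * u
        y.append(y_next)        # recursion: prediction becomes history
        predictions.append(y_next)
    return predictions

# Illustrative coefficients and data (response times in ms, load as input);
# a real predictor would estimate these and include nonlinear terms.
resp = narx_multistep(y_hist=[100.0, 110.0], u_future=[1.0, 1.0],
                      a=[0.5, 0.3], b=2.0)
```

    The feedback loop is what makes the predictor "multi-step-ahead": errors compound with the horizon, which is why such predictors are usually validated over the horizon they will actually be used for.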

  3. Capability-based computer systems

    CERN Document Server

    Levy, Henry M


    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  4. Risks in Networked Computer Systems


    Klingsheim, André N.


    Networked computer systems yield great value to businesses and governments, but also create risks. The eight papers in this thesis highlight vulnerabilities in computer systems that lead to security and privacy risks. A broad range of systems is discussed in this thesis: Norwegian online banking systems, the Norwegian Automated Teller Machine (ATM) system during the 1990s, mobile phones, web applications, and wireless networks. One paper also comments on legal risks to bank cust...

  5. Computer Security Systems Enable Access. (United States)

    Riggen, Gary


    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  6. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon


    The energy consumption issue in distributed computing systems raises various monetary, environmental, and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005. From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems. These contradictory constraints create a suite of complex problems that need to be resolved in order to lead to "greener" distributed computing systems. This book brings together a group of outsta

  7. Enabling a GeoWeb with Net-Centric Fusion on a Discrete Global Grid (Invited) (United States)

    Peterson, P. R.


    There is a pressing expectation for general real-time access to multi-source geo-spatial content in support of evidence-based decision-making. Earth location promises to be a decentralized organizational structure for such net-centric decision support systems - the GeoWeb, Digital Earth, GEOINT2, and Planetary Skin are some terms in use. However, these platforms assume a critical provision for access and integration of multiple sources of geo-data on demand, unassisted by the unanticipated, unsophisticated end-use decision-maker. How can this occur when geo-data integration is a complex, time-consuming problem? We present a solution. A discrete global grid system (DGGS) incorporates an Earth partitioning that acts as a unifying structure for encoding and integrating/fusing multi-source location-based information necessary for this class of location-based platforms. As a global reference model, the DGGS is uniform over the entire planet at any resolution - from continents to bird baths. The DGGS provides fast, seamless assimilation of new, numerous, and disparate geo-data sources regardless of scale, origin, resolution, legacy format, datum, or projection - allowing any content to reside at its own level of granularity at any location on the globe. The DGGS renders data fused, ubiquitous, searchable, and ready for analysis. Testbed development of a DGGS using the optimized Icosahedral Snyder Equal Area aperture 3 Hexagonal grid (ISEA3H) demonstrated solutions to challenging aspects of multi-source data exploitation and decision support within military geospatial intelligence. The ISEA3H tessellation is optimized to use the fine increments and the close-packed, equal-area partitioning properties of a square-root-three (hexagonal) subdivision. The investigations advanced the ISEA3H grid development to include cell indexing, quantization strategy, and the numeric functions required for a formal digital Earth reference model (DERM). Notably, the global index that was selected
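    The scale behaviour of an aperture-3 grid can be illustrated numerically: each refinement level triples the cell count, and the equal-area property fixes every cell's share of the globe. The cell-count formula 10·3^r + 2 is the one commonly cited for ISEA3H (counting the 12 pentagonal cells); treat the figures as approximations, not project results.

```python
EARTH_SURFACE_KM2 = 510_072_000  # approximate total surface area of Earth

def isea3h_cell_count(resolution: int) -> int:
    """Cells in an aperture-3 icosahedral hexagonal grid at a given
    resolution, per the commonly cited formula 10 * 3**r + 2."""
    return 10 * 3 ** resolution + 2

def mean_cell_area_km2(resolution: int) -> float:
    """Equal-area property: every cell covers the same share of the globe."""
    return EARTH_SURFACE_KM2 / isea3h_cell_count(resolution)

# From continents to bird baths: each refinement shrinks cells threefold.
for r in (0, 5, 15):
    print(r, isea3h_cell_count(r), round(mean_cell_area_km2(r), 3))
```

    The threefold refinement is why aperture 3 gives such fine increments between levels, compared with the fourfold jumps of an aperture-4 grid.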

  8. Choosing the right computer system. (United States)

    Freydberg, B K; Seltzer, S M; Walker, B


    We are living in a world where virtually any information you desire can be acquired in a matter of moments with the click of a mouse. The computer is a ubiquitous fixture in elementary schools, universities, small companies, large companies, and homes. Many dental offices have incorporated computers as an integral part of their management systems. However, the role of the computer is expanding in the dental office as new hardware and software advancements emerge. The growing popularity of digital radiography and photography is making the possibility of a completely digital patient record more desirable. The trend for expanding the role of dental office computer systems is reflected in the increased number of companies that offer computer packages. The purchase of one of these new systems represents a significant commitment on the part of the dentist and staff. Not only do the systems have a substantial price tag, but they require a great deal of time and effort to become fully integrated into the daily office routine. To help the reader gain some clarity on the blur of new hardware and software available, I have enlisted the help of three recognized authorities on the subject of office organization and computer systems. This article is not intended to provide a ranking of features and shortcomings of specific products that are available, but rather to present a process by which the reader might be able to make better choices when selecting or upgrading a computer system.

  9. Students "Hacking" School Computer Systems (United States)

    Stover, Del


    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  10. User computer system pilot project

    Energy Technology Data Exchange (ETDEWEB)

    Eimutis, E.C.


    The User Computer System (UCS) is a general purpose unclassified, nonproduction system for Mound users. The UCS pilot project was successfully completed, and the system currently has more than 250 users. Over 100 tables were installed on the UCS for use by subscribers, including tables containing data on employees, budgets, and purchasing. In addition, a UCS training course was developed and implemented.

  11. Operating systems [of computers] (United States)

    Denning, P. J.; Brown, R. L.


    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism for controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
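    The semaphore mechanism mentioned for synchronizing primitive processes can be shown with a minimal producer/consumer pair; this is a generic threading sketch, not the hierarchy the paper formalizes.

```python
import threading

# A software semaphore synchronizing two "primitive processes":
# the consumer blocks until the producer signals that data is ready.
ready = threading.Semaphore(0)
shared = []

def producer():
    shared.append("block from secondary storage")
    ready.release()            # V operation: signal availability

def consumer(out):
    ready.acquire()            # P operation: wait for the signal
    out.append(shared.pop())

result = []
t_cons = threading.Thread(target=consumer, args=(result,))
t_prod = threading.Thread(target=producer)
t_cons.start()
t_prod.start()
t_prod.join()
t_cons.join()
# result now holds the item, regardless of thread scheduling order
```

    Starting the consumer first demonstrates the point: it simply blocks on the semaphore until the producer's release, so correctness does not depend on scheduling.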

  12. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J


    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  13. Survivable Avionics Computer System. (United States)


    T. Hall, AFWAL/AAA-1. Contract F33-615-80-C-1014, SRI Project 1314. Approved by: Charles J. Shoens, Director, Systems Techniques Laboratory; David A...

  14. Management Information System & Computer Applications


    Sreeramana Aithal


    The book contains the following chapters: Chapter 1: Introduction to Management Information Systems; Chapter 2: Structure of MIS; Chapter 3: Planning for MIS; Chapter 4: Introduction to Computers; Chapter 5: Decision Making Process in MIS; Chapter 6: Approaches for System Development; Chapter 7: Form Design; Chapter 8: Charting Techniques; Chapter 9: System Analysis & Design; Chapter 10: Applications of MIS in Functional Areas; Chapter 11: System Implement...

  15. The ALICE Magnetic System Computation.

    CERN Document Server

    Klempt, W; CERN. Geneva; Swoboda, Detlef


    In this note we present the first results from the ALICE magnetic system computation, performed in three dimensions with the Vector Fields TOSCA code (version 6.5) [1]. For the calculations we used the IBM RISC System/6000-370 and 6000-550 machines combined in the CERN PaRC UNIX cluster.

  16. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita


    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  17. Opportunity for Realizing Ideal Computing System using Cloud Computing Model


    Sreeramana Aithal; Vaikunth Pai T


    An ideal computing system is a computing system with ideal characteristics. The major components of such a hypothetical system, and their performance characteristics, can be studied as a model with predicted input, output, system, and environmental characteristics, using the identified objectives of computing, usable on any platform and any type of computing system, and for application automation, without modifications to the structure, hardware, or software coding by an exte...


  18. Computer assisted inventory control system

    African Journals Online (AJOL)

    COMPUTER ASSISTED INVENTORY CONTROL SYSTEM. Alebachew Dessalegn and R. N. Roy. Department of Mechanical Engineering, Addis Ababa University. ABSTRACT. The basic purpose of holding inventories is to provide an essential decoupling between demand and unequal flow rate of materials in a supply ...

  19. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid


    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  20. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L


    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  1. Automated validation of a computer operating system (United States)

    Dervage, M. M.; Milberg, B. A.


    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.
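The technique described — applying selected input/output loads and measuring system performance under each — can be sketched in miniature. The sleep-based `io_operation` below is only a stand-in for a real request, and all names are illustrative, not from the original programs:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_operation():
    """Stand-in for a single I/O request to the system under test."""
    time.sleep(0.001)

def measure(load, requests=200):
    """Apply a fixed concurrency level and report throughput in requests/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=load) as pool:
        for _ in range(requests):
            pool.submit(io_operation)
        # Leaving the with-block waits for every submitted request to finish.
    return requests / (time.perf_counter() - start)

for load in (1, 10, 50):
    print(f"load={load:3d}  throughput={measure(load):8.1f} req/s")
```

Sweeping the concurrency level, as the loop does, is what exposes how throughput saturates under increasing load.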

  2. Computer aided training system development

    Energy Technology Data Exchange (ETDEWEB)

    Midkiff, G.N. (Advanced Technology Engineering Systems, Inc., Savannah, GA (US))


    The first three phases of Training System Development (TSD) -- job and task analysis, curriculum design, and training material development -- are time consuming and labor intensive. The use of personal computers with a combination of commercial and custom-designed software resulted in a significant reduction in the man-hours required to complete these phases for a Health Physics Technician Training Program at a nuclear power station. This paper reports that each step in the training program project involved the use of personal computers: job survey data were compiled with a statistical package, task analysis was performed with custom software designed to interface with a commercial database management program. Job Performance Measures (tests) were generated by a custom program from data in the task analysis database, and training materials were drafted, edited, and produced using commercial word processing software.


    Wette, M. R.


    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools, including the desire to make complex system modeling, design, and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user-defined ones. Support for matrix calculations is provided in the same manner as in MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in the C language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  4. Semantic enrichment of multi-intelligence data within a net-centric environment (United States)

    Hull, Richard D.; Lashine, Larry; Jenkins, Don


    The challenges of predictive battlespace awareness and transformation of TCPED to TPPU processes in a netcentric environment are numerous and complex. One of these challenges is how to post the information with the right metadata so that it can be effectively discovered and used in an ad hoc manner. We have been working on the development of a semantic enrichment capability that provides concept and relationship extraction and automatic metadata tagging of multi-INT sensor data. Specifically, this process maps multi-source data to concepts and relationships specified within a semantic model (ontology). We are using semantic enrichment for development of data fusion services to support Army and Air Force programs. This paper presents an example of using the semantic enrichment architecture for concept and relationship extraction from USMTF data. The process of semantic enrichment adds semantic metadata tags to the original data enabling advanced correlation and fusion. A geospatial user interface leverages the semantically-enriched data to provide powerful search, correlation, and fusion capabilities.
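As a toy illustration of the concept-extraction step described above, the sketch below matches surface terms in a message against a small ontology and attaches the resulting concepts as metadata tags. The ontology and the message are invented for illustration; they are not real USMTF content or the paper's actual ontology:

```python
# Toy ontology: surface forms mapped to concepts (invented for illustration).
ONTOLOGY = {
    "convoy": "MilitaryUnit",
    "bridge": "Infrastructure",
    "crossing": "MovementEvent",
}

def enrich(text):
    """Attach a semantic metadata tag for every ontology concept found."""
    tags = sorted({concept for term, concept in ONTOLOGY.items()
                   if term in text.lower()})
    return {"text": text, "semantic_tags": tags}

record = enrich("Convoy observed crossing the bridge at 0400Z")
print(record["semantic_tags"])  # → ['Infrastructure', 'MilitaryUnit', 'MovementEvent']
```

Once every posted record carries such tags, downstream correlation and fusion services can discover related reports by shared concepts rather than by brittle keyword matches.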

  5. Automated Computer Access Request System (United States)

    Snook, Bryan E.


    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a workflow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where the user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  6. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn


    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  7. Research on computer systems benchmarking (United States)

    Smith, Alan Jay (Principal Investigator)


    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
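The merged machine/program characterization described above boils down to combining per-operation times (the machine characterizer's output) with dynamic operation counts (the program characterization). A toy sketch, with invented numbers rather than those of the actual reports:

```python
# Machine characterization: measured time per abstract-machine operation (µs).
machine = {"fadd": 0.02, "fmul": 0.03, "load": 0.01, "branch": 0.005}

# Program characterization: dynamic operation counts for one benchmark run.
program = {"fadd": 4_000_000, "fmul": 2_000_000, "load": 9_000_000, "branch": 1_000_000}

def predict_seconds(machine_us, op_counts):
    """Estimate run time by merging the machine and program characterizations."""
    return sum(machine_us[op] * n for op, n in op_counts.items()) / 1e6

print(f"{predict_seconds(machine, program):.3f} s")  # → 0.235 s
```

Because the two characterizations are independent, the same program profile can be priced against any characterized machine, which is what makes execution-time prediction possible for arbitrary machine/program combinations.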

  8. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi


    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  9. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis


    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. The book enables readers to address a variety of security threats to embedded hardware and software, and describes the design of secure wireless sensor networks, to address secure authen...

  10. Computer systems and software engineering (United States)

    Mckay, Charles W.


    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  11. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing (United States)

    Shi, X.


    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) Xeon Phi processors, or Hadoop and Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy-efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be the better solution when its performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
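The heterogeneity argument above implies a dispatch step that matches each algorithm's profile to the backend that suits it. A minimal sketch with a hypothetical capability table — the entries are illustrative placeholders, not measured properties of real hardware:

```python
# Hypothetical capability table: which backend suits which workload trait.
BACKENDS = {
    "gpu":  {"data_parallel": True,  "branch_heavy": False, "energy_frugal": False},
    "mic":  {"data_parallel": True,  "branch_heavy": True,  "energy_frugal": False},
    "fpga": {"data_parallel": True,  "branch_heavy": False, "energy_frugal": True},
    "cpu":  {"data_parallel": False, "branch_heavy": True,  "energy_frugal": False},
}

def dispatch(profile):
    """Pick the first backend whose capabilities cover every requested trait."""
    for name, caps in BACKENDS.items():
        if all(caps[k] for k, wanted in profile.items() if wanted):
            return name
    return "cpu"  # safe fallback when nothing matches

print(dispatch({"data_parallel": True, "energy_frugal": True}))  # → fpga
```

An elastic architecture of the kind proposed would make such a decision per algorithm and per dataset, so an FPGA is chosen when energy efficiency dominates and a GPU or MIC cluster when raw data parallelism does.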

  12. Computer system reliability safety and usability

    CERN Document Server

    Dhillon, BS


    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems.After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  13. Integrated Computer System of Management in Logistics (United States)

    Chwesiuk, Krzysztof


    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  14. On the Universal Computing Power of Amorphous Computing Systems

    Czech Academy of Sciences Publication Activity Database

    Wiedermann, Jiří; Petrů, L.


    Vol. 45, No. 4 (2009), pp. 995-1010. ISSN 1432-4350. R&D Projects: GA AV ČR 1ET100300517; GA ČR GD201/05/H014. Institutional research plan: CEZ:AV0Z10300504. Keywords: amorphous computing systems * universal computing * random access machine * simulation. Subject RIV: IN - Informatics, Computer Science. Impact factor: 0.726, year: 2009

  15. Conflict Resolution in Computer Systems

    Directory of Open Access Journals (Sweden)

    G. P. Mojarov


    Full Text Available A conflict situation in computer systems (CS) arises when processes have multi-access to shared resources and none of the involved processes can proceed, because each is waiting for resources locked by other processes which are, in turn, in a similar position. Such a conflict situation is also called a deadlock, and it has a quite clear impact on the CS state. Finding practical algorithms to resolve such impasses is of significant applied importance for ensuring the information security of the computing process, and the presented article is aimed at solving this relevant problem. The gravity of the situation depends on the types of processes in the deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the deadlock-prevention method used in many modern operating systems, based on preliminary planning of the resources required by a process, is obvious: the waiting time can be overlong. The prevention method that interrupts a process and deallocates its resources is very specific and not very effective when there is a set of polytypic resources requested dynamically. The drawback of another method, preventing deadlock by ordering resources, consists in the restriction of possible sequences of resource requests. A different way of "struggling" against deadlocks is avoidance of impasses, based on predicting impasses before they appear. There are known methods [1,4,5] to define and prevent conditions under which deadlocks may occur, using preliminary information on what resources a running process can request. Before allocating a free resource to a process, a test of a state "safety" condition is performed. The state is "safe" if no impasse can occur in the future as a result of allocating the resource to the process. Otherwise the state is considered "hazardous", and the resource allocation is postponed. The obvious...
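The state "safety" test described in this abstract is the core of the classical banker's algorithm. A minimal illustrative sketch — the matrices below are invented examples, not data from the article:

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check: True if some completion order lets
    every process finish using only the resources currently free."""
    work = available[:]                 # resources free right now
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Two processes, one resource type: 3 units free, needs of 2 and 1 remain.
print(is_safe([3], [[1], [2]], [[2], [1]]))  # → True
```

A "hazardous" state is simply one where this check fails, in which case the allocation is postponed rather than granted.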

  16. Know Your Personal Computer The Personal Computer System ...

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 4. Know Your Personal Computer The Personal Computer System Software. Siddhartha Kumar Ghoshal. Series Article Volume 1 Issue 4 April 1996 pp 31-36. Fulltext. Click here to view fulltext PDF. Permanent link:

  17. Performance tuning for high performance computing systems


    Pahuja, Himanshu


    A Distributed System is composed by integration between loosely coupled software components and the underlying hardware resources that can be distributed over the standard internet framework. High Performance Computing used to involve utilization of supercomputers which could churn a lot of computing power to process massively complex computational tasks, but is now evolving across distributed systems, thereby having the ability to utilize geographically distributed computing resources. We...

  18. Contributions of Cloud Computing in CRM Systems


    Bobek, Pavel


    This work deals with the contributions of cloud computing to CRM. The main objective of this work is the evaluation of cloud computing and its contributions to CRM systems, and determining the demands on a cloud solution of CRM for a trading company. The first chapter deals with the characteristics of CRM systems. The second chapter sums up the qualities and opportunities of utilizing cloud computing. The third chapter describes the demands on a CRM system with utilization of cloud computing for a trading company that deal...

  19. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu


    This book contains the extended version of the works that have been presented and discussed in the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014) held during April 18-20, 2014 in Kolkata, India. The symposium has been jointly organized by the AGH University of Science & Technology, Cracow, Poland and University of Calcutta, India. The Volume I of this double-volume book contains fourteen high quality book chapters in three different parts. Part 1 is on Pattern Recognition and it presents four chapters. Part 2 is on Imaging and Healthcare Applications contains four more book chapters. The Part 3 of this volume is on Wireless Sensor Networking and it includes as many as six chapters. Volume II of the book has three Parts presenting a total of eleven chapters in it. Part 4 consists of five excellent chapters on Software Engineering ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  20. Universal blind quantum computation for hybrid system (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang


    As progress on building quantum computers continues to advance, first-generation practical quantum computers will be available to ordinary users in the cloud, similar to IBM's Quantum Experience today. Clients can remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for individual quantum systems. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step toward constructing a framework of blind quantum computation for hybrid systems, which provides a more feasible way toward scalable blind quantum computation.

  1. Computational Models for Nonlinear Aeroelastic Systems Project (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  2. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M


    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  3. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi


    This book at hand explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  4. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S


    To date, the most common form of simulators of computer systems are software-based running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  5. Preventive maintenance for computer systems - concepts & issues ...

    African Journals Online (AJOL)

    Performing preventive maintenance activities for the computer is not optional. The computer is a sensitive and delicate device that needs adequate time and attention to make it work properly. In this paper, the concept and issues on how to prolong the life span of the system, that is, the way to make the system last long and ...

  6. Computer Literacy in a Distance Education System (United States)

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen


    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  7. Interactive graphical computer-aided design system (United States)

    Edge, T. M.


    System is used for design, layout, and modification of large-scale-integrated (LSI) metal-oxide semiconductor (MOS) arrays. System is structured around small computer which provides real-time support for graphics storage display unit with keyboard, slave display unit, hard copy unit, and graphics tablet for designer/computer interface.

  8. Systems Execution Modeling Technologies for Large-Scale Net-Centric Department of Defense Systems (United States)


    Figure 28 (Workload for the 1998 FIFA Soccer World Cup) depicts a real-world scenario: the workload of the 1998 FIFA soccer world cup website, measured as the number of incoming clients. Such a workload is very typical of commercial websites, and planning capacity for such a workload is not easy. Capacity could ...

  9. MTA Computer Based Evaluation System. (United States)

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  10. Assessing and Mitigating Risks in Computer Systems


    Netland, Lars-Helge


    When it comes to non-trivial networked computer systems, bulletproof security is very hard to achieve. Over a system's lifetime new security risks are likely to emerge from e.g. newly discovered classes of vulnerabilities or the arrival of new threat agents. Given the dynamic environment in which computer systems are deployed, continuous evaluations and adjustments are wiser than one-shot efforts for perfection. Security risk management focuses on assessing and treating security...
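Risk assessment of the kind discussed is often reduced to scoring each threat's likelihood against its impact and treating the highest-scoring risks first. A toy sketch with an invented risk register (the threats and scores are illustrative only):

```python
# Hypothetical risk register: (threat, likelihood 1-5, impact 1-5).
RISKS = [
    ("unpatched vulnerability", 4, 4),
    ("insider misuse", 2, 5),
    ("phishing", 5, 3),
]

def ranked(risks):
    """Score each risk as likelihood * impact and sort highest first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for threat, likelihood, impact in ranked(RISKS):
    print(f"{likelihood * impact:2d}  {threat}")
```

Re-running such a ranking as new vulnerabilities and threat agents appear is one concrete form of the continuous evaluation the abstract argues for.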

  11. A cost modelling system for cloud computing


    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh


    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...
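A pay-per-use model of the kind described reduces to metering usage against unit rates with no up-front charge. A minimal sketch — the rate names and prices are made up for illustration, not any provider's actual pricing:

```python
# Hypothetical unit rates; real providers publish their own price lists.
RATES = {"vm_hour": 0.08, "gb_storage_month": 0.02, "gb_egress": 0.09}

def monthly_cost(usage):
    """Sum metered usage times unit rate: pure pay-per-use, no fixed fee."""
    return sum(RATES[item] * qty for item, qty in usage.items())

# One VM running all month (720 h), 100 GB stored, 50 GB transferred out.
bill = monthly_cost({"vm_hour": 720, "gb_storage_month": 100, "gb_egress": 50})
print(f"${bill:.2f}")  # → $64.10
```

A cost-modelling system would evaluate such a function across providers and usage forecasts, which is what makes cloud deployment decisions comparable on cost.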

  12. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful


    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  13. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory......, localisation services and many others. These technologies can be classified under the name of ubiquitous systems. The term Ubiquitous System dates back to 1991 when Mark Weiser at Xerox PARC Lab first referred to it in writing. He envisioned a future where computing technologies would have been melted...

  14. Resilience assessment and evaluation of computing systems

    CERN Document Server

    Wolter, Katinka; Vieira, Marco


    The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples,

  15. Understanding the computing system domain of advanced computing with microcomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.


    Accepting the challenge by the Executive Office of the President, Office of Science and Technology Policy for research to keep pace with technology, the author surveys the knowledge domain of advanced microcomputers. The paper provides a general background for social scientists in technology traditionally relegated to computer science and engineering. The concept of systems integration serves as a framework of understanding for the various elements of the knowledge domain of advanced microcomputing. The systems integration framework is viewed as a series of interrelated building blocks composed of the domain elements. These elements are: the processor platform, operating system, display technology, mass storage, application software, and human-computer interface. References come from recent articles in popular magazines and journals to help emphasize the easy access of this information, its appropriate technical level for the social scientist, and its transient currency. 78 refs., 3 figs.

  16. System Upgrade of the KEK Central Computing System (United States)

    Murakami, Koichi; Iwai, Go; Sasaki, Takashi; Nakamura, Tomoaki; Takase, Wataru


The KEK central computer system (KEKCC) supports various activities at KEK, such as the Belle/Belle II and J-PARC experiments. The system was completely replaced and launched in September 2016. The computing resources in the new system are much enhanced to meet the recent increase in computing demand. We have 10,000 CPU cores, 13 PB of disk storage, and a maximum tape capacity of 70 PB. In this paper, we focus on the design and performance of the new storage system. Our knowledge, experience and challenges can be usefully shared among HEP data centers as a data-intensive computing facility for the next generation of HEP experiments.

  17. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.


    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  18. Project IDEALS. Educational Applications of Computer Systems. (United States)

    Hansen, Duncan; And Others

    This is a booklet in the Project IDEALS series which deals with the use of Educational Data Processing (EDP) systems. A section is devoted to the use of the computer in such varied school operations as the processing of student records, schedules, computer simulation, grade reports, business, student applications, cafeterias, and transportation.…

  19. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George


    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  20. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty


The model of a multiprocessor computing system based on transputers, which permits evaluation of its structural robustness (viability, survivability), is described.

  1. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D


    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  2. Sandia Laboratories technical capabilities: computation systems

    Energy Technology Data Exchange (ETDEWEB)


    This report characterizes the computation systems capabilities at Sandia Laboratories. Selected applications of these capabilities are presented to illustrate the extent to which they can be applied in research and development programs. 9 figures.

  3. Computational Models for Nonlinear Aeroelastic Systems Project (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  4. Satellite system considerations for computer data transfer (United States)

    Cook, W. L.; Kaul, A. K.


    Communications satellites will play a key role in the transmission of computer generated data through nationwide networks. This paper examines critical aspects of satellite system design as they relate to the computer data transfer task. In addition, it discusses the factors influencing the choice of error control technique, modulation scheme, multiple-access mode, and satellite beam configuration based on an evaluation of system requirements for a broad range of application areas including telemetry, terminal dialog, and bulk data transmission.

  5. Potential of Cognitive Computing and Cognitive Systems


    Noor Ahmed K.


    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, alo...

  6. Computer-Aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G. [Kaiser Engineers Hanford Co., Richland, WA (United States)


    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a Commercial-Off the-Shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting system within the Hanford Facility. This system also provided expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center and provides back-up capabilities for the Plutonium Processing Facility.

  7. Information systems and computing technology

    CERN Document Server

    Zhang, Lei


Invited papers: Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei); One shot learning human actions recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai); Band grouping pansharpening for WorldView-2 satellite images (X. Li); Research on GIS based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang). Regular papers: A warning model of systemic financial risks (W. Xu & Q. Wang); Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu); The software reliability analysis based on

  8. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying


This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, Support Vector Machines (SVM), Danger theory, ...

  9. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C


    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  10. Computation and design of autonomous intelligent systems (United States)

    Fry, Robert L.


    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.


    African Journals Online (AJOL)

    The development of a computer based maintenance management system is presented for industries using optimization models. The system which is capable of using optimization data and programs to schedule for maintenance or replacement of machines has been designed such that it enables the maintenance ...

  12. Terrace Layout Using a Computer Assisted System (United States)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  13. A Survey of Civilian Dental Computer Systems. (United States)


In the civilian marketplace, the orthodontic community continued to pioneer clinical automation through diagnosis and treat... (1) patient registration, identification... Compugnath Dental Diagnostic Systems, DDS, Articulate Publications, Dental Management Plus, Dentalis System VI, Dental Office Computer, Artificial... Kamp Mixed Dentition Analysis, Office Management Software, Key Management - Dental Office, Rocky Mountain Orthodontics Receivables, Insurance, CADIAS/RDE

  14. Infrastructure Support for Collaborative Pervasive Computing Systems

    DEFF Research Database (Denmark)

    Vestergaard Mogensen, Martin

    contribute by building real world Collaborative Pervasive Computing Systems, including the Activity-Based Collaboration system and the iHospital system, which has been deployed and evaluated.  Secondly, we contribute with novel hybrid and fusion Software Architectures. Moreover, we propose separating......Collaborative Pervasive Computing Systems (CPCS) are currently being deployed to support areas such as clinical work, emergency situations, education, ad-hoc meetings, and other areas involving information sharing and collaboration.These systems allow the users to work together synchronously......, but from different places, by sharing information and coordinating activities. Several researchers have shown the value of such distributed collaborative systems. However, building these systems is by no means a trivial task and introduces a lot of yet unanswered questions. The aforementioned areas...

  15. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John


    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  16. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L


    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  17. Annual Systems Engineering Conference (12th). Volume 1 (United States)



  18. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)


    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  19. Computer-Aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.


This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This document outlines the negotiated requirements as agreed to by GTE Northwest during technical contract discussions. The system is defined as a commercial off-the-shelf computer dispatching system providing both text and graphic display information while interfacing with the diverse alarm reporting systems within the Hanford Site. The system provided expansion capability to integrate Hanford Fire and the Occurrence Notification Center, and also provided back-up capability for the Plutonium Processing Facility (PFP).

  20. Reliable computer systems design and evaluation

    CERN Document Server

    Siewiorek, Daniel


Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  1. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing


    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  2. SMASHUP: secure mashup for defense transformation and net-centric systems (United States)

    Heileman, Mark D.; Heileman, Gregory L.; Shaver, Matthew P.; Gilger, Mike; Jamkhedkar, Pramod A.


The recent development of mashup technologies now enables users to easily collect, integrate, and display data from a vast array of different information sources available on the Internet. The ability to harness and leverage information in this manner provides a powerful means for discovering links between information, and greatly enhances decision-making capabilities. The availability of such services in DoD environments will provide tremendous advantages to the decision-makers engaged in analysis of critical situations, rapid-response, and long-term planning scenarios. However, in the absence of mechanisms for managing the usage of resources, any mashup service in a DoD environment also opens up significant security vulnerabilities to insider threat and accidental leakage of confidential information, not to mention other security threats. In this paper we describe the development of a framework that will allow integration via mashups of content from various data sources in a secure manner. The framework is based on mathematical logic, where addressable resources have formal usage terms applied to them, and these terms are used to specify and enforce usage policies over the resources. An advantage of this approach is that it provides a formal means for securely managing the usage of resources that might exist within multilevel security environments.
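The paper's logic-based usage terms are not reproduced in this abstract, so the following is only an illustrative sketch (the level names and function are my assumptions, not the authors' formalism) of the simplest multilevel-security check a mashup framework might enforce before merging a data source into a composite view:

```python
# Hypothetical clearance lattice; a real framework would derive this from
# formal usage terms attached to each addressable resource.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def may_access(user_clearance: str, resource_level: str) -> bool:
    """Bell-LaPadula-style 'no read up': clearance must dominate the resource level."""
    return LEVELS[user_clearance] >= LEVELS[resource_level]

# A mashup view would include only the sources the requesting user may read.
sources = {"weather_feed": "unclassified", "logistics_db": "secret"}
visible = [name for name, lvl in sources.items() if may_access("confidential", lvl)]
```

In this sketch, a user cleared to "confidential" sees only `weather_feed`; the real framework enforces far richer usage policies than a single dominance check.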

  3. Architecture, systems research and computational sciences

    CERN Document Server


The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics”, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  4. NIF Integrated Computer Controls System Description

    Energy Technology Data Exchange (ETDEWEB)

    VanArsdall, P.


    This System Description introduces the NIF Integrated Computer Control System (ICCS). The architecture is sufficiently abstract to allow the construction of many similar applications from a common framework. As discussed below, over twenty software applications derived from the framework comprise the NIF control system. This document lays the essential foundation for understanding the ICCS architecture. The NIF design effort is motivated by the magnitude of the task. Figure 1 shows a cut-away rendition of the coliseum-sized facility. The NIF requires integration of about 40,000 atypical control points, must be highly automated and robust, and will operate continuously around the clock. The control system coordinates several experimental cycles concurrently, each at different stages of completion. Furthermore, facilities such as the NIF represent major capital investments that will be operated, maintained, and upgraded for decades. The computers, control subsystems, and functionality must be relatively easy to extend or replace periodically with newer technology.

  5. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...

  6. Logical Access Control Mechanisms in Computer Systems. (United States)

    Hsiao, David K.

    The subject of access control mechanisms in computer systems is concerned with effective means to protect the anonymity of private information on the one hand, and to regulate the access to shareable information on the other hand. Effective means for access control may be considered on three levels: memory, process and logical. This report is a…

  7. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.


Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  8. Prestandardisation Activities for Computer Based Safety Systems

    DEFF Research Database (Denmark)

    Taylor, J. R.; Bologna, S.; Ehrenberger, W.


Questions of technical safety are becoming more and more important. Due to the higher complexity of their functions, computer-based safety systems pose special problems. Researchers, producers, licensing personnel and customers have met on a European basis to exchange knowledge and formulate positions...

  9. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine


    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  10. Computational system for activity calculation of radiopharmaceuticals

    African Journals Online (AJOL)

... this is especially practised in big countries like Brazil, where the distance from one state to another can be greater than the distance between entire countries in continents like Europe. The purpose of this paper is to describe a computational system developed to evaluate the dose of radiopharmaceuticals during the production until the ...

  11. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

of interfaces can be integrated on a single device. This thesis consists of five parts that address performance aspects of synthesizable computing systems on FPGAs. First, it is evaluated how synthesizable processor cores can exploit current state-of-the-art FPGA architectures. This evaluation results...

  12. Lumber Grading With A Computer Vision System (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman


    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  13. Personal healthcare system using cloud computing. (United States)

    Takeuchi, Hiroshi; Mayuzumi, Yuuki; Kodama, Naoki; Sato, Keiichi


    A personal healthcare system used with cloud computing has been developed. It enables a daily time-series of personal health and lifestyle data to be stored in the cloud through mobile devices. The cloud automatically extracts personally useful information, such as rules and patterns concerning lifestyle and health conditions embedded in the personal big data, by using a data mining technology. The system provides three editions (Diet, Lite, and Pro) corresponding to users' needs.

  14. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C


    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
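As a flavor of the simple performance models the book describes, here is a sketch of the classic M/M/1 queue (the arrival and service rates below are illustrative values of my choosing, not from the book), whose steady-state metrics follow directly from the utilization ρ = λ/μ:

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Steady-state metrics of an M/M/1 queue with arrival rate lam, service rate mu."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    n_mean = rho / (1 - rho)       # mean number of jobs in the system
    t_mean = 1 / (mu - lam)        # mean response time (consistent via Little's law)
    return {"utilization": rho, "mean_jobs": n_mean, "mean_response": t_mean}

# Example: 8 requests/s arriving at a server that completes 10 requests/s.
metrics = mm1_metrics(8.0, 10.0)
```

Even this one-line model captures the qualitative behavior the book emphasizes: as λ approaches μ, the mean response time 1/(μ − λ) grows without bound.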

  15. Space systems computer-aided design technology (United States)

    Garrett, L. B.


    The interactive Design and Evaluation of Advanced Spacecraft (IDEAS) system is described, together with planned capability increases in the IDEAS system. The system's disciplines consist of interactive graphics and interactive computing. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of earth-orbiting satellites, which represents a timely and cost-effective method during the conceptual design phase where various missions and spacecraft options require evaluation. Spacecraft concepts evaluated include microwave radiometer satellites, communication satellite systems, solar-powered lasers, power platforms, and orbiting space stations.

  16. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya


    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  17. Landauer Bound for Analog Computing Systems

    CERN Document Server

    Diamantini, M. Cristina; Trugenberger, Carlo A.


By establishing a relation between information erasure and continuous phase transitions, we generalise the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to a standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence, every computation has to be carried out with a finite number of bits, and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.
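The generalisation stated in the abstract reduces, for a single bit, to Landauer's classic result; a compact statement of both forms (the notation V, v₀ is mine, chosen to match the abstract's wording, not necessarily the authors'):

```latex
% Classic Landauer bound for erasing one bit at temperature T:
\Delta S \ge k_B \ln 2, \qquad \Delta E \ge k_B T \ln 2.
% Analog generalisation per degree of freedom: entropy production on reset
% scales as the log of the configurational volume V in units of its
% minimal quantum v_0:
\Delta S \ge k_B \ln\!\left(\frac{V}{v_0}\right).
```

Since infinite precision corresponds to v₀ → 0, the erasure cost diverges, which is the abstract's argument that infinite-precision analog computation is physically forbidden.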

  18. Embedded systems for supporting computer accessibility. (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio


Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
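The paper's own firmware is not shown here, so the following is only a minimal sketch of the report-generation step it describes: turning a character into the 8-byte boot-protocol keyboard report that a Linux USB gadget writes to an endpoint such as /dev/hidg0 (the device path and the letters-only key map are illustrative assumptions; key codes follow the USB HID usage tables):

```python
# Map 'a'..'z' to their USB HID keyboard usage codes (0x04..0x1d).
HID_KEYCODES = {chr(ord('a') + i): 0x04 + i for i in range(26)}

def keyboard_report(char: str, shift: bool = False) -> bytes:
    """Build an 8-byte report: modifier byte, reserved byte, up to 6 key codes."""
    modifier = 0x02 if shift else 0x00   # 0x02 = left shift
    keycode = HID_KEYCODES[char.lower()]
    return bytes([modifier, 0x00, keycode, 0, 0, 0, 0, 0])

# An embedded gadget would write this report (then an all-zero "key up"
# report) to /dev/hidg0, and the target host sees an ordinary keyboard.
report = keyboard_report('h', shift=True)   # types 'H' on the target
```

Because the host only ever sees standard HID reports, no driver or software is needed on the target machine, which is the key property the abstract emphasizes.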

  19. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure


    ...and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets, tasks that can be extremely laborious when performed by a classical literature search... However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishing hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally... be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied to designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method...

  20. Thermodynamics of Computational Copying in Biochemical Systems (United States)

    Ouldridge, Thomas E.; Govern, Christopher C.; ten Wolde, Pieter Rein


    Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that, as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics.

  1. Risk analysis of computer system designs (United States)

    Vallone, A.


    Adverse events during implementation can affect final capabilities, schedule and cost of a computer system even though the system was accurately designed and evaluated. Risk analysis enables the manager to forecast the impact of those events and to timely ask for design revisions or contingency plans before making any decision. This paper presents a structured procedure for an effective risk analysis. The procedure identifies the required activities, separates subjective assessments from objective evaluations, and defines a risk measure to determine the analysis results. The procedure is consistent with the system design evaluation and enables a meaningful comparison among alternative designs.
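The abstract's risk measure is not defined in detail; one minimal reading, sketched here with invented numbers, is the expected impact of the identified adverse events, which lets alternative designs be compared on a single number. The event lists and figures below are illustrative assumptions only.

```python
# Hypothetical sketch: a risk measure as the expected impact of adverse events.
# Event probabilities stand in for the subjective assessments; impacts stand in
# for the objective evaluations (here, weeks of schedule slip).

def risk_measure(events):
    """Expected impact: sum of probability * impact over all adverse events."""
    return sum(p * impact for p, impact in events)

design_a = [(0.2, 10.0), (0.05, 40.0)]   # (probability, impact in weeks)
design_b = [(0.4, 5.0), (0.10, 10.0)]

print(risk_measure(design_a))  # 4.0
print(risk_measure(design_b))  # 3.0
```

On this (invented) data, design B carries less expected schedule risk even though its events are more likely, which is the kind of comparison the procedure is meant to support.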

  2. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J


    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based.The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queuing networks that appear as a type of key model. These chapters also discuss congestion problems in

  3. Computer aided system engineering for space construction (United States)

    Racheli, Ugo


    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptibility of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  4. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)


    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  5. Checkpoint triggering in a computer system

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong


    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
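The threshold-crossing logic described above can be sketched in a few lines. The metric values, threshold, and task state below are invented for illustration; the actual patent-style method leaves these open.

```python
# Minimal sketch: read a monitor while a task runs, and create a checkpoint
# when the metric crosses a threshold. Metric values and state are invented.

def crossed(prev, curr, threshold):
    """True when the metric crossed the threshold between two monitor reads."""
    return (prev < threshold <= curr) or (curr <= threshold < prev)

def create_checkpoint(task_state):
    """Capture a copy of the task state so execution can restart from it."""
    return dict(task_state)

task_state = {"step": 0}
checkpoints = []
prev = 0.0
for reading in [10.0, 35.0, 80.0]:   # successive monitor values
    task_state["step"] += 1          # the task makes progress
    if crossed(prev, reading, threshold=50.0):
        checkpoints.append(create_checkpoint(task_state))
    prev = reading

print(len(checkpoints))  # 1
```

Only the 35.0 to 80.0 transition crosses the threshold, so exactly one checkpoint (with the state at that step) is captured.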

  6. Byzantine-resilient distributed computing systems


    Patnaik, L. M.; Balaji, S.


    This paper is aimed at reviewing the notion of Byzantine-resilient distributed computing systems, the relevant protocols and their possible applications as reported in the literature. The three agreement problems, namely, the consensus problem, the interactive consistency problem, and the generals problem have been discussed. Various agreement protocols for the Byzantine generals problem have been summarized in terms of their performance and level of fault-tolerance. The three classes of Byza...

  7. Music Genre Classification Systems - A Computational Approach


    Ahrendt, Peter; Hansen, Lars Kai


    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  8. Visual Turing test for computer vision systems. (United States)

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent


    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a "visual Turing test": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question ("just-in-time truthing"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers: the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.
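The "essentially unpredictable answers" criterion can be sketched as a selection rule: among candidate questions, pick the one whose conditional answer probability, estimated from training data, is closest to 1/2. The candidate questions and probabilities below are invented for illustration; the paper's actual query engine is far richer.

```python
# Hedged sketch of the unpredictability criterion for the query engine.
# p_yes_given_history maps each candidate question to an (invented) estimate
# of P(yes | history of previous questions and answers).

def pick_question(candidates, p_yes_given_history):
    """Choose the question whose P(yes | history) is nearest to 0.5."""
    return min(candidates, key=lambda q: abs(p_yes_given_history[q] - 0.5))

candidates = ["is there a person?", "is the person walking?", "is it daytime?"]
p = {"is there a person?": 0.95,
     "is the person walking?": 0.48,
     "is it daytime?": 0.70}

print(pick_question(candidates, p))  # is the person walking?
```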

  9. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)


    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by - minimum normalized signal-to-noise ratio (SNRN), and - maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of its wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)
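For context on the classification quantities: the CR standards normalize the measured SNR by the basic spatial resolution SRb against an 88.6 µm reference length, so that systems measured at different resolutions can be compared. Treat the constant and formula below as this sketch's assumption (drawn from the ASTM E2446 / ISO 16371 family), not a quotation from the paper.

```python
# Sketch of the normalized SNR used for CR system classification.
# Assumption: SNRn = measured SNR * (88.6 um / basic spatial resolution SRb),
# as in the ASTM E2446 / ISO 16371 family of CR standards.

def normalized_snr(snr_measured, srb_um):
    """Scale the measured SNR to the 88.6 um reference resolution."""
    return snr_measured * 88.6 / srb_um

print(round(normalized_snr(130.0, 100.0), 2))  # 115.18
```

A plate read at coarser SRb thus needs a proportionally higher measured SNR to reach the same class, which is why dose at the detector matters for classification.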

  10. A Heterogeneous High-Performance System for Computational and Computer Science (United States)


    This DoD HBCU/MI Equipment/Instrumentation grant (contract W911NF-15-1-0023) was awarded in October 2014 for the purchase of a heterogeneous high-performance system for computational and computer science. The system supports the High Performance Computing (HPC) course taught in the department of computer science and helps attract graduate students from the many disciplines where their research...

  11. System/360 Computer Assisted Network Scheduling (CANS) System (United States)

    Brewer, A. C.


    Computer assisted scheduling techniques that produce conflict-free and efficient schedules have been developed and implemented to meet the needs of the Manned Space Flight Network. The CANS system provides effective management of resources in a complex scheduling environment. The system is an automated resource scheduling, controlling, planning, and information storage and retrieval tool.

  12. Physical Optics Based Computational Imaging Systems (United States)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project, where the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then, form high-resolution wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The Extended Depth-of-Focus System goals were to characterize the angular and depth dependence of the PSF of a focal swept imager in order to increase the acceptably focused imaged scene depth. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. 
These computational...

  13. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W


    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  14. Radiation management computer system for Monju

    Energy Technology Data Exchange (ETDEWEB)

    Aoyama, Kei; Yasutomo, Katsumi [Fuji Electric Co. Ltd., Tokyo (Japan); Sudou, Takayuki [FFC, Ltd., Tokyo (Japan); Yamashita, Masahiro [Japan Nuclear Cycle Development Inst., Monju Construction Office, Tsuruga, Fukui (Japan); Hayata, Kenichi; Ueda, Hajime [Kosokuro Gijyutsu Service K.K., Tsuruga, Fukui (Japan); Hosokawa, Hideo [Nuclear Energy System Inc., Tsuruga, Fukui (Japan)


    Radiation management at nuclear power research institutes, nuclear power stations and other such facilities is strictly controlled under Japanese laws and management policies. Recently, the momentous issues of more accurate radiation dose management and increased work efficiency have been discussed. Up to now, Fuji Electric Company has supplied a large number of radiation management systems to nuclear power stations and related nuclear facilities. We introduce a new radiation management computer system adopting WWW techniques for the Japan Nuclear Cycle Development Institute's MONJU fast breeder reactor (MONJU). (author)

  15. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid


    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  16. Some queuing network models of computer systems (United States)

    Herndon, E. S.


    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
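The SR-52 G/H-matrix algorithm itself is not reproduced in the abstract; as a hedged illustration of what a closed, single-workload queueing network model computes, here is a standard exact Mean Value Analysis routine (a different but classical algorithm for the same class of models; the service demands are invented).

```python
# Exact Mean Value Analysis (MVA) for a closed, single-class queueing network
# of load-independent (fixed-rate) servers. Illustrative substitute for the
# paper's G/H-matrix method, not a reproduction of it.

def mva(service_demands, n_jobs, think_time=0.0):
    """Return (throughput, response time) with n_jobs circulating customers."""
    q = [0.0] * len(service_demands)              # mean queue length per device
    for n in range(1, n_jobs + 1):
        # residence time at each device grows with the queue already there
        r = [d * (1.0 + qk) for d, qk in zip(service_demands, q)]
        R = sum(r)                                # system response time
        X = n / (R + think_time)                  # throughput (Little's law)
        q = [X * rk for rk in r]
    return X, R

X, R = mva([1.0], 2)                              # one device, two jobs
print(X, R)  # 1.0 2.0
```

Six devices, as mentioned for the SR-52, just means a six-element `service_demands` list; the iteration count is jobs times devices, which explains why it fit on a programmable calculator.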

  17. Production Management System for AMS Computing Centres (United States)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.


    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production) as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model, and is implemented in scripting languages, Python and Perl, with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
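The production system's actual states are not documented in the abstract; as a hedged sketch of its Deterministic Finite Automaton model, a job life cycle might be encoded as a transition table. All state and event names below are invented for illustration.

```python
# Minimal DFA sketch of a production job life cycle, in the spirit of the
# system's DFA model. States/events are hypothetical, not the real system's.

TRANSITIONS = {
    ("acquired", "submit"):      "submitted",
    ("submitted", "start"):      "running",
    ("running", "finish"):       "transferring",
    ("transferring", "done"):    "accounted",
    ("running", "fail"):         "acquired",   # failed jobs are re-acquired
}

def advance(state, event):
    """Apply one event; undefined (state, event) pairs leave the state alone."""
    return TRANSITIONS.get((state, event), state)

state = "acquired"
for event in ["submit", "start", "finish", "done"]:
    state = advance(state, event)
print(state)  # accounted
```

Keeping the life cycle as an explicit table is what makes such a system easy to monitor and to persist in a small database like sqlite3: the current state is a single column per job.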

  18. Computation in Dynamically Bounded Asymmetric Systems (United States)

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney


    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645

  19. System administration of ATLAS TDAQ computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Adeel-Ur-Rehman, A [National Centre for Physics, Islamabad (Pakistan); Bujor, F; Dumitrescu, A; Dumitru, I; Leahu, M; Valsan, L [Politehnica University of Bucharest (Romania); Benes, J [Zapadoceska Univerzita v Plzni (Czech Republic); Caramarcu, C [National Institute of Physics and Nuclear Engineering (Romania); Dobson, M; Unel, G [University of California at Irvine (United States); Oreshkin, A [St. Petersburg Nuclear Physics Institute (Russian Federation); Popov, D [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany); Zaytsev, A, E-mail: Alexandr.Zaytsev@cern.c [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation)


    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS-based solution. Hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating 64-bit OSes, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  20. [Renewal of NIHS computer network system]. (United States)

    Segawa, Katsunori; Nakano, Tatsuya; Saito, Yoshiro


    An updated version of the National Institute of Health Sciences Computer Network System (NIHS-NET) is described. In order to reduce its electric power consumption, the main server system was newly built using virtual machine technology. The services that each machine provided in the previous network system were to be maintained as much as possible; thus an individual server was constructed for each service, because a virtual server often shows decreased performance compared with a physical server. As a result, although the number of virtual servers increased and network communication among the servers became more complicated, the conventional services were maintained and the security level was improved, along with saving electrical power. The updated NIHS-NET bears multiple security countermeasures. To make maximal use of these measures, awareness of network security by all users is expected.

  1. Visual computing scientific visualization and imaging systems

    CERN Document Server


    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  2. Epilepsy analytic system with cloud computing. (United States)

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei


    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing such big data to provide decision support for physicians is now an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. The system cascades several modern analytic functions: wavelet transform, a genetic algorithm (GA), and a support vector machine (SVM). To demonstrate its effectiveness, the system has been verified on two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training time is accelerated by about 4.66 times, and the prediction time also meets real-time requirements.

  3. Tutoring system for nondestructive testing using computer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Koo; Koh, Sung Nam [Joong Ang Inspection Co.,Ltd., Seoul (Korea, Republic of); Shim, Yun Ju; Kim, Min Koo [Dept. of Computer Engineering, Aju University, Suwon (Korea, Republic of)


    This paper introduces a multimedia tutoring system for nondestructive testing (NDT) using a personal computer. Nondestructive testing, one of the chief methods for inspecting welds and many other components, has a technical basis that is very difficult for NDT inspectors to understand without wide experience, and considerable repeated education and training are necessary for them to maintain their knowledge. A tutoring system that can simulate NDT work is suggested to solve this problem under reasonable conditions. The tutoring system presents the basic theory of nondestructive testing in a book style with video images and hyperlinks, and it offers practice sessions in which users can simulate the testing equipment. The book-style presentation and simulated practice provide an effective, individualized environment for learning nondestructive testing.

  4. RASCAL: A Rudimentary Adaptive System for Computer-Aided Learning. (United States)

    Stewart, John Christopher

    Both the background of computer-assisted instruction (CAI) systems in general and the requirements of a computer-aided learning system which would be a reasonable assistant to a teacher are discussed. RASCAL (Rudimentary Adaptive System for Computer-Aided Learning) is a first attempt at defining a CAI system which would individualize the learning…

  5. Interactive computer enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.A.; Tourtellott, J.A.


    The Interactive, Computer Enhanced, Remote Viewing System (ICERVS) is a volumetric data system designed to help the Department of Energy (DOE) improve remote operations in hazardous sites by providing reliable and accurate maps of task spaces where robots will clean up nuclear wastes. The ICERVS mission is to acquire, store, integrate and manage all the sensor data for a site and to provide the necessary tools to facilitate its visualization and interpretation. Empirical sensor data enters through the Common Interface for Sensors and, after initial processing, is stored in the Volumetric Database. The data can be analyzed and displayed via a Graphic User Interface with a variety of visualization tools. Other tools permit the construction of geometric objects, such as wire frame models, to represent objects which the operator may recognize in the live TV image. A computer image can be generated that matches the viewpoint of the live TV camera at the remote site, facilitating access to site data. Lastly, the data can be gathered, processed, and transmitted in acceptable form to a robotic controller. Descriptions are given of all these components. The final phase of the ICERVS project, which has just begun, will produce a full scale system and demonstrate it at a DOE site to be selected. A task added to this phase will adapt the ICERVS to meet the needs of the Dismantlement and Decommissioning (D and D) work at the Oak Ridge National Laboratory (ORNL).

  6. An Applet-based Anonymous Distributed Computing System. (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael


    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)


    Directory of Open Access Journals (Sweden)

    S. V. Yarmolik


    Full Text Available Various modified random testing approaches have been proposed for computer system testing in the black box environment. Their effectiveness has been evaluated on the typical failure patterns by employing three measures, namely, P-measure, E-measure and F-measure. Quasi-random testing, a modified version of random testing, has been proposed and analyzed. The quasi-random Sobol sequences and modified Sobol sequences are used as the test patterns. Some new methods for Sobol sequence generation have been proposed and analyzed.
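As a hedged illustration of quasi-random test patterns: the one-dimensional Sobol sequence coincides with the base-2 Van der Corput sequence (the bit-reversed radical inverse), implemented below. The paper's modified Sobol constructions and multi-dimensional generators are not reproduced here.

```python
# Base-b Van der Corput sequence: the radical inverse of n. For base 2 this
# is the classical 1-D Sobol/low-discrepancy point set used as test patterns.

def van_der_corput(n, base=2):
    """Return the n-th quasi-random point in [0, 1) (radical inverse of n)."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

points = [van_der_corput(i) for i in range(1, 5)]
print(points)  # [0.5, 0.25, 0.75, 0.125]
```

Unlike pseudo-random points, successive points fill the unit interval evenly, which is what improves the P-, E- and F-measures on clustered failure patterns.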

  8. Large-scale neuromorphic computing systems (United States)

    Furber, Steve


    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  9. Railroad Classification Yard Technology Manual: Volume II : Yard Computer Systems (United States)


    This volume (Volume II) of the Railroad Classification Yard Technology Manual documents the railroad classification yard computer systems methodology. The subjects covered are: functional description of process control and inventory computer systems,...

  10. Grid Computing BOINC Redesign Mindmap with incentive system (gamification)


    Kitchen, Kris


    Grid Computing BOINC Redesign Mindmap with incentive system (gamification). This is a PDF viewable of

  11. Potential of Cognitive Computing and Cognitive Systems (United States)

    Noor, Ahmed K.


    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments, incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces, and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized/collaborative learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity.


    Directory of Open Access Journals (Sweden)



    Full Text Available Argumentation is nowadays seen both as a skill that people use in various aspects of their lives and as an educational technique that can support the transfer or creation of knowledge, thus aiding in the development of other skills (e.g., communication or critical thinking) or attitudes. However, teaching argumentation and teaching with argumentation is still a rare practice, mostly due to the lack of available resources such as time or expert human tutors specialized in argumentation. Intelligent computer systems (i.e., systems that implement an inner representation of particular knowledge and try to emulate the behavior of humans) could allow more people to understand the purpose, techniques and benefits of argumentation. The proposed paper investigates state-of-the-art concepts of computer-based argumentation used in education and tries to develop a conceptual map showing benefits, limitations and relations between various concepts, focusing on the duality “learning to argue – arguing to learn”.

  13. 14 CFR 415.123 - Computing systems and software. (United States)


    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  14. Spectrum optimization for computed radiography mammography systems. (United States)

    Figl, Michael; Homolka, Peter; Semturs, Friedrich; Kaar, Marcus; Hummel, Johann


    Technical quality assurance is a key issue in breast screening protocols. While full-field digital mammography systems produce excellent image quality at low dose, it appears difficult with computed radiography (CR) systems to fulfill the requirements for image quality and to keep the dose below the limits. However, powder plate CR systems are still widely used, e.g., they represent ∼30% of the devices in the Austrian breast cancer screening program. For these systems the selection of an optimal spectrum is a key issue. We investigated different anode/filter (A/F) combinations over the clinical range of tube voltages. The figure-of-merit (FOM) to be optimized was the squared signal-difference-to-noise ratio divided by glandular dose. Measurements were performed on a Siemens Mammomat 3000 with a Fuji Profect reader (SiFu) and on a GE Senograph DMR with a Carestream reader (GECa). For 50 mm PMMA the maximum FOM was found with a Mo/Rh spectrum between 27 kVp and 29 kVp, while at 60 mm, Mo/Rh at 28 kVp (GECa) and W/Rh at 25 kVp (SiFu) were superior. For 70 mm PMMA the Rh/Rh spectrum had a peak at about 31 kVp (GECa). FOM increases of 10% to >100% are demonstrated. Optimization as proposed in this paper can lead either to dose reduction with comparable image quality or to image quality improvement if necessary. For systems with limited A/F combinations the choice of tube voltage is of considerable importance. In this work, optimization of AEC parameters such as anode/filter combination and tube potential was demonstrated for mammographic CR systems. Copyright © 2016. Published by Elsevier Ltd.
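    The optimization described above can be sketched as a simple search over measured spectra. The numbers below are invented for illustration only; they are not the paper's data.

    ```python
    # Illustrative-only measurements: (anode/filter, kVp) -> (SDNR, mean glandular dose in mGy).
    # These values are invented for demonstration, not taken from the paper.
    measurements = {
        ("Mo/Rh", 27): (7.1, 1.45),
        ("Mo/Rh", 29): (7.4, 1.52),
        ("W/Rh",  25): (6.8, 1.20),
        ("Rh/Rh", 31): (7.9, 1.60),
    }

    def figure_of_merit(sdnr: float, dose_mgy: float) -> float:
        """FOM = SDNR^2 / glandular dose, the quantity optimized in the abstract."""
        return sdnr ** 2 / dose_mgy

    # Pick the anode/filter and tube voltage with the highest FOM.
    best = max(measurements, key=lambda k: figure_of_merit(*measurements[k]))
    print("best spectrum:", best)
    ```

    Squaring the SDNR makes the FOM dose-independent for a quantum-noise-limited detector, so it isolates the genuine spectral quality of each anode/filter/kVp choice.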

  15. System Matrix Analysis for Computed Tomography Imaging.

    Directory of Open Access Journals (Sweden)

    Liubov Flores

    Full Text Available In practical applications of computed tomography imaging (CT, it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data.
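    Siddon's method computes each system-matrix row as the lengths of the ray's intersections with the pixels it crosses. A simplified two-dimensional sketch of the idea (a toy version, not the paper's implementation) is:

    ```python
    from math import hypot

    def siddon_row(p0, p1, nx, ny):
        """
        One row of the CT system matrix via a simplified 2-D form of Siddon's
        method: intersection lengths of the ray p0 -> p1 with an nx-by-ny grid
        of unit pixels whose lower-left corner is at the origin.
        Returns {(ix, iy): length}.
        """
        (x0, y0), (x1, y1) = p0, p1
        dx, dy = x1 - x0, y1 - y0
        ray_len = hypot(dx, dy)

        # Parametric positions along the ray where it crosses pixel-boundary lines.
        alphas = {0.0, 1.0}
        if dx:
            alphas.update((i - x0) / dx for i in range(nx + 1))
        if dy:
            alphas.update((j - y0) / dy for j in range(ny + 1))
        alphas = sorted(a for a in alphas if 0.0 <= a <= 1.0)

        row = {}
        for a, b in zip(alphas, alphas[1:]):
            if b - a < 1e-12:          # skip degenerate segments at corner crossings
                continue
            mid = (a + b) / 2.0        # segment midpoint identifies the pixel
            ix, iy = int(x0 + mid * dx), int(y0 + mid * dy)
            if 0 <= ix < nx and 0 <= iy < ny:
                row[(ix, iy)] = row.get((ix, iy), 0.0) + (b - a) * ray_len
        return row

    # A horizontal ray through the middle of a 4x4 grid crosses four pixels,
    # each with intersection length 1.
    print(siddon_row((0.0, 2.5), (4.0, 2.5), 4, 4))
    ```

    The full system matrix is then one such row per detector reading; for few-view CT most entries are zero, which is why sparse storage of these rows is standard.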

  16. The fundamentals of computational intelligence system approach

    CERN Document Server

    Zgurovsky, Mikhail Z


    This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to an important novel CI technology: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNN, including a new class of FNN, cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. The applications of FNN to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty, a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, as well as an application for portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of corporate bankruptcy risk forecasting under incomplete and fuzzy information, as well as new methods based on fuzzy set theory and fuzzy neural networks and results of their application for bankruptcy ris...

  17. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server


    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  18. Self-Configurable FPGA-Based Computer Systems

    Directory of Open Access Journals (Sweden)

    MELNYK, A.


    Full Text Available A method of information processing in reconfigurable computer systems is formulated, and improvements that increase information processing efficiency are proposed. A new type of high-performance computer system, named self-configurable FPGA-based computer systems, which performs information processing according to this improved method, is proposed. The structure of self-configurable FPGA-based computer systems and the rules for applying the software and hardware means necessary to implement these systems are described, and their execution-time characteristics are estimated. Directions for further work are discussed.

  19. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)


    FCCS2012 is an integrated conference focused on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers around the world.

  20. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)


    FCCS2012 is an integrated conference focused on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers around the world.

  1. Evolution: The Computer Systems Engineer Designing Minds

    Directory of Open Access Journals (Sweden)

    Aaron Sloman


    Full Text Available What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness, could be products of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery which we have only recently learnt how to design and build, and could not even have been thought about in Darwin’s time, can interact with the physical machinery in which they are implemented, without being identical with their physical implementation, nor mere aggregates of physical structures and processes. The existence of various kinds of virtual machinery (including both “platform” virtual machines that can host other virtual machines, e.g. operating systems, and “application” virtual machines, e.g. spelling checkers and computer games) depends on complex webs of causal connections involving hardware and software structures, events and processes, where the specification of such causal webs requires concepts that cannot be defined in terms of concepts of the physical sciences. That indefinability, plus the possibility of various kinds of self-monitoring within virtual machinery, seems to explain some of the allegedly mysterious and irreducible features of consciousness that motivated Darwin’s critics and also more recent philosophers criticising AI. There are consequences for philosophy, psychology, neuroscience and robotics.

  2. Reliable timing systems for computer controlled accelerators (United States)

    Knott, Jürgen; Nettleton, Robert


    Over the past decade the use of computers has set new standards for control systems of accelerators with ever increasing complexity coupled with stringent reliability criteria. In fact, with very slow cycling machines or storage rings any erratic operation or timing pulse will cause the loss of precious particles and waste hours of time and effort of preparation. Thus, for the CERN linac and LEAR (Low Energy Antiproton Ring) timing system reliability becomes a crucial factor in the sense that all components must operate practically without fault for very long periods compared to the effective machine cycle. This has been achieved by careful selection of components and design well below thermal and electrical limits, using error detection and correction where possible, as well as developing "safe" decoding techniques for serial data trains. Further, consistent structuring had to be applied in order to obtain simple and flexible modular configurations with very few components on critical paths and to minimize the exchange of information to synchronize accelerators. In addition, this structuring allows the development of efficient strategies for on-line and off-line fault diagnostics. As a result, the timing system for Linac 2 has, so far, been operating without fault for three years, the one for LEAR more than one year since its final debugging.
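    The abstract mentions error detection and "safe" decoding of serial data trains without specifying the scheme used at CERN. As a generic illustration only, a framed timing telegram protected by a CRC-8 could be checked like this (the event bytes and polynomial choice are assumptions, not the Linac/LEAR design):

    ```python
    def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
        """Bitwise CRC-8, MSB first, polynomial x^8 + x^2 + x + 1."""
        crc = init
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    def frame(payload: bytes) -> bytes:
        """Append the CRC so a receiver can validate the telegram."""
        return payload + bytes([crc8(payload)])

    def check(telegram: bytes) -> bool:
        """A frame followed by its own CRC re-checksums to zero."""
        return crc8(telegram) == 0

    good = frame(b"\x10\x42\x07")                 # hypothetical event code + payload
    corrupted = bytes([good[0] ^ 0x01]) + good[1:]  # single-bit transmission error
    print(check(good), check(corrupted))
    ```

    A CRC of this kind detects any single-bit error in the telegram, which matches the design goal of rejecting erratic timing pulses rather than acting on them.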

  3. A computing system for LBB considerations

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, K.; Miettinen, J.; Raiko, H.; Keskinen, R.


    A computing system has been developed at VTT Energy for making efficient leak-before-break (LBB) evaluations of piping components. The system consists of fracture mechanics and leak rate analysis modules which are linked via an interactive user interface LBBCAL. The system enables quick tentative analysis of standard geometric and loading situations by means of fracture mechanics estimation schemes such as the R6, FAD, EPRI J, Battelle, plastic limit load and moments methods. Complex situations are handled with a separate in-house made finite-element code EPFM3D which uses 20-noded isoparametric solid elements, automatic mesh generators and advanced color graphics. Analytical formulas and numerical procedures are available for leak area evaluation. A novel contribution for leak rate analysis is the CRAFLO code which is based on a nonequilibrium two-phase flow model with phase slip. Its predictions are essentially comparable with those of the well known SQUIRT2 code; additionally it provides outputs for temperature, pressure and velocity distributions in the crack depth direction. An illustrative application to a circumferentially cracked elbow indicates expectedly that a small margin relative to the saturation temperature of the coolant reduces the leak rate and is likely to influence the LBB implementation to intermediate diameter (300 mm) primary circuit piping of BWR plants.

  4. A computed tomographic imaging system for experimentation (United States)

    Lu, Yanping; Wang, Jue; Liu, Fenglin; Yu, Honglin


    Computed tomography (CT) is a non-invasive imaging technique, which is widely applied in medicine for diagnosis and surgical planning, and in industry for non-destructive testing (NDT) and non-destructive evaluation (NDE). It is therefore significant for college students to understand the fundamentals of CT. In this work, a CT imaging system named CD-50BG with a 50 mm field of view has been developed for experimental teaching at colleges. With the translate-rotate scanning mode, the system makes use of a 7.4×10⁸ Bq (20 mCi) 137Cs radioactive source, which is held in a tungsten alloy to shield the radiation and guarantee no harm to the human body, and a single plastic scintillator + photomultiplier detector, which is convenient for counting because of its short decay time and good single-pulse response. At the same time, image processing software with the functions of reconstruction, image processing and 3D visualization has also been developed to process the 16-bit acquired data. The reconstruction time for a 128×128 image is less than 0.1 second. High quality images with 0.8 mm spatial resolution and 2% contrast sensitivity can be obtained. So far in China, more than ten institutions of higher education, including Tsinghua University and Peking University, have already applied the system for elementary teaching.

  5. 07271 Summary -- Computational Social Systems and the Internet


    Cramton, Peter; Müller, Rudolf; Tardos, Eva; Tennenholtz, Moshe


    The seminar "Computational Social Systems and the Internet" facilitated a very fruitful interaction between economists and computer scientists, which intensified the understanding of the other disciplines' tool sets. The seminar helped to pave the way to a unified theory of social systems on the Internet that takes into account both the economic and the computational issues---and their deep interaction.

  6. A computer fault inquiry system of quick navigation (United States)

    Guo-cheng, Yin

    Computer maintenance depends on the experience and knowledge of experts. This paper proposes a computer fault inquiry system with quick navigation to achieve the reuse and sharing of computer maintenance knowledge. The paper presents the needs analysis of the computer fault inquiry system, gives the partition of system functions, and then designs the system, including the logical design of the database, the design of the main form menu, and the design of the directory query module. Finally, the code implementation of the query module is given, and the implementation of the keyword-based quick navigation method for computer faults is introduced with emphasis.
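    As an invented illustration of the keyword-based quick-navigation idea (the fault records and ranking rule below are examples, not the paper's design), a query module might rank candidate faults by keyword overlap:

    ```python
    # Fault records indexed by their associated keywords (invented examples).
    FAULTS = {
        "no display on boot": {"screen", "boot", "black", "display"},
        "system overheating": {"fan", "heat", "shutdown", "temperature"},
        "disk not detected": {"disk", "boot", "bios", "drive"},
    }

    def query(*keywords: str):
        """Return faults matching at least one keyword, best match first."""
        kw = set(keywords)
        ranked = sorted(
            ((len(kw & words), fault) for fault, words in FAULTS.items()),
            reverse=True,
        )
        return [fault for score, fault in ranked if score > 0]

    print(query("boot", "display"))
    ```

    In a real system the keyword index would live in the database designed in the paper, with the query module issuing the equivalent lookup in SQL.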

  7. Implementation of an integrated network of various ISR-systems (United States)

    Böker, D.


    Experiments were carried out at the naval base Eckernförde, Germany, bringing together several projects concerning Defense Against Terrorism (DAT) and the net-centric battlespace. An integrated network of various Intelligence, Surveillance and Reconnaissance (ISR) systems was realized to evaluate the benefit of net-centric operations, i.e., in DAT focusing on force protection. ISR systems of the German Army, Air Force, and Navy, as well as a number of not yet operational systems, were integrated in a joint network. Information relationships, data models, collaborative system functions and services were defined for as many as 27 systems from 20 companies. The NATO ISR Interoperability Architecture (NIIA), as the cornerstone of technical standards for interoperable ISR products such as images and motion imagery, was used to the extent possible. Services and systems adopted by the multinational project MAJIIC were the starting point to develop appropriate collaboration tools and mechanisms.

  8. New computing systems, future computing environment, and their implications on structural analysis and design (United States)

    Noor, Ahmed K.; Housner, Jerrold M.


    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  9. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario


    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, such as computer science, linguistics, biology, economics, computer graphics and robotics. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, being also an efficient means to solve and analyze problems in a novel way. Membrane Computing has been used to model biological systems, becoming with time a thorough modeling paradigm comparable, in its modeling and predicting capabilities, to more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...

  10. Optical Computing-Optical Components and Storage Systems

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 8, Issue 6. Optical Computing - Optical Components and Storage Systems ... Keywords: advanced materials; optical switching; pulse shaping; optical storage devices; high-performance computing; imaging; nanotechnology; photonics; telecommunications ...

  11. System for Computer Automated Typesetting ((SCAT) of Computer Authored Texts. (United States)


    line, the isolated word is referred to as an "orphan." Both widows and orphans are anathema to compositors and typographers. Approximately 25... possible that further material savings could accrue through the use of a more sophisticated compositor system. Such a system would make more effective use... on the last line of a paragraph. Orphans are anathema to compositors and typographers. photoprocessor: An electro-chemical-mechanical device for

  12. Dynamical Systems Theory for Transparent Symbolic Computation in Neuronal Networks


    Carmantini, Giovanni Sirio


    In this thesis, we explore the interface between symbolic and dynamical system computation, with particular regard to dynamical system models of neuronal networks. In doing so, we adhere to a definition of computation as the physical realization of a formal system, where we say that a dynamical system performs a computation if a correspondence can be found between its dynamics on a vectorial space and the formal system’s dynamics on a symbolic space. Guided by this definition, we characterize...

  13. An operating system for future aerospace vehicle computer systems (United States)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.


    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node unique objects with node common objects in order to implement both the autonomy and the cooperation between nodes is developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  14. Biocellion: accelerating computer simulation of multicellular biological system models. (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya


    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail:

  15. Biocellion: accelerating computer simulation of multicellular biological system models (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya


    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit for additional information. Contact: PMID:25064572
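    The "fill the body of pre-defined model routines" design described above is a classic framework callback pattern, sketched generically below. The class and method names are invented for illustration; they are not Biocellion's actual API.

    ```python
    from abc import ABC, abstractmethod

    class CellModel(ABC):
        """The framework's pre-defined routines; the modeler fills the bodies."""
        @abstractmethod
        def init_cells(self) -> list:
            ...
        @abstractmethod
        def update_cell(self, cell: dict, step: int) -> None:
            ...

    def run(model: CellModel, steps: int) -> list:
        """The framework owns the simulation loop; models only supply callbacks."""
        cells = model.init_cells()
        for step in range(steps):
            for cell in cells:          # a real framework would parallelize this loop
                model.update_cell(cell, step)
        return cells

    class GrowthModel(CellModel):
        def init_cells(self):
            return [{"volume": 1.0} for _ in range(3)]
        def update_cell(self, cell, step):
            cell["volume"] *= 1.1       # toy exponential growth rule

    final = run(GrowthModel(), steps=10)
    print(final[0]["volume"])
    ```

    The design choice is that the framework, not the model, controls iteration and (in Biocellion's case) parallel distribution, which is how modelers without parallel-computing expertise still get parallel execution.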

  16. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming (United States)

    Philip A. Araman


    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  17. Context-aware computing and self-managing systems

    CERN Document Server

    Dargie, Waltenegus


    Bringing together an extensively researched area with an emerging research issue, Context-Aware Computing and Self-Managing Systems presents the core contributions of context-aware computing in the development of self-managing systems, including devices, applications, middleware, and networks. The expert contributors reveal the usefulness of context-aware computing in developing autonomous systems that have practical application in the real world.The first chapter of the book identifies features that are common to both context-aware computing and autonomous computing. It offers a basic definit

  18. Sensitometer/densitometer system with computer communication

    Energy Technology Data Exchange (ETDEWEB)

    Elbern, Martin Kruel; Souto, Eduardo de Brito [Pro-Rad, Consultores em Radioprotecao S/S Ltda., Porto Alegre, RS (Brazil)]. E-mails:;; Van der Laan, Flavio Tadeu [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Dept. de Engenharia Nuclear]. E-mail:


    Health institutions that work with X-rays use a sensitometer/densitometer system for the routine verification of the image quality of the films used. These are very important tests. When done properly, they help to reduce chemical replenishment, the film rejection ratio and the patient dose. In this way, they also reduce the cost of every exam. In Brazil, the quality control tests should be daily for mammography and monthly for other radiographic equipment. They are not done that often because of the high cost of the equipment and the knowledge required to use it properly. A sensitometer and a densitometer were developed. The sensitometer is the equipment used to sensitize the films, and the densitometer is used to measure the optical density of the sensitizations produced by the sensitometer. The developed densitometer shows on a display not only the optical densities read but also the results of the important parameters for quality control of the development process of a radiographic film. This densitometer also has computer communication for more careful analysis and the preparation of reports through dedicated software. As this equipment is designed for the Brazilian market, it will help popularize the tests, having a low cost and calculating the parameters of interest. Not least, it will help to reduce the collective dose. (author)

  19. Lightness computation by the human visual system (United States)

    Rudd, Michael E.


    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial-integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
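    As a toy illustration (not Rudd's fitted model), the Land-and-McCann-style edge-integration idea, extended with separate gains for incremental and decremental luminance steps, can be sketched as a weighted sum of log luminance ratios along a path; the gain values and luminances below are invented.

    ```python
    from math import log10

    def lightness(luminances, w_inc=1.0, w_dec=1.3):
        """
        Simplified edge integration: the lightness of the last patch is the
        weighted sum of log luminance ratios across the edges on a path from
        the first patch. Increments and decrements get different gains,
        following the idea (not the fitted values) of the model above.
        """
        total = 0.0
        for a, b in zip(luminances, luminances[1:]):
            step = log10(b / a)
            total += (w_inc if step > 0 else w_dec) * step
        return total

    # Path: background (100 cd/m^2) -> dark surround (25) -> test patch (50).
    print(lightness([100.0, 25.0, 50.0]))
    ```

    With equal gains this reduces to the plain Retinex path sum, log10 of the end-to-end luminance ratio; the unequal gains are what let the model capture the asymmetry between increments and decrements.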

  20. Modelling, abstraction, and computation in systems biology: A view from computer science. (United States)

    Melham, Tom


    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Net-Centric Sustainment and Operational Reach on the Modern Battlefield (United States)


    significant distances.49 The Army furnished the required capability with a very small aperture terminal (VSAT) system that provided non-line of sight...solution prevented implementation of the ideal solution. Nonetheless, the main systems, the VSAT and Joint Node Network, which replaced the mobile...The Joint Node Network and VSAT enabled every battalion to transmit data to its supporting unit within three hours if on the move, and continuously

  2. Computer Graphics for System Effectiveness Analysis. (United States)


    using the round operation when computing the number of shots: a real number must be converted into an integer number [ Chapra and ... Canale, 1985]. Then...02139, August 1982. Chapra , Steven C., and Raymond P. Canale, (1985), Numerical Methods for Engineers with Personal Computer Applications New York
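The fragment above concerns converting a real-valued shot count into an integer via the round operation. A minimal sketch of the distinction it draws between rounding and truncation (the variable names are illustrative, not from the report):

```python
# A real-valued result from an effectiveness model must become an integer
# before it can be used as a shot count.
expected_shots = 4.6

# round() takes the nearest integer, as the Chapra & Canale reference describes.
shots = round(expected_shots)
assert shots == 5

# Plain truncation with int() would give 4 instead, biasing the count downward.
assert int(expected_shots) == 4
```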

  3. Computable Analysis with Applications to Dynamic Systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter)


    In this article we develop a theory of computation for continuous mathematics. The theory is based on earlier developments of computable analysis, especially that of the school of Weihrauch, and is presented as a model of intuitionistic type theory. Every effort has been made to keep the

  4. The hack attack - Increasing computer system awareness of vulnerability threats (United States)

    Quann, John; Belford, Peter


    The paper discusses the issue of electronic vulnerability of computer based systems supporting NASA Goddard Space Flight Center (GSFC) by unauthorized users. To test the security of the system and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The procedure increased the security consciousness of GSFC management to the electronic vulnerability of the system(s).

  5. Computer system in Prolog for legal consultation relating to radiations

    Energy Technology Data Exchange (ETDEWEB)

    Kaminishi, Tokishi; Matsuda, Hideharu; Koshino, Masao


    A computer consulting system for legal questions relating to radiation was developed; the system was written with Prolog in BASIC on a personal computer. A remarkable feature of this system is the ease and simplicity of its operation. Furthermore, the programming of answers is simple and flexible owing to Prolog. This consulting system is closer to CAI than to an expert system. An outline of the system is described and several examples are shown with their execution results in this report.

  6. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory


    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  7. Application of computational intelligence in emerging power systems

    African Journals Online (AJOL)

    ... in the electrical engineering applications. This paper highlights the application of computational intelligence methods in power system problems. Various types of CI methods, which are widely used in power systems, are also discussed in brief. Keywords: Power systems, computational intelligence, artificial intelligence.

  8. Configurable computing for high-security/high-performance ambient systems


    Gogniat, Guy; Bossuet, Lilian; Burleson, Wayne


    This paper stresses why configurable computing is a promising target to guarantee the hardware security of ambient systems. Many works have focused on configurable computing to demonstrate its efficiency but as far as we know none have addressed the security issue from system to circuit levels. This paper recalls main hardware attacks before focusing on issues to build secure systems on configurable computing. Two complementary views are presented to provide a guide for security and main issues ...

  9. Granular computing analysis and design of intelligent systems

    CERN Document Server

    Pedrycz, Witold


    Information granules, as encountered in natural language, are implicit in nature. To make them fully operational so they can be effectively used to analyze and design intelligent systems, information granules need to be made explicit. An emerging discipline, granular computing focuses on formalizing information granules and unifying them to create a coherent methodological and developmental environment for intelligent system design and analysis. Granular Computing: Analysis and Design of Intelligent Systems presents the unified principles of granular computing along with its comprehensive algo

  10. Computational Modeling of Flow Control Systems for Aerospace Vehicles Project (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop computational methods for designing active flow control systems on aerospace vehicles with the primary objective of...

  11. Simulation model of load balancing in distributed computing systems (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.


    The availability of high-performance computing, high-speed data transfer over networks and the widespread use of software for design and pre-production in mechanical engineering have led to the fact that, at present, both large industrial enterprises and small engineering companies implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key research models, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which a user's request is transferred in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
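The node-selection step described above (choosing a node for the user's request according to a predetermined algorithm) can be sketched minimally as a least-loaded policy. The node names and the scalar load metric below are illustrative assumptions, not the paper's implementation:

```python
import random

def select_node(loads: dict) -> str:
    """Return the name of the least-loaded node, breaking ties at random."""
    min_load = min(loads.values())
    candidates = [name for name, load in loads.items() if load == min_load]
    return random.choice(candidates)

# Hypothetical monitored loads (e.g. CPU utilization) reported by each node.
loads = {"node-a": 0.72, "node-b": 0.31, "node-c": 0.31}
chosen = select_node(loads)
assert chosen in ("node-b", "node-c")  # one of the two least-loaded nodes
```

A real balancer would combine several monitored quantities (CPU, memory, queue length) and re-evaluate as the infrastructure changes, but the selection step keeps this shape.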

  12. An Analysis of the Effects of Net-Centric Operations Using Multi-Agent Adaptive Behavior (United States)

    Calderon-Meza, Guillermo


    The National Airspace System (NAS) is a resource managed in the public good. Equity in NAS access and use for private, commercial and government purposes is coordinated by regulations and made possible by procedures and technology. Researchers have documented scenarios in which the introduction of new concepts-of-operations and technologies has…

  13. An Investigation of Network Enterprise Risk Management Techniques to Support Military Net-Centric Operations (United States)


    Risk Management IATF Information Assurance Technical Framework IDS Intrusion Detection System IEEE Institute of Electrical and Electronic Engineers...alternatives to measure parameters It is built within the DoD’s Information Assurance Technical Framework ( IATF ), which is what the DoD has developed as the

  14. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexity into simple formulations, thus largely reducing development effort. The book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.

  15. Top 10 Threats to Computer Systems Include Professors and Students (United States)

    Young, Jeffrey R.


    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  16. Computer-aided Instructional System for Transmission Line Simulation. (United States)

    Reinhard, Erwin A.; Roth, Charles H., Jr.

    A computer-aided instructional system has been developed which utilizes dynamic computer-controlled graphic displays and which requires student interaction with a computer simulation in an instructional mode. A numerical scheme has been developed for digital simulation of a uniform, distortionless transmission line with resistive terminations and…

  17. Cloud Computing and Online Operating System


    Mohit Jain; Mohd. Danish; Hemant Yadav


    How do you feel when you use software without installing it on your computer? Isn't it a miracle? Yes it is: cloud computing makes it possible in today's world. It saves both your primary and secondary memory, because your data resides in a centralized, highly secure data center located outside your house; since it is not in your computer's memory, it can be accessed by you from anywhere. It also saves money, as you don't need to buy any expensive hardware to access the particular softw...

  18. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer


    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  19. BRAHMS: Novel middleware for integrated systems computation


    Mitchinson, B.; Chan, T. S.; Chambers, J.; Pearson, M; Humphries, M; Fox, C; Gurney, K.; Prescott, T. J.


    Biological computational modellers are becoming increasingly interested in building large, eclectic models, including components on many different computational substrates, both biological and non-biological. At the same time, the rise of the philosophy of embodied modelling is generating a need to deploy biological models as controllers for robots in real-world environments. Finally, robotics engineers are beginning to find value in seconding biomimetic control strategies for use on practica...

  20. Actor Model of Computation for Scalable Robust Information Systems : One computer is no computer in IoT


    Hewitt, Carl


    The Actor Model is a mathematical theory that treats "Actors" as the universal conceptual primitives of digital computation. Hypothesis: All physically possible computation can be directly implemented using Actors. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. The advent of massive concurrency through client-cloud computing and many-cor...
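A minimal sketch of the Actor idea in the abstract above: an actor owns private state and processes messages one at a time from a mailbox. This toy (the class and message names are assumptions, and actor creation and address passing from the full model are omitted):

```python
import queue
import threading

class CounterActor:
    """A toy actor: private state, serial message processing via a mailbox."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0  # private state, touched only by the actor's own thread
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self.mailbox.get()  # messages handled one at a time
            if msg == "inc":
                self.count += 1
            elif msg == "get":
                reply.put(self.count)

    def send(self, msg):
        reply = queue.Queue()
        self.mailbox.put((msg, reply))
        return reply

actor = CounterActor()
for _ in range(3):
    actor.send("inc")
result = actor.send("get").get(timeout=1)  # FIFO mailbox guarantees 3
```

Because all three "inc" messages are queued before "get" in the single mailbox, the reply is deterministic even though the actor runs concurrently with the sender.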

  1. Safety Metrics for Human-Computer Controlled Systems (United States)

    Leveson, Nancy G; Hatanaka, Iwao


    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of the increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans control safety-critical functions. A new systems accident model is developed, based upon modern systems theory and human cognitive processes, to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  2. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation (United States)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan


    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
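The rejection form of the ABC algorithm mentioned above can be sketched on a toy model. The model, prior, summary statistic and tolerance below are illustrative assumptions, not the paper's continuous-time setup:

```python
import random

def simulate(theta, n=100):
    """Toy model: n noisy observations centered at the parameter theta."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(y):
    return sum(y) / len(y)  # sample mean as the summary statistic

def abc_rejection(y_obs, n_draws=5000, tol=0.1):
    """Rejection ABC: keep prior draws whose simulated summary is near the data's."""
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-5, 5)  # draw from the prior
        if abs(summary(simulate(theta)) - s_obs) < tol:
            accepted.append(theta)     # accepted draws form the posterior sample
    return accepted

random.seed(1)
y_obs = simulate(2.0)                  # "observed" data with true theta = 2.0
post = abc_rejection(y_obs)
est = sum(post) / len(post)            # posterior mean, close to 2.0
```

The accepted draws are an intrinsically generated parameter distribution, which is the property the abstract highlights: uncertainty comes out of the procedure directly, with no derivative estimation.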

  3. Computer Generated Hologram System for Wavefront Measurement System Calibration (United States)

    Olczak, Gene


    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  4. Design technologies for green and sustainable computing systems

    CERN Document Server

    Ganguly, Amlan; Chakrabarty, Krishnendu


    This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high-performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking. The book offers readers a single-source reference for addressing the challenges of power efficiency and sustainability in embedded computing systems; provides in-depth coverage of the key underlying design technologies for green and sustainable computing; and covers a wide range of topics, from chip-level design to architectures, computing systems, and networks.

  5. A comparison of queueing, cluster and distributed computing systems (United States)

    Kaplan, Joseph A.; Nelson, Michael L.


    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  6. National electronic medical records integration on cloud computing system. (United States)

    Mirza, Hebah; El-Masri, Samir


    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a new emerging technology that has been used in other industries with great success. Despite its great features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to the EHR system, to present a comprehensive EHR integrated environment.

  7. The Cc1 Project – System For Private Cloud Computing

    Directory of Open Access Journals (Sweden)

    J Chwastowski


    The main features of the Cloud Computing system developed at IFJ PAN are described. The project is financed from structural resources provided by the European Commission and the Polish Ministry of Science and Higher Education (Innovative Economy, National Cohesion Strategy). The system delivers a solution for carrying out computer calculations on a private cloud computing infrastructure. It consists of an intuitive web-based user interface, a module for user and resource administration, and an implementation of the standard EC2 interface. Thanks to the distributed character of the system, it allows the integration of a geographically distant federation of computer clusters within a uniform user environment.

  8. Research on computer virus database management system (United States)

    Qi, Guoquan


    The growing proliferation of computer viruses has become a lethal threat and a research focus of network information security. New viruses keep emerging, the number of viruses is growing, and virus classification is increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency has its own virus database, communication between them is lacking, virus information is incomplete, or only a small number of samples is held. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then gives a computer virus database design scheme providing information integrity, storage security and manageability.

  9. The emergent computational potential of evolving artificial living systems

    NARCIS (Netherlands)

    Wiedermann, J.; Leeuwen, J. van


    The computational potential of artificial living systems can be studied without knowing the algorithms that govern their behavior. Modeling single organisms by means of so-called cognitive transducers, we will estimate the computational power of AL systems by viewing them as conglomerates of such

  10. 3-D Signal Processing in a Computer Vision System (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman


    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3-dimension. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  11. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL


    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  12. On the Computation of Lyapunov Functions for Interconnected Systems

    DEFF Research Database (Denmark)

    Sloth, Christoffer


    This paper addresses the computation of additively separable Lyapunov functions for interconnected systems. The presented results can be applied to reduce the complexity of the computations associated with stability analysis of large scale systems. We provide a necessary and sufficient condition ...
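The additively separable Lyapunov functions discussed above take the following standard form (the notation here is ours, not the paper's): for an interconnection of $n$ subsystems with states $x_1,\dots,x_n$ and dynamics $\dot x_i = f_i(x)$,

```latex
V(x) \;=\; \sum_{i=1}^{n} V_i(x_i),
\qquad
\dot V(x) \;=\; \sum_{i=1}^{n} \nabla V_i(x_i)^{\top} f_i(x) \;<\; 0
\quad \text{for all } x \neq 0,
```

with each $V_i$ positive definite in its own state $x_i$. Separability is what reduces the computational burden: the candidate function can be searched for subsystem by subsystem rather than over the full large-scale state space.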

  13. 10 CFR 35.457 - Therapy-related computer systems. (United States)


    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.457 Section 35.457 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL Manual Brachytherapy § 35.457 Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning...

  14. Entrepreneurial Health Informatics for Computer Science and Information Systems Students (United States)

    Lawler, James; Joseph, Anthony; Narula, Stuti


    Corporate entrepreneurship is a critical area of curricula for computer science and information systems students. Few institutions of computer science and information systems have entrepreneurship in the curricula however. This paper presents entrepreneurial health informatics as a course in a concentration of Technology Entrepreneurship at a…

  15. A computer controlled pulse generator for an ST Radar System ...

    African Journals Online (AJOL)

    A computer controlled pulse generator for an ST radar system is described. It uses highly flexible software and hardware with a small IC count, making the system compact and highly programmable. The parameters of the signals of the pulse generator are initially entered from the keyboard. The computer then generates ...

  16. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter


    to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...

  17. Cloud Computing Based E-Learning System (United States)

    Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.


    Cloud computing technologies although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft office applications, such as word processing, excel spreadsheet, access database…

  18. Data systems and computer science programs: Overview (United States)

    Smith, Paul H.; Hunter, Paul


    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  19. Interactive Computer Graphics for System Analysis. (United States)


    confusing distortion to the picture. George Washington University has documented this problem [38:37] and provided a solution for FORTRAN users but makes...Analysis. New York: Holt, Rinehart, and Winston, 1974. 21. Moore M.V. and L.H. Nawrocki, The Educational Effectiveness of Graphic Displays for Computer

  20. Computers and Information Systems in Education. (United States)

    Goodlad, John I.; And Others

    In an effort to increase the awareness of educators about the potential of electronic data processing (EDP) in education and acquaint the EDP specialists with current educational problems, this book discusses the routine uses of EDP for business and student accounting, as well as its innovative uses in instruction. A description of computers and…

  1. Impact of new computing systems on computational mechanics and flight-vehicle structures technology (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.


    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  2. Evaluation of computer-based ultrasonic inservice inspection systems

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)


    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  3. Computer graphics application in the engineering design integration system (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.


    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 BAUD), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.

  4. Security for small computer systems a practical guide for users

    CERN Document Server

    Saddington, Tricia


    Security for Small Computer Systems: A Practical Guide for Users is a guidebook on security concerns for small computers. The book provides security advice for end-users of small computers across different aspects of computing security. Chapter 1 discusses security and threats, and Chapter 2 covers the physical aspect of computer security. The text also talks about the protection of data and then deals with defenses against fraud. Survival planning and risk assessment are also covered. The last chapter tackles security management from an organizational perspective.

  5. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system. (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi


    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4,096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
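
    The point-source formulation behind such CGH computation sums a spherical-wave contribution from every object point at every hologram pixel, which is why the cost scales with pixels × points and parallelizes so well across GPUs. Below is a serial NumPy sketch of that sum; the wavelength and pixel-pitch values are illustrative assumptions, not figures from the paper.

    ```python
    import numpy as np

    def cgh_from_points(points, width, height, wavelength=532e-9, pitch=8e-6):
        """Accumulate an interference pattern from 3D object points.

        points: iterable of (x, y, z) coordinates in metres.
        Serial sketch only; the paper's GPU-cluster version distributes
        this per-pixel sum across many devices.
        """
        k = 2 * np.pi / wavelength            # wavenumber
        ys, xs = np.mgrid[0:height, 0:width]
        xs = xs * pitch                        # pixel coordinates in metres
        ys = ys * pitch
        field = np.zeros((height, width))
        for px, py, pz in points:
            r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
            field += np.cos(k * r)             # real part of the spherical wave
        return field

    pattern = cgh_from_points([(0.0, 0.0, 0.1), (1e-3, 0.0, 0.1)], 64, 64)
    print(pattern.shape)  # → (64, 64)
    ```

    Thresholding such a pattern yields a binary hologram; each pixel is independent, so the outer loop over pixels (vectorized here) is what the cluster parallelizes.
    
    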

  6. Computer simulation of electrokinetics in colloidal systems (United States)

    Schmitz, R.; Starchenko, V.; Dünweg, B.


    The contribution gives a brief overview outlining how our theoretical understanding of the phenomenon of colloidal electrophoresis has improved over the decades. Particular emphasis is put on numerical calculations and computer simulation models, which have become more and more important as the level of description became more detailed and refined. Due to computational limitations, it has so far not been possible to study "perfect" models. Different complementary models have hence been developed, and their various strengths and deficiencies are briefly discussed. This is contrasted with the experimental situation, where there are still observations waiting for theoretical explanation. The contribution then outlines our recent development of a numerical method to solve the electrokinetic equations for a finite volume in three dimensions, and describes some new results that could be obtained by the approach.

  7. A Reliable Distributed Computing System Architecture for Planetary Rover (United States)

    Jingping, C.; Yunde, J.

    The computing system is one of the most important parts of a planetary rover: it is crucial to the rover's functional capability and survival probability. When the planetary rover executes tasks, it needs to react to events in time and to tolerate faults caused by the environment or by the rover itself. To meet these requirements, the planetary rover computing system architecture should be reactive, highly reliable, adaptable, consistent and extendible. This paper introduces a reliable distributed computing system architecture for a planetary rover. This architecture integrates new ideas and technologies of hardware architecture, software architecture, network architecture, fault-tolerance technology and intelligent control system architecture. The planetary computing system architecture defines three dimensions of fault containment regions: the channel dimension, the lane dimension and the integrity dimension. The whole computing system has three channels. The channels provide the main fault containment regions for system hardware and are the ultimate line of defense against a single physical fault. The lanes are the secondary fault containment regions for physical faults; they can be used to improve the capability for fault diagnosis within a channel and to improve coverage of design faults through hardware and software diversity. They can also serve as backups for each other to improve availability and computing capability. The integrity dimension provides the fault containment region for software design. Its purpose…
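
    A three-channel fault-containment scheme of this kind implies that channel outputs are cross-compared so a single faulty channel can be outvoted. A minimal, hypothetical majority voter illustrating the idea (not the rover's actual software):

    ```python
    from collections import Counter

    def vote(channel_outputs):
        """Majority-vote the outputs of redundant channels.

        Returns the value agreed by a strict majority, or raises if the
        channels disagree beyond recovery. Purely illustrative of the
        three-channel containment idea described in the abstract.
        """
        value, count = Counter(channel_outputs).most_common(1)[0]
        if count * 2 <= len(channel_outputs):
            raise RuntimeError("no majority: channels disagree")
        return value

    print(vote([42, 42, 7]))  # → 42; the single faulty channel is outvoted
    ```

    With three channels, any single physical fault is masked; a two-way disagreement across all three channels is detected rather than silently propagated.
    
    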

  8. An Annotated and Cross-Referenced Bibliography on Computer Security and Access Control in Computer Systems. (United States)

    Bergart, Jeffrey G.; And Others

    This paper represents a careful study of published works on computer security and access control in computer systems. The study includes a selective annotated bibliography of some eighty-five important published results in the field and, based on these papers, analyzes the state of the art. In annotating these works, the authors try to be…

  9. Computer Sciences and Data Systems, volume 1 (United States)


    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  10. Safe Neighborhood Computation for Hybrid System Verification

    Directory of Open Access Journals (Sweden)

    Yi Deng


    Full Text Available For the design and implementation of engineering systems, performing model-based analysis can disclose potential safety issues at an early stage. The analysis of hybrid system models is in general difficult due to the intrinsic complexity of hybrid dynamics. In this paper, a simulation-based approach to formal verification of hybrid systems is presented.

  11. Multiple-User, Multitasking, Virtual-Memory Computer System (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.


    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  12. DNA-Enabled Integrated Molecular Systems for Computation and Sensing (United States)


    a PE incapacitated, semifunctional, or even detrimental to other nodes. At the architectural level, internode connections may be omitted or broken. In... architectures and systems. Over a decade of work at the intersection of DNA nanotechnology and computer system design has shown several key elements and...introduces a framework for optical computing at the molecular level. This Account also highlights several architectural system studies that

  13. Intelligent decision support systems for sustainable computing paradigms and applications

    CERN Document Server

    Abraham, Ajith; Siarry, Patrick; Sheng, Michael


    This unique book discusses the latest research, innovative ideas, challenges and computational intelligence (CI) solutions in sustainable computing. It presents novel, in-depth fundamental research on achieving a sustainable lifestyle for society, either from a methodological or from an application perspective. Sustainable computing has expanded to become a significant research area covering the fields of computer science and engineering, electrical engineering and other engineering disciplines, and there has been an increase in the amount of literature on aspects of sustainable computing such as energy efficiency and natural resources conservation that emphasizes the role of ICT (information and communications technology) in achieving system design and operation objectives. The energy impact/design of more efficient IT infrastructures is a key challenge in realizing new computing paradigms. The book explores the uses of computational intelligence (CI) techniques for intelligent decision support that can be explo...

  14. 14 CFR 417.123 - Computing systems and software. (United States)


    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...


    Directory of Open Access Journals (Sweden)

    Mychaylo Paszeczko


    Full Text Available This work discusses the computational capabilities of programs belonging to the CAS (Computer Algebra Systems) family. A review of commercial and non-commercial software is given as well. In addition, one program belonging to this group (Mathcad) has been selected and its application to a chosen example presented. Computational capabilities and ease of handling were the decisive factors for the selection.
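
    What distinguishes a CAS such as Mathcad from purely numerical software is exact, symbolic manipulation. A minimal stand-in for the idea, using polynomials represented as coefficient lists and exact rational arithmetic (the reviewed programs, of course, handle general expressions):

    ```python
    from fractions import Fraction

    def d_poly(coeffs):
        """Exact derivative of a polynomial given as [a0, a1, a2, ...]."""
        return [i * c for i, c in enumerate(coeffs)][1:]

    def i_poly(coeffs):
        """Exact antiderivative with zero constant term."""
        return [Fraction(0)] + [Fraction(c, i + 1) for i, c in enumerate(coeffs)]

    print(d_poly([5, 3, 2]))           # d/dx (5 + 3x + 2x^2) → [3, 4], i.e. 3 + 4x
    print(i_poly(d_poly([5, 3, 2])))   # → [0, 3, 2]: the original up to a constant
    ```

    Because the arithmetic is exact rather than floating-point, integrating the derivative recovers the original coefficients with no rounding, which is precisely the behaviour a CAS guarantees.
    
    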


    Directory of Open Access Journals (Sweden)

    Jerzy Balicki


    Full Text Available The article discusses some paradigms of artificial intelligence in the context of their applications in computer financial systems. The proposed approach has a significant potential to increase the competitiveness of enterprises, including financial institutions. However, it requires the effective use of supercomputers, grids and cloud computing. A reference is made to the computing environment for Bitcoin. In addition, we characterize genetic programming and artificial neural networks used to prepare investment strategies on the stock exchange market.

  17. Artificial intelligence, expert systems, computer vision, and natural language processing (United States)

    Gevarter, W. B.


    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  18. Computer Aided Facial Prosthetics Manufacturing System

    Directory of Open Access Journals (Sweden)

    Peng H.K.


    Full Text Available Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics. However, the current fabrication method for facial prosthetics is costly and time-consuming. This study aimed to identify a new method to construct a customized facial prosthesis. A 3D scanner, computer software and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce a customized facial prosthesis. The advantages of the developed method over the conventional process are low cost and reduced material waste and pollution, in line with the green concept.

  19. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per


    -cost embedded computer with very limited computational resources as compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images. After the segmentation stage...... demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. Besides, the computation-time results show that the Raspberry Pi is a viable solution for such a real-time video processing system....
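
    The segmentation stage the excerpt describes is background subtraction: threshold the absolute difference between each frame and a reference background. A schematic NumPy sketch on a synthetic scene, where the count is estimated from foreground area; all parameter values and the per-object-area heuristic are illustrative, not the authors' (the published system tracks blobs to measure in-and-out activity):

    ```python
    import numpy as np

    def count_moving_objects(frame, background, thresh=30, object_area=100):
        """Estimate moving-object count by background subtraction.

        Pixels whose absolute difference from the background exceeds
        `thresh` are foreground; the count is foreground area divided
        by a nominal per-object area. Illustrative values only.
        """
        diff = np.abs(frame.astype(int) - background.astype(int))
        foreground = diff > thresh
        return int(foreground.sum() // object_area)

    # synthetic 100x100 scene: two 10x10 "bees" on a uniform background
    bg = np.zeros((100, 100), dtype=np.uint8)
    fr = bg.copy()
    fr[10:20, 10:20] = 255
    fr[60:70, 40:50] = 255
    print(count_moving_objects(fr, bg))  # → 2
    ```

    Per-frame work is a handful of vectorized array passes, which is why this stage fits on a Raspberry Pi-class computer.
    
    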

  20. Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering

    CERN Document Server

    Elleithy, Khaled


    Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning. This book includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2010). The proceedings are a set of rigorously reviewed world-class manuscripts presenting the state of international practice in Innovative Algorithms and Techniques in Automation, Industrial Electronics and Telecommunications.

  1. Computer system organization the B5700/B6700 series

    CERN Document Server

    Organick, Elliott I


    Computer System Organization: The B5700/B6700 Series focuses on the organization of the B5700/B6700 Series developed by Burroughs Corp. More specifically, it examines how computer systems can (or should) be organized to support, and hence make more efficient, the running of computer programs that evolve with characteristically similar information structures.Comprised of nine chapters, this book begins with a background on the development of the B5700/B6700 operating systems, paying particular attention to their hardware/software architecture. The discussion then turns to the block-structured p

  2. Computing Operating Characteristics Of Bearing/Shaft Systems (United States)

    Moore, James D.


    SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  3. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard


    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  4. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek


    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  5. Computational intelligence for decision support in cyber-physical systems

    CERN Document Server

    Ali, A; Riaz, Zahid


    This book is dedicated to applied computational intelligence and soft computing techniques with special reference to decision support in Cyber Physical Systems (CPS), where the physical as well as the communication segment of the networked entities interact with each other. The joint dynamics of such systems result in a complex combination of computers, software, networks and physical processes all combined to establish a process flow at system level. This volume provides the audience with an in-depth vision about how to ensure dependability, safety, security and efficiency in real time by making use of computational intelligence in various CPS applications ranging from the nano-world to large scale wide area systems of systems. Key application areas include healthcare, transportation, energy, process control and robotics where intelligent decision support has key significance in establishing dynamic, ever-changing and high confidence future technologies. A recommended text for graduate students and researche...


    Directory of Open Access Journals (Sweden)

    A. Kravchenko


    Full Text Available Domestic cars and foreign analogues are considered. Shortcomings related to the absence of an auxiliary electronic system that would increase the safety and comfort of vehicle operation are noted. An innovative, complex voice-control system that provides reliability, comfort and simplicity of driving is proposed.

  7. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Distributed computing is becoming increasingly important in our daily life. This is because it enables the people who use it to share information more rapidly and increases their productivity. A major characteristic feature of distributed computing is the explicit representation of process logic within a communication system, ...

  8. modeling workflow management in a distributed computing system ...

    African Journals Online (AJOL)

    Dr Obe

    It is a fact of life that various organisations and individuals are becoming increasingly dependent on distributed computing systems. According to V. Glushkov, a well-known Soviet scientist, "the development of computer networks and terminals results in a situation where the ever greater part of information, first and foremost ...

  9. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas


    Full Text Available In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
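
    In the spirit of the paper's learning objects, a single-phase M/M/1 queue can be simulated in a few lines of Python and checked against the analytic mean time in system, 1/(mu - lam); multiphase systems chain several such service stages. The parameter values below are illustrative:

    ```python
    import random

    def mm1_mean_time_in_system(lam, mu, n_customers=50_000, seed=1):
        """Simulate an M/M/1 queue and return the mean time in system.

        Exponential interarrival times (rate lam) and service times
        (rate mu), single server, FIFO. Single-phase illustration only.
        """
        rng = random.Random(seed)
        t_arrival = 0.0    # arrival time of the current customer
        server_free = 0.0  # time at which the server next becomes idle
        total = 0.0
        for _ in range(n_customers):
            t_arrival += rng.expovariate(lam)
            start = max(t_arrival, server_free)       # wait if server busy
            server_free = start + rng.expovariate(mu)  # departure time
            total += server_free - t_arrival
        return total / n_customers

    sim = mm1_mean_time_in_system(lam=0.5, mu=1.0)
    print(round(sim, 2), "theory:", 1 / (1.0 - 0.5))  # theory gives 2.0
    ```

    With utilization lam/mu = 0.5 the simulated mean converges to the analytic value 2.0, a useful sanity check before students move on to multiphase configurations.
    
    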

  10. On the programmability of heterogeneous massively-parallel computing systems


    Gelado Fernández, Isaac


    Heterogeneous parallel computing combines general purpose processors with accelerators to efficiently execute both sequential control-intensive and data-parallel phases of applications. Existing programming models for heterogeneous parallel computing impose added coding complexity when compared to traditional sequential shared-memory programming models for homogeneous systems. This extra code complexity is assumable in supercomputing environments, where programmability is sacrificed in pursui...

  11. The Handbook for the Computer Security Certification of Trusted Systems (United States)


    The Navy has designated the Naval Research Laboratory (NRL) as its Center for Computer Security Research and Evaluation. NRL is actively developing a...certification criteria through the production of the Handbook for the Computer Security Certification of Trusted Systems. Through this effort, NRL hopes to

  12. Software design for resilient computer systems

    CERN Document Server

    Schagaev, Igor


    This book addresses the question of how system software should be designed to account for faults, and which fault tolerance features it should provide for highest reliability. The authors first show how the system software interacts with the hardware to tolerate faults. They analyze and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system, with special attention on the role of system software in this process. They further develop the general algorithm of fault tolerance (GAFT) with its three main processes: hardware checking, preparation for recovery, and the recovery procedure. For each of the three processes, they analyze the requirements and properties theoretically and give possible implementation scenarios and system software support required. Based on the theoretical results, the authors derive an Oberon-based programming language with direct support of the three processes of GAFT. In the last part of this book, they introduce a simulator...
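
    The three GAFT processes can be pictured as a checking/checkpoint/recovery cycle. A schematic, software-only sketch of one such cycle (the book's algorithm is developed with hardware support in mind; the function names here are hypothetical):

    ```python
    import copy

    def gaft_step(state, compute, check, recover):
        """One cycle of a generic fault-tolerance loop in the spirit of GAFT:
        (1) preparation for recovery (checkpoint), (2) checking,
        (3) the recovery procedure on failure. Schematic only.
        """
        checkpoint = copy.deepcopy(state)  # preparation for recovery
        new_state = compute(state)
        if check(new_state):               # checking process
            return new_state
        return recover(checkpoint)         # recovery procedure

    state = {"x": 2}
    ok = gaft_step(state, lambda s: {"x": s["x"] * 2}, lambda s: s["x"] > 0, lambda c: c)
    bad = gaft_step(state, lambda s: {"x": -1}, lambda s: s["x"] > 0, lambda c: c)
    print(ok, bad)  # → {'x': 4} {'x': 2}: the failed step rolls back
    ```

    The point of the decomposition is that each of the three processes can be strengthened independently, e.g. hardware checking replacing the `check` predicate.
    
    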

  13. Computer systems for annotation of single molecule fragments (United States)

    Schwartz, David Charles; Severin, Jessica


    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites, thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments, and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such a diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.
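
    The image-caching idea mentioned in the abstract is commonly realized as a least-recently-used cache keyed by what was rendered, so repeated views skip the expensive processing. A generic sketch; the disclosed system's actual policy and keys are not described here:

    ```python
    from collections import OrderedDict

    class ImageCache:
        """Small least-recently-used cache for rendered images."""

        def __init__(self, capacity=3):
            self.capacity = capacity
            self._store = OrderedDict()

        def get(self, key, render):
            if key in self._store:
                self._store.move_to_end(key)      # mark as recently used
                return self._store[key]
            image = render(key)                   # expensive processing on miss
            self._store[key] = image
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)   # evict least recently used
            return image

    cache = ImageCache(capacity=2)
    calls = []
    render = lambda k: calls.append(k) or f"image-{k}"
    cache.get("m1", render); cache.get("m2", render); cache.get("m1", render)
    print(calls)  # → ['m1', 'm2']: the repeated request hit the cache
    ```

    Sizing the capacity to the user's working set (e.g. the molecules currently on screen) is what converts repeated pan/zoom operations into cache hits.
    
    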

  14. Computing handbook information systems and information technology

    CERN Document Server

    Topi, Heikki


    Disciplinary Foundations and Global Impact: Evolving Discipline of Information Systems (Heikki Topi); Discipline of Information Technology (Barry M. Lunt and Han Reichgelt); Information Systems as a Practical Discipline (Juhani Iivari); Information Technology (Han Reichgelt, Joseph J. Ekstrom, Art Gowan, and Barry M. Lunt); Sociotechnical Approaches to the Study of Information Systems (Steve Sawyer and Mohammad Hossein Jarrahi); IT and Global Development (Erkki Sutinen); Using ICT for Development, Societal Transformation, and Beyond (Sherif Kamel). Technical Foundations of Data and Database Management: Data Models (Avi Silber

  15. A computer-controlled adaptive antenna system (United States)

    Fetterolf, P. C.; Price, K. M.

    The problem of active pattern control in multibeam or phased array antenna systems is one that is well suited to technologies based upon microprocessor feedback control systems. Adaptive arrays can be realized by incorporating microprocessors as control elements in closed-loop feedback paths. As intelligent controllers, microprocessors can detect variations in arrays and implement suitable configuration changes. The subject of this paper is the application of the Howells-Applebaum power inversion algorithm in a C-band multibeam antenna system. A proof-of-concept, microprocessor-controlled, adaptive beamforming network (BFN) was designed and assembled, and subsequent tests demonstrated the algorithm's capacity for nulling narrowband jammers.
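
    In sampled form, power-inversion adaptive weights are w = R^-1 s, where R is the snapshot covariance estimated from array data and s is the quiescent steering vector; a strong interferer dominates R and is automatically nulled. A NumPy sketch on a simulated 8-element linear array with a single strong jammer; the geometry, powers and angles are illustrative assumptions, not the paper's hardware:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_el, n_snap = 8, 2000
    d = 0.5  # element spacing in wavelengths

    def steering(theta_deg):
        """Response of a uniform linear array toward angle theta_deg."""
        theta = np.deg2rad(theta_deg)
        return np.exp(2j * np.pi * d * np.arange(n_el) * np.sin(theta))

    # snapshots: strong narrowband jammer from 20 degrees plus unit-power noise
    jam_wave = 10 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
    x = steering(20.0)[:, None] * jam_wave[None, :]
    x += rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap))

    R = x @ x.conj().T / n_snap            # sample covariance matrix
    w = np.linalg.solve(R, steering(0.0))  # power-inversion weights w = R^-1 s

    def gain(w, theta_deg):
        return abs(np.vdot(w, steering(theta_deg)))

    print(gain(w, 20.0) / gain(w, 0.0))    # response toward the jammer is tiny
    ```

    The printed ratio shows a deep null toward the 20-degree jammer relative to the look direction, without the jammer angle ever being given to the algorithm; that is the closed-loop behaviour the microprocessor controller implements in hardware.
    
    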

  16. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B


    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by the current Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  17. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)


    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there is no physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  18. Research on the Teaching System of the University Computer Foundation

    Directory of Open Access Journals (Sweden)

    Ji Xiaoyun


    Full Text Available In view of the specific circumstances and needs of students in College computer basic course teaching, the teaching contents are classified and hierarchical teaching methods are combined with professional-level training, with comprehensive after-class training methods for top-notch students; an online Q & A and test platform is established to strengthen the integration of professional education and computer education. Through study and exploration of the training system of the college computer basic course, and popularization and application of the basic programming course, the cultivation of university students' computing foundation, thinking methods and innovative practice ability is promoted, achieving the goal of individualized education.

  19. Mechatronic sensory system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM


    Full Text Available The changing manufacturing environment is characterised by aggressive competition on a global scale, and rapid changes in process technology require the creation of manufacturing systems that are able to adapt quickly to new products and processes as well...

  20. Architecture of a Computer Based Instructional System

    Directory of Open Access Journals (Sweden)

    Emilia PECHEANU


    Full Text Available The paper describes the architecture of a tutorial system that can be used in various engineering graduate and postgraduate courses. The tutorial uses Internet-style WWW services to provide access to the teaching information and the evaluation exercises maintained with an RDBMS. The tutorial will consist of server-side applications that process and present teaching material and assessment exercises to the student using the well-known Web interface. All information in the system will be stored in a relational database. By closely sticking to the ANSI SQL specifications, the system can take advantage of a free database management system running on Linux, mini-SQL. The tutorial can be used to deliver any course online, creating new continuing-education opportunities. Taking advantage of modern deployment techniques, the instructional/assessment tutorial offers a high degree of accessibility.

  1. Distributed Computation in a Quadrupedal Robotic System

    Directory of Open Access Journals (Sweden)

    Daniel Kuehn


    Full Text Available Today's and future space missions will have to deal with increasing requirements regarding autonomy and flexibility in the locomotor system. To cope with these requirements, a higher bandwidth for sensor information is needed. In this paper, a robotic system is presented that is equipped with artificial feet and a spine incorporating increased sensing capabilities for walking robots. In the proposed quadrupedal robotic system, the front and rear parts are connected via an actuated spinal structure with six degrees of freedom. In order to increase the robustness of the system's locomotion in terms of traction and stability, a foot-like structure equipped with various sensors has been developed. In terms of distributed local control, both structures are as self-contained as possible with regard to sensing, sensor preprocessing, control and communication. This allows the robot to respond rapidly to occurring events with only minor latency.

  2. [Analog gamma camera digitalization computer system]. (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H


    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Integrated microcomputer systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP (Portable Image Processing) software was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software package has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam in a semiautomatic procedure and recording of the results on radiological film. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain nuclear medicine images of optimal clinical quality, to increase acquisition and processing efficiency, and to reduce the steps involved in each exam.

  3. Morphable Computer Architectures for Highly Energy Aware Systems

    National Research Council Canada - National Science Library

    Kogge, Peter


    To achieve a revolutionary reduction in overall power consumption, computing systems must be constructed out of both inherently low-power structures and power-aware or energy-aware hardware and software subsystems...

  4. Computing Differential Invariants of Hybrid Systems as Fixedpoints

    National Research Council Canada - National Science Library

    Platzer, Andre; Clarke, Edmund M


    .... In order to verify non-trivial systems without solving their differential equations and without numerical errors, we use a continuous generalization of induction, for which our algorithm computes...

  5. Securing Cloud Computing from Different Attacks Using Intrusion Detection Systems

    Directory of Open Access Journals (Sweden)

    Omar Achbarou


    Cloud computing is a new way of integrating a set of old technologies to implement a new paradigm that creates an avenue for users to access shared and configurable resources through the Internet on demand. This system has many characteristics in common with distributed systems and likewise builds on networking. Security is therefore the biggest issue for this system, because cloud computing services are based on sharing. Thus, a cloud computing environment requires intrusion detection systems (IDSs) to protect each machine against attacks. The aim of this work is to present a classification of attacks threatening the availability, confidentiality and integrity of cloud resources and services. Furthermore, we provide a literature review of attacks related to the identified categories. Additionally, this paper also introduces related intrusion detection models to identify and prevent these types of attacks.

  6. Proceedings: Computer Science and Data Systems Technical Symposium, volume 1 (United States)

    Larsen, Ronald L.; Wallgren, Kenneth


    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form are included for topics in three categories: computer science, data systems and space station applications.

  7. Proceedings: Computer Science and Data Systems Technical Symposium, volume 2 (United States)

    Larsen, Ronald L.; Wallgren, Kenneth


    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form, along with abstracts, are included for topics in three categories: computer science, data systems, and space station applications.

  8. Towards a global monitoring system for CMS computing operations

    CERN Multimedia

    CERN. Geneva; Bauerdick, Lothar A.T.


    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a coordinated effort was recently started in CMS, and is ongoing, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalising and streamlining existing components and ...

  9. Computational Fluid and Particle Dynamics in the Human Respiratory System

    CERN Document Server

    Tu, Jiyuan; Ahmadi, Goodarz


    Traditional research methodologies in the human respiratory system have always been challenging due to their invasive nature. Recent advances in medical imaging and computational fluid dynamics (CFD) have accelerated this research. This book compiles and details recent advances in the modelling of the respiratory system for researchers, engineers, scientists, and health practitioners. It breaks down the complexities of this field and provides both students and scientists with an introduction and starting point to the physiology of the respiratory system, fluid dynamics and advanced CFD modeling tools. In addition to a brief introduction to the physics of the respiratory system and an overview of computational methods, the book contains best-practice guidelines for establishing high-quality computational models and simulations. Inspiration for new simulations can be gained through innovative case studies as well as hands-on practice using pre-made computational code. Last but not least, students and researcher...

  10. User-Oriented Computer-Aided Hydraulic System Design. (United States)


    Report No. FPRC 83-A-Fl, User-Oriented Computer-Aided Hydraulic... Keywords: computer-aided design, user-oriented, system simulation, power flow modeling, problem-oriented language, transient state, steady state, valves, FORTRAN, PL/I, pumps... a problem-oriented language for use with the developed program, and the models of commonly used hydraulic valves, pumps, motors, and cylinders are

  11. Effective Methodology for Security Risk Assessment of Computer Systems


    Daniel F. García; Adrián Fernández


    Today, computer systems are more and more complex and face growing security risks. Security managers need effective security risk assessment methodologies that can model the increasing complexity of current computer systems while keeping the complexity of the assessment procedure low. This paper provides a brief analysis of common security risk assessment methodologies, leading to the selection of a proper methodology to fulfill these requirements. Then, a detai...

  12. Cluster-based localization and tracking in ubiquitous computing systems

    CERN Document Server

    Martínez-de Dios, José Ramiro; Torres-González, Arturo; Ollero, Anibal


    Localization and tracking are key functionalities in ubiquitous computing systems and techniques. In recent years a very high variety of approaches, sensors and techniques for indoor and GPS-denied environments have been developed. This book briefly summarizes the current state of the art in localization and tracking in ubiquitous computing systems focusing on cluster-based schemes. Additionally, existing techniques for measurement integration, node inclusion/exclusion and cluster head selection are also described in this book.



    Taha Chaabouni; Maher Khemakhem


    Cloud computing is a new emerging system which offers information technology services via the Internet. Clients use the services they need, when and where they need them, and pay only for what they have consumed. Cloud computing thus offers many advantages, especially for business. A deep study and understanding of this emerging system and its inherent components helps a lot in identifying what we should do in order to improve its performance. In this work, we present first cloud compu...

  14. Towards accurate quantum simulations of large systems with small computers. (United States)

    Yang, Yonggang


    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations to a level that would otherwise be prohibitive with conventional methods. The method is easily implementable and general for many systems.

  15. Redberry: a computer algebra system designed for tensor manipulation (United States)

    Poslavsky, Stanislav; Bolotin, Dmitry


    In this paper we focus on the main aspects of computer-aided calculations with tensors and present a new computer algebra system Redberry which was specifically designed for algebraic tensor manipulation. We touch upon distinctive features of tensor software in comparison with pure scalar systems, discuss the main approaches used to handle tensorial expressions and present the comparison of Redberry performance with other relevant tools.

  16. Exploring Computation-Communication Tradeoffs in Camera Systems


    Mazumdar, Amrita; Moreau, Thierry; Kim, Sung; Cowan, Meghan; Alaghi, Armin; Ceze, Luis; Oskin, Mark; Sathe, Visvesh


    Cameras are the de facto sensor. The growing demand for real-time and low-power computer vision, coupled with trends towards high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that use acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their design. The firs...

  17. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.


    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput

  18. Secure system design and trustable computing

    CERN Document Server

    Potkonjak, Miodrag


    This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade.  Coverage includes issues related to security and trust in a variety of electronic devices and systems related to the security of hardware, firmware and software, spanning system applications, online transactions, and networking services.  This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society’s microelectronic-supported infrastructures.

  19. Decentralized Resource Management in Distributed Computer Systems. (United States)


    MicroNet [47] was designed to support multiple...tolerate the loss of nodes, allow for a wide variety of interconnect topologies, and adapt to dynamic variations in loading. The designers of MicroNet

  20. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath


    The book is a collection of high-quality peer-reviewed research papers presented in the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems, detailing the practical challenges encountered and the solutions adopted.

  1. A neuromuscular monitoring system based on a personal computer. (United States)

    White, D A; Hull, M


    We have developed a computerized neuromuscular monitoring system (NMMS) using commercially available subsystems, i.e., computer equipment, clinical nerve stimulator, force transducer, and strip-chart recorder. This NMMS was developed for acquisition and analysis of data for research and teaching purposes. Computer analysis of the muscle response to stimulation allows graphic and numeric presentation of the twitch response and calculated ratios. Since the system can store and recall data, research data can be accessed for analysis and graphic presentation. An IBM PC/AT computer is used as the central controller and data processor. The computer controls timing of the nerve stimulator output, initiates data acquisition, and adjusts the paper speed of the strip chart recorder. The data processing functions include establishing control response values (when no neuromuscular blockade is present), displaying force versus time and calculated data graphically and numerically, and storing these data for further analysis. The general purpose nature of the computer and strip chart recording equipment allow modification of the system primarily by changes in software. For example, new patterns of nerve stimulation, such as the posttetanic count, can be programmed into the computer system along with appropriate data display and analysis routines. The NMMS has functioned well in the operating room environment. We have had no episodes of electrocautery interference with the computer functions. The automated features have enhanced the utility of the NMMS.(ABSTRACT TRUNCATED AT 250 WORDS)
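
    The abstract above mentions "calculated ratios" derived from the twitch response. As a minimal illustrative sketch (not code from the paper), one such standard quantity in neuromuscular monitoring is the train-of-four (TOF) ratio, the fourth twitch amplitude divided by the first:

    ```python
    def train_of_four_ratio(twitches):
        """TOF ratio: amplitude of the fourth twitch over the first.

        A common calculated ratio in neuromuscular monitoring; values
        well below 1.0 indicate residual neuromuscular blockade.
        """
        if len(twitches) != 4:
            raise ValueError("expected four twitch amplitudes")
        return twitches[3] / twitches[0]

    # Hypothetical force-transducer amplitudes for one TOF stimulation train
    print(train_of_four_ratio([10.0, 9.0, 8.0, 7.5]))  # prints 0.75
    ```

    The amplitudes here are invented for illustration; an actual NMMS would obtain them from the digitized force-transducer signal.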

  2. Data Acquisition, Control, Communication and Computation System ...

    Indian Academy of Sciences (India)

    SOXS aims to study solar flares, which are the most violent and energetic phenomena in the solar system, in the energy range of 4–56 keV with high spectral and temporal resolution. SOXS employs state-of-the-art semiconductor devices, viz., Si-Pin and CZT detectors to achieve sub-keV energy resolution requirements.

  3. STAR Network Distributed Computer Systems Evaluation Results. (United States)


    image processing systems. Further, because of the small data requirements, a segment of TOTT is a good candidate for VLSI. It can attain the...broadcast capabilities of the distributed architecture to isolate the overhead of accounting and enhancing of fault isolation (see Figure B-1).

  4. Cloud computing principles, systems and applications

    CERN Document Server

    Antonopoulos, Nick


    This essential reference is a thorough and timely examination of the services, interfaces and types of applications that can be executed on cloud-based systems. Among other things, it identifies and highlights state-of-the-art techniques and methodologies.

  5. Programming Languages for Distributed Computing Systems

    NARCIS (Netherlands)

    Bal, H.E.; Steiner, J.G.; Tanenbaum, A.S.


    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less

  6. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering


    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research is proceeding at jet speed on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally-intensive, non-linear and complex as well as involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal in handling the noise, imprecision, and uncertainty in the data, and yet achieve robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing applications in the study of renewable energy systems. Researchers, practitioners, undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  7. The Rabi Oscillation in Subdynamic System for Quantum Computing

    Directory of Open Access Journals (Sweden)

    Bi Qiao


    A quantum computation for the Rabi oscillation based on quantum dots in the subdynamic system is presented. The working states of the original Rabi oscillation are transformed into the eigenvectors of the subdynamic system. The dissipation and decoherence of the system then appear only as changes of the eigenvalues, i.e., as phase errors, since the eigenvectors are fixed. This makes both dissipation and decoherence easier to control, by correcting only the relevant phase errors. This method can be extended to general quantum computation systems.

  8. Human computer interaction issues in Clinical Trials Management Systems. (United States)

    Starren, Justin B; Payne, Philip R O; Kaufman, David R


    Clinical trials increasingly rely upon web-based Clinical Trials Management Systems (CTMS). As with clinical care systems, Human Computer Interaction (HCI) issues can greatly affect the usefulness of such systems. Evaluation of the user interface of one web-based CTMS revealed a number of potential human-computer interaction problems, in particular, increased workflow complexity associated with a web application delivery model and potential usability problems resulting from the use of ambiguous icons. Because these design features are shared by a large fraction of current CTMS, the implications extend beyond this individual system.

  9. High performance computing system for flight simulation at NASA Langley (United States)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.


    The computer architecture and components used in the NASA Langley Advanced Real-Time Simulation System (ARTSS) are briefly described and illustrated with diagrams and graphs. Particular attention is given to the advanced Convex C220 processing units, the UNIX-based operating system, the software interface to the fiber-optic-linked Computer Automated Measurement and Control system, configuration-management and real-time supervisor software, ARTSS hardware modifications, and the current implementation status. Simulation applications considered include the Transport Systems Research Vehicle, the Differential Maneuvering Simulator, the General Aviation Simulator, and the Visual Motion Simulator.

  10. Method to Compute CT System MTF

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)


    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
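
    The procedure described above, taking the numerical derivative of the system's edge response to obtain the PSF, then normalizing its spectrum, can be sketched in a 1-D form as follows. This is an illustrative reconstruction, not the report's code; the Gaussian-blurred synthetic edge is an assumption used only to exercise the pipeline:

    ```python
    import numpy as np

    def mtf_from_edge(esf):
        """Normalized MTF from a 1-D edge spread function (ESF):
        ESF -> numerical derivative (line spread function) -> |FFT| -> normalize."""
        lsf = np.gradient(esf)          # derivative of the edge response
        lsf = lsf / lsf.sum()           # unit-area line spread function
        mtf = np.abs(np.fft.rfft(lsf))  # magnitude spectrum
        return mtf / mtf[0]             # normalize so MTF(0) = 1

    # Synthetic edge: a step blurred by a Gaussian PSF (sigma in pixels)
    x = np.arange(-64, 64)
    sigma = 2.0
    psf = np.exp(-x**2 / (2 * sigma**2))
    esf = np.cumsum(psf)                # edge response = integral of the PSF
    mtf = mtf_from_edge(esf)            # mtf[0] is 1; values fall off with frequency
    ```

    A 2-D CT measurement on a cylinder reduces to this 1-D case after the radial edge profile is extracted about the center of mass described in the text.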

  11. Computing enclosures for uncertain biochemical systems. (United States)

    August, E; Koeppl, H


    In this study, the authors present a novel method that provides enclosures for state trajectories of a non-linear dynamical system with uncertainties in initial conditions and parameter values. It is based on solving positivity conditions by means of semi-definite programmes and sum of squares decompositions. The method accounts for the indeterminacy of kinetic parameters, measurement uncertainties and fluctuations in the reaction rates because of extrinsic noise. This is particularly useful in the field of systems biology when one seeks to determine model behaviour quantitatively or, if this is not possible, semiquantitatively. The authors also demonstrate the significance of the proposed method to model selection in biology. The authors illustrate the applicability of their method on the mitogen-activated protein kinase signalling pathway, which is an important and reoccurring network motif that apparently also plays a crucial role in the development of cancer.

  12. Computational Modeling, Formal Analysis, and Tools for Systems Biology.

    Directory of Open Access Journals (Sweden)

    Ezio Bartocci


    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest in executable models and their analysis within systems biology has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science.

  13. Development of a Computer Writing System Based on EOG. (United States)

    López, Alberto; Ferrero, Francisco; Yangüela, David; Álvarez, Constantina; Postolache, Octavian


    The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders.



    governing or regulatory boards. Steps will have to be taken to provide training and orientation for all levels of management in higher education, the training of novice administrators who will manage tomorrow's systems of higher education. Using the existing technology (not all of it as...instructional program of the institution. The main problem at the moment is not the technology, which has outpaced its users in higher education, but dissemination, development, and the training of appropriate personnel.

  16. A Management System for Computer Performance Evaluation. (United States)


    background nearly always exposes an individual to fundamentals of mathematics and statistics. These traits of systematic thinking and a knowledge of math and...It may be derived rigorously through the use of measurement, simulation, or mathematics, or it may be literally estimated based on observation and...systematic identification of a computer performance management system. 5. Administration of Group and Project Management. Depending on the size and

  17. Cluster computer based education delivery system

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, D.M.; Bitzer, D.L.; Rader, R.K.; Sherwood, B.A.; Tucker, P.T.


    This patent describes an interactive instructional multi-processor system for providing instructional programs for execution at one or more processor stations while relieving memory requirements at the processor stations without allowing a perceivable delay to users at the processor stations as a result of paging of instructional program segments. The system comprises: a cluster subsystem and a plurality of processor stations interconnected by a high speed multi-access communication subsystem, in which the cluster subsystem comprises: at least one mass storage device for storing a library of instructional programs averaging at least about 50 kilobytes in length, high speed buffer means coupled to the mass storage device for simultaneously storing a plurality of instructional programs, an interface for the speed communication sub-system, and processor means including a digital processor for managing the mass storage device, the high speed buffer means and the interface. The processor means further includes a bus interconnecting the mass storage device, the high speed buffer means, the interface and the digital processor. The digital processor includes controller means for transferring a requested instructional program from the mass storage device to the high speed buffer means and for retaining the instructional program in the high speed buffer means for at least a target time related to the processor stations coupled to the cluster subsystem.

  18. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems (United States)

    Bejczy, Antal K.


    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.


    CERN Multimedia

    I. Fisk


    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...


    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  1. Evolution and development of complex computational systems using the paradigm of metabolic computing in Epigenetic Tracking


    Alessandro Fontana; Borys Wróbel


    Epigenetic Tracking (ET) is an Artificial Embryology system which allows for the evolution and development of large complex structures built from artificial cells. In terms of the number of cells, the complexity of the bodies generated with ET is comparable with the complexity of biological organisms. We have previously used ET to simulate the growth of multicellular bodies with arbitrary 3-dimensional shapes which perform computation using the paradigm of "metabolic computing". In this pap...

  2. The Design Methodology of Distributed Computer Systems. (United States)


    TRANSMITTAL TO DDC: This technical report has been reviewed and is approved for public release IAW AR 190-12 (7b). Distribution is unlimited. A. D. BLOSE...Evaluation of Asynchronous Concurrent Systems. 3.1 Review of Petri Nets. 3.1.1 Basic Properties of Petri Nets. Petri nets (PET 77, AGE 75) are a formal...the 2nd International Conference on Software Engineering, October, 1976. ([IZ 72) Brinch Hansen, P., "Structured Multiprogramming," Comm. ACM, Vol. 15

  3. Computer-aided measurement system analysis

    Directory of Open Access Journals (Sweden)

    J. Feliks


    Product analysis with the alternative method is commonly used, especially where a direct or indirect measurement, taken as a numerical value of the interesting feature of the product, is infeasible, difficult or too expensive. Such an analysis results in deciding whether a given product meets the specified requirements or not. The product may also be analysed in several categories. Neither the measurement itself, nor its result, provides information on the extent to which the requirements are met with respect to the analysed feature. The measurement only supports the decision whether to accept the inspected part as ‘good’ or reject and deem it ‘bad’ (made improperly). Several analysis methods for systems of this type have been described in the literature: the Analytic Method, the Signal Detection Method, and Cohen’s Kappa (Cross Tab Method). The paper discusses selected methods of measurement system analysis for alternative parameters in the scope of requirements related to the application of statistical process control and quality control. The feasibility of using the MS Excel® package for procedure implementation and result analysis in measurement experiments is also presented.
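
    Of the methods named above, Cohen's Kappa is the easiest to illustrate compactly. As a minimal sketch (the appraisal data below are hypothetical, not from the paper), it measures how much an appraiser's accept/reject decisions agree with a reference beyond what chance alone would produce:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
        # expected chance agreement from each rater's marginal label frequencies
        ca, cb = Counter(rater_a), Counter(rater_b)
        p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical attribute-gauge data: 'G' = accept (good), 'B' = reject (bad)
    appraiser = ['G', 'G', 'B', 'G', 'B', 'G', 'G', 'B', 'G', 'G']
    reference = ['G', 'G', 'B', 'G', 'G', 'G', 'G', 'B', 'G', 'B']
    print(round(cohens_kappa(appraiser, reference), 3))  # prints 0.524
    ```

    Values near 1 indicate an adequate alternative (attribute) measurement system; values near 0 mean the appraiser's decisions are barely better than guessing.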

  4. Industrial Personal Computer based Display for Nuclear Safety System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Hyeon; Kim, Aram; Jo, Jung Hee; Kim, Ki Beom; Cheon, Sung Hyun; Cho, Joo Hyun; Sohn, Se Do; Baek, Seung Min [KEPCO, Youngin (Korea, Republic of)


    The safety display of a nuclear system has been classified as important to safety (SIL 3: Safety Integrity Level 3). These days the regulatory agencies are imposing stricter safety requirements on digital safety display systems. To satisfy these requirements, it is necessary to develop a safety-critical (SIL 4) grade safety display system. This paper proposes an industrial personal computer based safety display system with a safety grade operating system and safety grade display methods. The description consists of three parts: the background, the safety requirements and the proposed safety display system design. The hardware platform is designed using a commercially available off-the-shelf processor board with a back plane bus. The operating system is customized for the nuclear safety display application. The display unit is designed with two improvement features: one is to provide two separate processors for the main computer and the display device using serial communication, and the other is to use a Digital Visual Interface between the main computer and the display device. In this case the main computer uses minimized graphic functions for the safety display. The display design is at the conceptual phase, and several open areas remain to be made concrete for a solid system. The main purpose of this paper is to describe and suggest a methodology for developing a safety-critical display system, and the descriptions are focused on the safety requirement point of view.

  5. Computer model of cardiovascular control system responses to exercise (United States)

    Croston, R. C.; Rummel, J. A.; Kay, F. J.


    Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.

  6. Towards a Global Monitoring System for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. A.T. [Fermilab; Sciaba, Andrea [CERN


    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring has allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalising and streamlining existing components, and driving future software development. This contribution gives a complete overview of the CMS monitoring system and a description of all the recent activities that have been started with the goal of providing a more integrated, modern and functional global monitoring system for computing operations.

  7. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad


    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  8. Computer aided system for parametric design of combination die (United States)

    Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.


    In this paper, a computer aided system for the parametric design of combination dies is presented. The system is developed using the knowledge based system technique of artificial intelligence. The system is capable of designing combination dies for the production of sheet metal parts involving punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD software. The low cost of the proposed system will help die designers in small and medium scale sheet metal industries with the design of combination dies for similar types of products. The proposed system is capable of reducing the design time and effort of die designers for the design of combination dies.

  9. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)


    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: The development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; the new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; the device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  10. Advanced computer architecture specification for automated weld systems (United States)

    Katsinis, Constantine


    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.

  11. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T


    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un

  12. Modern Embedded Computing Designing Connected, Pervasive, Media-Rich Systems

    CERN Document Server

    Barry, Peter


    Modern embedded systems are used for connected, media-rich, and highly integrated handheld devices such as mobile phones, digital cameras, and MP3 players. All of these embedded systems require networking, graphic user interfaces, and integration with PCs, as opposed to traditional embedded processors that can perform only limited functions for industrial applications. While most books focus on these controllers, Modern Embedded Computing provides a thorough understanding of the platform architecture of modern embedded computing systems that drive mobile devices. The book offers a comprehen

  13. Template based parallel checkpointing in a massively parallel computer system (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN


    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
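
    The block-comparison idea in this abstract can be illustrated with a simplified, single-node sketch (not the patented implementation): split the checkpoint into fixed-size blocks, compare per-block checksums against the stored template, and keep only the changed blocks, compressed.

```python
# Sketch of template-based checkpointing: save only the blocks whose
# checksums differ from a previously stored template checkpoint.
import hashlib
import zlib

BLOCK = 4096  # block size is an illustrative choice, not from the patent

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def checksums(data):
    return [hashlib.sha1(b).hexdigest() for b in blocks(data)]

def delta_checkpoint(node_state, template_state):
    """Return {block_index: compressed_block} for blocks that changed."""
    template = checksums(template_state)
    delta = {}
    for i, b in enumerate(blocks(node_state)):
        if i >= len(template) or hashlib.sha1(b).hexdigest() != template[i]:
            delta[i] = zlib.compress(b)  # non-lossy compression, as described
    return delta

template = bytes(16 * BLOCK)      # template checkpoint: 16 zero-filled blocks
state = bytearray(template)
state[5 * BLOCK] = 0xFF           # the node's state diverges in one block
delta = delta_checkpoint(bytes(state), template)
print(len(delta))                 # → 1 (only the changed block is saved)
```

    Restoring a node's checkpoint is the reverse: start from the template and overwrite only the indices present in the delta.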

  14. The engineering design integration (EDIN) system. [digital computer program complex (United States)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.


    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  15. An E-learning System based on Affective Computing (United States)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning systems have become very popular, but current e-learning systems cannot instruct students effectively since they do not consider the emotional state in the context of instruction. The theory of "affective computing" can address this problem: it allows the computer's intelligence to be more than purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with a teaching style chosen in consideration of his or her personality traits. A "man-to-man" learning environment is built to simulate the traditional classroom's pedagogy in the system.

  16. A review of residential computer oriented energy control systems

    Energy Technology Data Exchange (ETDEWEB)

    North, Greg


    The purpose of this report is to bring together as much information on Residential Computer Oriented Energy Control Systems as possible within a single document. This report identifies the main elements of the system and is intended to provide many technical options for the design and implementation of various energy related services.

  17. Demonstrating Operating System Principles via Computer Forensics Exercises (United States)

    Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram


    We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…

  18. [Filing and processing systems of ultrasonic images in personal computers]. (United States)

    Filatov, I A; Bakhtin, D A; Orlov, A V


    The paper covers the software pattern for the ultrasonic image filing and processing system. The system records images on a computer display in real time or still, processes them by local filtration techniques, makes different measurements and stores the findings in the graphic database. It is stressed that the database should be implemented as a network version.

  19. Development of an Intelligent Instruction System for Mathematical Computation (United States)

    Kim, Du Gyu; Lee, Jaemu


    In this paper, we propose the development of a web-based, intelligent instruction system to help elementary school students for mathematical computation. We concentrate on the intelligence facilities which support diagnosis and advice. The existing web-based instruction systems merely give information on whether the learners' replies are…

  20. Snore related signals processing in a private cloud computing system. (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan


    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications in both academic and industrial fields, and it has the potential to support a broad range of work in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then conducted comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  1. Optical character recognition systems for different languages with soft computing

    CERN Document Server

    Chaudhuri, Arindam; Badelia, Pratixa; K Ghosh, Soumya


    The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested using real texts written in different languages, such as English, French, German, Latin, Hindi and Gujrati, which have been extracted from publicly available datasets. The simulation studies, which are reported in detail here, show that soft-computing based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will also be useful for professionals in computer vision and image processing dealing with different issues related to optical character recognition.

  2. Experimental quantum computing to solve systems of linear equations. (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei


    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
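
    The classical reference computation against which the quantum experiment's result can be checked is a direct 2×2 solve. A minimal sketch follows; the matrix is illustrative, not the one used in the experiment.

```python
# Classical check for the task the quantum algorithm performs: solve A x = b.
# For a 2x2 system this is trivial; the quantum advantage claimed for the
# algorithm only appears as the number of variables N grows large.
import numpy as np

A = np.array([[1.5, 0.5],
              [0.5, 1.5]])  # illustrative 2x2 system matrix
b = np.array([1.0, 0.0])
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # → True
```

    The best classical algorithms scale at least linearly in N, while the quantum algorithm realized here promises a runtime of order log(N), which is the source of the claimed exponential speedup.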

  3. Hybrid soft computing systems for electromyographic signals analysis: a review (United States)


    The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing systems (HSCS), the integration of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis. PMID:24490979

  4. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej


    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence, and this book is the most comprehensive study of the field. It contains a collection of 79 carefully selected articles contributed by experts in pattern recognition, and it reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Features, learning, and classifiers; Biometrics; Data Stream Classification and Big Data Analytics; Image processing and computer vision; Medical applications; Applications; RGB-D perception: recent developments and applications. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers include researchers as well as students of computer science, artificial intelligence or robotics.

  5. 8th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzynski, Marek; Wozniak, Michał; Zolnierek, Andrzej


    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence, and this book is the most comprehensive study of the field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition, and it reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Biometrics; Data Stream Classification and Big Data Analytics; Features, learning, and classifiers; Image processing and computer vision; Medical applications; Miscellaneous applications; Pattern recognition and image processing in robotics; Speech and word recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers include researchers as well as students of computer science, artificial intelligence or robotics.


    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S


    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implications for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  7. Radiation Tolerant, FPGA-Based SmallSat Computer System (United States)

    LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew


    The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.

  8. CONATION: English Command Input/Output System for Computers


    Sharma, Kamlesh; Dr. T. V. Prasad


    In this information technology age, a convenient and user-friendly interface is required to operate a computer system at a very fast rate. For human beings, speech is a natural mode of communication and has the potential to be a fast and convenient mode of interaction with a computer. Speech recognition will therefore play an important role in bringing technology to users, given the need in this era to access information within seconds. This paper describes the design and development of speaker independ...

  9. Software Requirements for a System to Compute Mean Failure Cost

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia; Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL; Mili, Ali [New Jersey Insitute of Technology


    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to incur. We also demonstrated this infrastructure through the results of security breakdowns for the e-commerce case. In this paper, we illustrate this infrastructure with an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.

  10. Stochastic Bayesian Computation for Autonomous Robot Sensorimotor System


    Faix, Marvin; Lobo, Jorge; Laurent, Raphael; Vaufreydaz, Dominique; Mazer, Emmanuel


    This paper presents a stochastic computing implementation of a Bayesian sensorimotor system that performs obstacle avoidance for an autonomous robot. In a previous work we have shown that we are able to automatically design a probabilistic machine which computes inferences on a Bayesian model using stochastic arithmetic. We start from a high-level Bayesian model description, then our compiler generates an electronic circuit, corresponding to the probabilistic inference, operat...
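
    The stochastic arithmetic referred to in this abstract represents a probability as the density of 1s in a random bitstream, so a single AND gate multiplies two probabilities. A minimal software sketch of that encoding follows (illustrative only, not the compiler described in the paper).

```python
# Stochastic arithmetic: encode probability p as a random bitstream whose
# fraction of 1s is p; ANDing two independent streams estimates p1 * p2.
import random

random.seed(42)
N = 100_000  # stream length trades estimation precision for time/hardware

def stream(p):
    """Random bitstream encoding probability p."""
    return [random.random() < p for _ in range(N)]

def estimate(bits):
    """Decode a bitstream back to a probability estimate."""
    return sum(bits) / len(bits)

a, b = stream(0.6), stream(0.5)
product = [x and y for x, y in zip(a, b)]  # the "AND gate"
print(round(estimate(product), 2))  # close to 0.6 * 0.5 = 0.3
```

    This is why such machines map well onto very simple electronic circuits: probabilistic products and normalizations become ordinary logic gates operating on bitstreams.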

  11. Optical interconnection networks for high-performance computing systems. (United States)

    Biberman, Aleksandr; Bergman, Keren


    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  12. Architectural requirements for the Red Storm computing system.

    Energy Technology Data Exchange (ETDEWEB)

    Camp, William J.; Tomkins, James Lee


    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  13. A data management system to enable urgent natural disaster computing (United States)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton


    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision making process. Getting the data to the required resources is a critical requirement for enabling the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resources. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers will enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associated 2 types of deadlines [2] with an urgent computing system. Soft-hard deadline: Missing a soft-firm deadline will render the computation less useful resulting in a cost that can have severe
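
    A deadline check of the kind the scheduler manager performs might look like the following sketch; all resource names, bandwidths and setup costs here are hypothetical, not taken from the described platform.

```python
# Hypothetical deadline-aware resource selection for an urgent data transfer:
# estimate each resource's completion time from its setup cost and measured
# bandwidth, then pick the fastest option that still meets the deadline.

def pick_resource(data_bytes, deadline_s, resources):
    """resources: list of (name, bandwidth_bytes_per_s, setup_s) tuples."""
    feasible = []
    for name, bandwidth, setup in resources:
        eta = setup + data_bytes / bandwidth
        if eta <= deadline_s:
            feasible.append((eta, name))
    if not feasible:
        return None  # deadline cannot be met: escalate or re-plan the request
    return min(feasible)[1]

resources = [
    ("gridftp-siteA", 120e6, 5.0),  # fast link with some setup cost
    ("scp-siteB", 20e6, 1.0),       # slow link that starts quickly
]
print(pick_resource(10e9, 120.0, resources))  # → gridftp-siteA
```

    Returning None corresponds to the situation the abstract highlights: when no resource can meet the hard deadline, the urgent activity must be re-planned rather than started late.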

  14. Interactive Rhythm Learning System by Combining Tablet Computers and Robots

    Directory of Open Access Journals (Sweden)

    Chien-Hsing Chou


    Full Text Available This study proposes a percussion learning device that combines tablet computers and robots. This device comprises two systems: a rhythm teaching system, in which users can compose and practice rhythms by using a tablet computer, and a robot performance system. First, teachers compose the rhythm training contents on the tablet computer. Then, the learners practice these percussion exercises by using the tablet computer and a small drum set. The teaching system provides a new and user-friendly score editing interface for composing a rhythm exercise. It also provides a rhythm rating function to facilitate percussion training for children and improve the stability of rhythmic beating. To encourage children to practice percussion exercises, the robot performance system is used to interact with the children; this system can perform percussion exercises for students to listen to and then help them practice the exercises. This interaction enhances children’s interest and motivation to learn and practice rhythm exercises. The results of the experimental course and field trials reveal that the proposed system not only increases students’ interest and efficiency in learning but also helps them understand musical rhythms through interaction and composing simple rhythms.


    Directory of Open Access Journals (Sweden)

    MILDEOVÁ, Stanislava


    Full Text Available When seeking solutions to current problems in the field of computer science, and in other fields, we encounter situations where traditional approaches no longer bring the desired results. Our cognitive skills also limit the implementation of reliable mental simulation within the basic set of relations. The world around us is becoming more complex and mutually interdependent, and this is reflected in the demands on computer support. Thus, education and science today, in computer science and in all other disciplines and areas of life, need to address the issue of a paradigm shift, which is generally accepted by experts. The goal of the paper is to present systems thinking, which facilitates and extends the understanding of the world through relations and linkages. Moreover, the paper introduces the essence of systems thinking and the possibilities of achieving a mental shift toward systems thinking skills. At the same time, the link between systems thinking and functional literacy is presented. To assess the level of systems thinking, we adopted the “Bathtub Test” from the variety of systems thinking tests that allow people to assess their understanding of basic systemic concepts. University students (potential information managers) were the subjects of an examination of systems thinking that was conducted over a longer time period and whose aim was to determine the status of their systems thinking. The paper demonstrates that some pedagogical concepts and activities, in our case the subject of System Dynamics, lead to the appropriate integration of systems thinking in education. There is some evidence that basic knowledge of system dynamics and systems thinking principles will affect students, and their thinking will contribute to an improved approach to solving problems of computer science both in theory and practice.

  16. A computer aided engineering tool for ECLS systems (United States)

    Bangham, Michal E.; Reuter, James L.


    The Computer-Aided Systems Engineering and Analysis tool used by NASA for environmental control and life support system design studies is capable of simulating atmospheric revitalization systems, water recovery and management systems, and single-phase active thermal control systems. The designer/analyst interface is graphics-based, and allows the designer to build a model by constructing a schematic of the system under consideration. Data management functions are performed, and the program is translated into a format that is compatible with the solution routines.

  17. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang


    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Contents: Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics. For all readers interested in developing programming habits in the context of doing phy...

  18. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    Energy Technology Data Exchange (ETDEWEB)

    Spann, J.; VanArsdall, P.; Bliss, E.


    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System, WBS 1.5.1, which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in the ICCS (WBS 1.5) document directly above.

  19. Direct computation of optimal control of forced linear system (United States)

    Utku, S.; Kuo, C.-P.; Salama, M.


    It is known that the optimal control of a forced linear system may be reduced to the tracking of the unforced system. The solution of the tracking problem is available via the costate variables method, but this procedure is computationally expensive for large-order systems: it requires the solution of a matrix Riccati equation and of two final value problems. An alternate approach is outlined for the direct computation of the optimal control. Instead of the Riccati equation, a matrix Volterra integral equation must be solved. For this purpose two computational schemes are described, and an illustrative example is given. The results compare favorably with the classical solution. This alternative approach may be especially useful for the control of large space structures, where large-order models are required.
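    The Riccati-based route that the paper seeks to avoid can be illustrated on a minimal example. The scalar system (A = 0, B = 1, Q = R = 1) is an illustrative assumption, not the paper's example; the finite-horizon Riccati ODE is integrated backward from a zero terminal condition, and for a long horizon the gain approaches the known infinite-horizon solution P = 1, K = 1.

```python
# Backward (time-to-go) Euler integration of the scalar Riccati ODE
#   dP/ds = Q + A'P + PA - P B R^{-1} B' P,  P(s=0) = 0,
# for the hypothetical system A=0, B=1, Q=1, R=1.

A, B, Q, R = 0.0, 1.0, 1.0, 1.0
P, dt = 0.0, 0.01              # terminal condition P(T) = 0
for _ in range(2000):          # integrate to time-to-go s = 20
    dP = Q + A * P + P * A - P * B * (1.0 / R) * B * P
    P += dt * dP

K = (1.0 / R) * B * P          # optimal state feedback u = -K x
```

    For matrix-valued systems the same loop applies with NumPy arrays and transposes; the Volterra approach in the paper replaces this Riccati solve entirely.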

  20. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin


    Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for complex technical system behavior. Conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on fuzzy theories by physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting education, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer instruction experiments with the robot device “LEGO MINDSTORMS NXT”, equipped with ultrasonic distance, touch and light sensors.

  1. Cloud computing framework for a hydro information system


    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri


    The cloud computing framework of the hydro information system is based on three closely related concepts: cloud, service-oriented architecture and web geographic information system. The architecture of the prototype hydro information system contains three tiers. The bottom tier is a distributed relational database (PostgreSQL, PostGIS) that stores geospatial and other types of data. The middle tier is a GeoServer web application that manages and presents geospatial maps and ...

  2. Computational Fluid Dynamic Approach for Biological System Modeling


    Huang, Weidong; Wu, Chundu; Xiao, Bingjia; Xia, Weidong


    Various biological system models have been proposed in systems biology, based on the complex reaction kinetics of their various components. These models are often not practical because the kinetic information is lacking. In this paper, it is found that enzymatic and multi-order reaction rates are often controlled by the transport of the reactants in biological systems. A Computational Fluid Dynamic (CFD) approach, which is based on transport of the components and kinetics of b...

  3. Distributed parallel computing in stochastic modeling of groundwater systems. (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen


    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
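    The batch-processing pattern above, many independent stochastic realizations farmed out to a worker pool, can be sketched in a few lines. The toy travel-time model with a lognormal hydraulic conductivity is an illustrative assumption standing in for the MODFLOW runs described in the abstract.

```python
from concurrent.futures import ThreadPoolExecutor
import math
import random

# Toy stand-in for one stochastic groundwater realization: draw a
# lognormal hydraulic conductivity and compute a travel time to a well.
def realization(seed, distance_m=100.0):
    rng = random.Random(seed)            # seed per realization: reproducible
    k = math.exp(rng.gauss(0.0, 1.0))    # lognormal conductivity
    velocity = 0.01 * k                  # hypothetical Darcy-like velocity, m/day
    return distance_m / velocity         # travel time in days

# Dispatch 500 realizations to a pool of workers, as the JPPF-based
# system dispatches MODFLOW jobs to cluster nodes.
with ThreadPoolExecutor(max_workers=10) as pool:
    times = list(pool.map(realization, range(500)))

median_days = sorted(times)[len(times) // 2]
```

    Because realizations share nothing, the speedup is limited mainly by the per-job model run time, which is why the cluster in the study approaches the ideal ratio.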

  4. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.


    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing ''nodes'' sit are connected via a high speed ''Branch Bus'' to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  5. Cloud Computing in the Curricula of Schools of Computer Science and Information Systems (United States)

    Lawler, James P.


    The cloud continues to be a developing area of information systems. Evangelistic literature in the practitioner field indicates benefit for business firms but disruption for technology departments of the firms. Though the cloud currently is immature in methodology, this study defines a model program by which computer science and information…

  6. A completely open source based computing system for computer generation of Fourier holograms (United States)

    Jackin, B. J.; Palanisamy, P. K.


    Computer generated holograms are usually generated using commercial software like MATLAB, MATHCAD, Mathematica, etc. This work is an approach to doing the same using freely distributed open source packages and an open source operating system. A Fourier hologram is generated using this method and tested for simulated and optical reconstruction. The reconstructed images are in good agreement with the objects chosen. The significance of using such a system is also discussed.
    Program summary
    Program title: FHOLO
    Catalogue identifier: AEDS_v1_0
    Program summary URL:
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence
    No. of lines in distributed program, including test data, etc.: 176 336
    No. of bytes in distributed program, including test data, etc.: 4 294 872
    Distribution format: tar.gz
    Programming language: C++
    Computer: any X86 microcomputer
    Operating system: Linux (Debian Etch)
    RAM: 512 MB
    Classification: 18
    Nature of problem: to generate a Fourier hologram on a microcomputer using only an open source operating system and packages.
    Running time: depends on the matrix size; 10 s for a 256×256 matrix.
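    The core of a computer-generated Fourier hologram can be sketched with nothing but an open-source FFT. The original FHOLO program is in C++; this NumPy version is an illustrative translation, and the square object and tilted plane-wave reference are assumptions chosen for simplicity.

```python
import numpy as np

n = 256
obj = np.zeros((n, n))
obj[96:160, 96:160] = 1.0                       # simple square object

# Object spectrum: the hologram plane of a Fourier hologram sits in the
# Fourier domain of the object.
spectrum = np.fft.fftshift(np.fft.fft2(obj))

# Off-axis plane-wave reference so the reconstruction terms separate.
yy, xx = np.mgrid[0:n, 0:n]
reference = np.exp(2j * np.pi * (xx + yy) / 8)

# Recorded intensity pattern (real, non-negative).
hologram = np.abs(spectrum + reference) ** 2

# Simulated reconstruction: a single inverse FFT recovers shifted
# copies of the object plus an autocorrelation term.
recon = np.fft.ifft2(hologram)
```

    Writing `hologram` out as an 8-bit image, after normalization, gives a pattern that can be printed and reconstructed optically, which is what the paper tests.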

  7. Microeconomic theory and computation applying the maxima open-source computer algebra system

    CERN Document Server

    Hammock, Michael R


    This book provides a step-by-step tutorial for using Maxima, an open-source multi-platform computer algebra system, to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques.

  8. Computer systems and software description for gas characterization system

    Energy Technology Data Exchange (ETDEWEB)

    Vo, C.V.


    The Gas Characterization System Project was commissioned by TWRS management, with funding from TWRS Safety, on December 1, 1994. The project objective is to establish an instrumentation system to measure flammable gas concentrations in the vapor space of selected watch list tanks, starting with tanks AN-105 and AW-101. Data collected by this system are meant to support first tank characterization, then tank safety. The system design is premised upon characterization rather than mitigation; therefore redundancy is not required.

  9. A Synthesized Framework for Formal Verification of Computing Systems

    Directory of Open Access Journals (Sweden)

    Nikola Bogunovic


    Full Text Available The design process of computing systems has gradually evolved to a level that encompasses formal verification techniques. However, the integration of formal verification techniques into a methodical design procedure has many inherent miscomprehensions and problems. The paper explicates the discrepancy between the real system implementation and the abstracted model that is actually used in the formal verification procedure. Particular attention is paid to the seamless integration of all phases of the verification procedure, encompassing the definition of the specification language and the denotation and execution of the conformance relation between the abstracted model and its intended behavior. The concealed obstacles are exposed, computationally expensive steps are identified and possible improvements are proposed.


    CERN Multimedia

    I. Fisk


    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. [Figure 3: number of events per month (data).] In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  11. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J


    This textbook presents basic and advanced computational physics in a very didactic style, with clear and simple mathematical descriptions of many of the most important algorithms and techniques used in computational physics. The first part of the book discusses the basic numerical methods; a large number of exercises and computer experiments allow the reader to study the properties of these methods. The second part concentrates on simulation of classical and quantum systems. It uses a rather general concept for the equation of motion which can be applied to ordinary and partial differential equations. Several classes of integration methods are discussed, including not only the standard Euler and Runge-Kutta methods but also multistep methods and the class of Verlet methods, which is introduced by studying the motion in Liouville space. Besides the classical methods, inverse interpolation is discussed, together with the p...
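    The Verlet family of integrators mentioned above is the workhorse of such simulations because it conserves energy over long runs. As an illustrative sketch (not taken from the book, whose examples are in its own framework), here is velocity Verlet applied to a harmonic oscillator with m = k = 1:

```python
# Velocity-Verlet integration of a harmonic oscillator (m = k = 1).
# Parameters and the force law are illustrative assumptions.

def accel(x):
    return -x                  # F = -kx with k = 1

x, v, dt = 1.0, 0.0, 0.01      # start at rest at x = 1: energy E = 0.5
a = accel(x)
for _ in range(10000):         # integrate to t = 100 (many periods)
    x += v * dt + 0.5 * a * dt * dt
    a_new = accel(x)
    v += 0.5 * (a + a_new) * dt
    a = a_new

energy = 0.5 * v * v + 0.5 * x * x   # stays near 0.5, drift only O(dt^2)
```

    The same loop structure carries over to many-particle systems, where `accel` sums pairwise forces.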

  12. Development of a proton Computed Tomography Detector System

    CERN Document Server

    Naimuddin, Md; Blazey, G; Boi, S; Dyshkant, A; Erdelyi, B; Hedin, D; Johnson, E; Krider, J; Rukalin, V; Uzunyan, S A; Zutshi, V; Fordt, R; Sellberg, G; Rauch, J E; Roman, M; Rubinov, P; Wilson, P


    Computed tomography is one of the most promising new methods to image abnormal tissues inside the human body. Tomography is also used to position the patient accurately before radiation therapy. Hadron therapy for treating cancer has become one of the most advantageous and safe options. In order to fully utilize the advantages of hadron therapy, it is also necessary to perform radiography with hadrons. In this paper we present the development of a proton computed tomography system. Our second-generation proton tomography system consists of two upstream and two downstream trackers, made up of fibers as the active material, and a range detector consisting of plastic scintillators. We present details of the detector system, readout electronics, and data acquisition system as well as the commissioning of the entire system. We also present preliminary results from the test beam of the range detector.

  13. Development of a proton Computed Tomography Detector System

    Energy Technology Data Exchange (ETDEWEB)

    Naimuddin, Md. [Delhi U.; Coutrakon, G. [Northern Illinois U.; Blazey, G. [Northern Illinois U.; Boi, S. [Northern Illinois U.; Dyshkant, A. [Northern Illinois U.; Erdelyi, B. [Northern Illinois U.; Hedin, D. [Northern Illinois U.; Johnson, E. [Northern Illinois U.; Krider, J. [Northern Illinois U.; Rukalin, V. [Northern Illinois U.; Uzunyan, S. A. [Northern Illinois U.; Zutshi, V. [Northern Illinois U.; Fordt, R. [Fermilab; Sellberg, G. [Fermilab; Rauch, J. E. [Fermilab; Roman, M. [Fermilab; Rubinov, P. [Fermilab; Wilson, P. [Fermilab


    Computed tomography is one of the most promising new methods to image abnormal tissues inside the human body. Tomography is also used to position the patient accurately before radiation therapy. Hadron therapy for treating cancer has become one of the most advantageous and safe options. In order to fully utilize the advantages of hadron therapy, it is also necessary to perform radiography with hadrons. In this paper we present the development of a proton computed tomography system. Our second-generation proton tomography system consists of two upstream and two downstream trackers, made up of fibers as the active material, and a range detector consisting of plastic scintillators. We present details of the detector system, readout electronics, and data acquisition system as well as the commissioning of the entire system. We also present preliminary results from the test beam of the range detector.

  14. Computer modeling and software development for unsteady chemical technological systems

    Directory of Open Access Journals (Sweden)

    Dolganov Igor


    Full Text Available The paper deals with mathematical modeling of transient conditions to create a computer system that can reflect the behavior of real industrial plants. Such systems can answer complex and pressing questions about the stability of industrial facilities and the time spent on transients passing through unstable regimes. In addition, such systems have a kind of intelligence and predictive ability, as they solve systems of partial differential and integral equations based on the physical and chemical nature of the processes occurring in the devices of technological systems.

  15. Software Safety Risk in Legacy Safety-Critical Computer Systems (United States)

    Hill, Janice; Baggs, Rhoda


    Safety-critical computer systems must be engineered to meet system and software safety requirements. For legacy safety-critical computer systems, software safety requirements may not have been formally specified during development. When process-oriented software safety requirements are levied on a legacy system after the fact, where software development artifacts do not exist or are incomplete, the question becomes 'how can this be done?' The risks associated with only meeting certain software safety requirements in a legacy safety-critical computer system must be addressed should such systems be selected as candidates for reuse. This paper proposes a method for formally producing a software safety risk assessment that provides measurements of software safety for legacy systems which may or may not have the suite of software engineering documentation that is now normally required. It relies upon the NASA Software Safety Standard, risk assessment methods based upon the Taxonomy-Based Questionnaire, and the application of reverse engineering CASE tools to produce original design documents for legacy systems.

  16. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis (United States)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a cluster architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the SLURM software resource manager and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs able to reach a computing power of 300 gigaflops (300×10^9 floating point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage in UFS configuration, plus 6 TB for the users' area. AVES was designed and built to solve growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB and due to increase every year). The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the workload over the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained with a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of on-line data storing. The AVES software package consists of about 50 specific programs. The whole computing time, compared to that provided by a personal computer with a single processor, has thus been improved by up to a factor of 70.
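    The trick of turning a serial analysis package into N independent jobs, as AVES does with OSA, amounts to partitioning the work items before submission. The science-window identifiers and the round-robin policy below are illustrative assumptions; the AVES wrapper programs drive OSA through SLURM rather than this sketch.

```python
# Split one large analysis into N jobs, one per core, by distributing
# work items round-robin. Item names are hypothetical.

def split_jobs(work_items, n_cores):
    """Return n_cores lists of work items, balanced to within one item."""
    jobs = [[] for _ in range(n_cores)]
    for i, item in enumerate(work_items):
        jobs[i % n_cores].append(item)
    return jobs

science_windows = [f"scw_{i:04d}" for i in range(100)]
jobs = split_jobs(science_windows, n_cores=8)
```

    Each sub-list would then become one batch job; because the items are independent, the combined result matches a genuinely parallel run, which is the equivalence the abstract claims.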

  17. Multi-memetic Mind Evolutionary Computation Algorithm for Loosely Coupled Systems of Desktop Computers

    Directory of Open Access Journals (Sweden)

    M. K. Sakharov


    Full Text Available This paper deals with the development and software implementation of a hybrid multi-memetic algorithm for distributed computing systems. The main algorithm is based on a modification of the MEC algorithm proposed by the authors. The multi-memetic algorithm utilizes three different local optimization methods. The software implementation was developed using MPI for Python and tested on a grid network made of twenty desktop computers. The performance of the proposed algorithm and its software implementation was investigated using multi-dimensional multi-modal benchmark functions from CEC’14.

  18. An Expert Fitness Diagnosis System Based on Elastic Cloud Computing

    Directory of Open Access Journals (Sweden)

    Kevin C. Tseng


    Full Text Available This paper presents an expert diagnosis system based on cloud computing. It classifies a user’s fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user’s physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to capture tightly the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service.
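    The elastic allocation idea, smoothing past request counts with an exponential moving average and then provisioning against Poisson-distributed demand, can be sketched as follows. The smoothing factor, per-server capacity and 95% coverage target are illustrative assumptions, not the paper's tuned values.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by direct summation."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def servers_needed(history, alpha=0.3, per_server=5, target=0.95):
    """Smooth past request counts with an EMA, then return the smallest
    server count whose total capacity covers Poisson demand with
    probability >= target."""
    lam = history[0]
    for x in history[1:]:                 # exponential moving average
        lam = alpha * x + (1 - alpha) * lam
    n = 1
    while poisson_cdf(n * per_server, lam) < target:
        n += 1
    return n, lam

n, lam = servers_needed([8, 12, 9, 11, 10])
```

    Raising `target` trades idle capacity for a lower chance of queueing, which is the quality-of-service knob the abstract alludes to.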

  19. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim


    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  20. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim


    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  1. Ensuring Data Consistency Over CMS Distributed Computing System

    CERN Document Server

    Rossman, Paul


    CMS utilizes a distributed infrastructure of computing centers to custodially store data, to provide organized processing resources, and to provide analysis computing resources for users. Integrated over the whole system, even in the first year of data taking, the available disk storage approaches 10 petabytes of space. Maintaining consistency between the data bookkeeping, the data transfer system, and physical storage is an interesting technical and operations challenge. In this paper we will discuss the CMS effort to ensure that data is consistently available at all computing centers. We will discuss the technical tools that monitor the consistency of the catalogs and the physical storage as well as the operations model used to find and solve inconsistencies.

  2. An expert fitness diagnosis system based on elastic cloud computing. (United States)

    Tseng, Kevin C; Wu, Chia-Chuan


    This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to capture tightly the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service.

  3. Evaluation of computer-aided detection and diagnosis systems. (United States)

    Petrick, Nicholas; Sahiner, Berkman; Armato, Samuel G; Bert, Alberto; Correale, Loredana; Delsanto, Silvia; Freedman, Matthew T; Fryd, David; Gur, David; Hadjiiski, Lubomir; Huo, Zhimin; Jiang, Yulei; Morra, Lia; Paquerault, Sophie; Raykar, Vikas; Samuelson, Frank; Summers, Ronald M; Tourassi, Georgia; Yoshida, Hiroyuki; Zheng, Bin; Zhou, Chuan; Chan, Heang-Ping


    Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as a decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and "best practices" for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. Proper assessment of CAD system performance is expected to increase the understanding of a CAD system's effectiveness and limitations, which is expected to stimulate further research and development efforts on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in

  4. A new taxonomy for distributed computer systems based upon operating system structure (United States)

    Foudriat, E. C.


    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared is examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  5. Supporting Privacy of Computations in Mobile Big Data Systems

    Directory of Open Access Journals (Sweden)

    Sriram Nandha Premnath


    Full Text Available Cloud computing systems enable clients to rent and share computing resources of third-party platforms, and have gained widespread use in recent years. Numerous varieties of mobile, small-scale devices such as smartphones, e-health devices, etc., across users, are connected to one another through the massive internetwork of vastly powerful servers on the cloud. While mobile devices store “private information” of users such as location, payment, health data, etc., they may also contribute “semi-public information” (which may include crowdsourced data such as transit, traffic, nearby points of interest, etc.) for data analytics. In such a scenario, a mobile device may seek to obtain the result of a computation, which may depend on its private inputs, crowdsourced data from other mobile devices, and/or any “public inputs” from other servers on the Internet. We demonstrate a new method of delegating real-world computations of resource-constrained mobile clients using an encrypted program known as the garbled circuit. Using the garbled version of a mobile client’s inputs, a server in the cloud executes the garbled circuit and returns the resulting garbled outputs. Our system assures privacy of the mobile client’s input data and output of the computation, and also enables the client to verify that the evaluator actually performed the computation. We analyze the complexity of our system. We measure the time taken to construct the garbled circuit as well as to evaluate it for a varying number of servers. Using real-world data, we evaluate our system for a practical, privacy-preserving search application that locates the nearest point of interest for the mobile client to demonstrate feasibility.
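The garbled-circuit protocol summarized above can be illustrated at the scale of a single gate. The sketch below is our illustration, not code from the paper: the garbler picks random labels for each wire value, encrypts the output labels under hashes of the matching input-label pairs, and the evaluator, holding exactly one label per input wire, recovers one output label without learning any underlying bit.

```python
import hashlib
import random
import secrets

LABEL = 16  # bytes per wire label


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def pad(la, lb):
    # hash of the two input labels, used as a one-time pad (32 bytes)
    return hashlib.sha256(la + lb).digest()


def garble_and():
    """Garble a single AND gate; returns the table and all wire labels."""
    wa = [secrets.token_bytes(LABEL) for _ in range(2)]  # input wire a
    wb = [secrets.token_bytes(LABEL) for _ in range(2)]  # input wire b
    wc = [secrets.token_bytes(LABEL) for _ in range(2)]  # output wire c
    table = []
    for a in (0, 1):
        for b in (0, 1):
            # encrypt the correct output label, appending zero bytes so
            # the evaluator can recognise a successful decryption
            pt = wc[a & b] + bytes(LABEL)
            table.append(xor(pad(wa[a], wb[b]), pt))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return table, wa, wb, wc


def evaluate(table, la, lb):
    """Evaluator sees one label per input wire, never the plaintext bits."""
    for row in table:
        pt = xor(pad(la, lb), row)
        if pt[LABEL:] == bytes(LABEL):  # trailing zeros: row decrypted
            return pt[:LABEL]
    raise ValueError("no row decrypted")
```

In the paper's setting the client additionally garbles whole programs and checks the returned output labels against the ones it generated, which is what makes the evaluation verifiable.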

  6. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. 
In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  7. Automation of the CFD Process on Distributed Computing Systems (United States)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.


    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
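The fallback first-in-first-out queue described above can be sketched in a few lines. This is a minimal illustration only; the job names and callable interface are our simplification, not the actual UNIX shell/Perl interface of the ADTT scripts.

```python
from collections import deque


class FifoJobQueue:
    """Minimal first-in-first-out scheduler, standing in for the script
    system's own queue on hosts without queueing software."""

    def __init__(self):
        self._pending = deque()
        self.finished = []

    def submit(self, name, job):
        # job is any callable; real jobs would launch a flow solver
        self._pending.append((name, job))

    def run_all(self):
        while self._pending:
            name, job = self._pending.popleft()  # strictly FIFO
            self.finished.append((name, job()))
        return self.finished
```

Jobs run in submission order, which is the one property the script system needed when no site-wide queueing structure was present.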

  8. Intelligent computer systems in engineering design principles and applications

    CERN Document Server

    Sunnersjo, Staffan


    This introductory book discusses how to plan and build useful, reliable, maintainable and cost efficient computer systems for automated engineering design. The book takes a user perspective and seeks to bridge the gap between texts on principles of computer science and the user manuals for commercial design automation software. The approach taken is top-down, following the path from definition of the design task and clarification of the relevant design knowledge to the development of an operational system well adapted for its purpose. This introductory text for the practicing engineer working in industry covers most vital aspects of planning such a system. Experiences from applications of automated design systems in practice are reviewed based on a large number of real, industrial cases. The principles behind the most popular methods in design automation are presented with sufficient rigour to give the user confidence in applying them on real industrial problems. This book is also suited for a half semester c...

  9. Computer simulation of confined and flexoelectric liquid crystalline systems

    CERN Document Server

    Barmes, F


    In this Thesis, confined and flexoelectric liquid crystal systems have been studied using molecular computer simulations. The aim of this work was to provide a molecular model of a bistable display cell in which switching is induced through the application of directional electric field pulses. In the first part of this Thesis, the study of confined systems of liquid crystalline particles has been addressed. Computation of the anchoring phase diagrams for three different surface interaction models showed that the hard needle wall and rod-surface potentials induce both planar and homeotropic alignment separated by a bistability region, this being stronger and wider for the rod-surface variant. The results obtained using the rod-sphere surface model, in contrast, showed that tilted surface arrangements can be induced by surface absorption mechanisms. Equivalent studies of hybrid anchored systems showed that a bend director structure can be obtained in a slab with monostable homeotropic anchoring at the...

  10. Computing Architecture of the ALICE Detector Control System

    CERN Document Server

    Augustinus, A; Moreno, A; Kurepin, A N; De Cataldo, G; Pinazza, O; Rosinský, P; Lechman, M; Jirdén, L S


    The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network-attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, mechanisms for handling the large data amounts and information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing for simple operation of the experiment. At the same time, typical operator tasks, such as trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.

  11. The Design of a System Architecture for Mobile Multimedia Computers

    NARCIS (Netherlands)

    Havinga, Paul J.M.


    This chapter discusses the system architecture of a portable computer, called Mobile Digital Companion, which provides support for handling multimedia applications energy efficiently. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile

  12. gTybalt - a free computer algebra system


    Weinzierl, Stefan


    This article documents the free computer algebra system "gTybalt". The program is built on top of other packages, among others GiNaC, TeXmacs and Root. It offers the possibility of interactive symbolic calculations within the C++ programming language. Mathematical formulae are visualized using TeX fonts.

  13. Design of Computer Fault Diagnosis and Troubleshooting System ...

    African Journals Online (AJOL)

    Detection of personal computer (PC) hardware problems is a complicated process which demands high level of knowledge and skills. Depending on the know-how of the technician, a simple problem could take hours or even days to solve. Our aim is to develop an expert system for troubleshooting and diagnosing personal ...

  14. The Plato System: Using the Computer to Teach Russian (United States)

    Curtin, Constance; And Others


    The uses of a computer-based instructional system known as PLATO in teaching Russian, both in audio-lingual and reading-translation courses, at the University of Illinois are described. Examples of a variety of drills are given. An evaluation of the method is made. (RM)

  15. Report on DARPA Workshop on Self Aware Computer Systems (United States)


    Computer Systems is an area of basic research, and we are only in the initial stages of our understanding of what it means: what it means to be self...two main current areas of research are relevant to self-awareness. The first, deriving primarily from the work of Piaget, focuses on the

  16. NRC Class 1E Digital Computer System Guidelines (United States)


    then be "proved" that the vessel cannot be at high temperature state and normal temperature state at the same time. The question whether high, normal...3 of Dependability of critical computer systems. Elsevier Applied Science, 1988. [18] J. W. Duran and S. C. Ntafos, "A report on random testing," in

  17. Dynamics of number systems computation with arbitrary precision

    CERN Document Server

    Kurka, Petr


    This book is a source of valuable and useful information on the topics of dynamics of number systems and scientific computation with arbitrary precision. It is addressed to scholars, scientists and engineers, and graduate students. The treatment is elementary and self-contained with relevance both for theory and applications. The basic prerequisite of the book is linear algebra and matrix calculus.

  18. Computing Preferred Extensions for Argumentation Systems with Sets of Attacking

    DEFF Research Database (Denmark)

    Nielsen, Søren Holbech; Parsons, Simon


    formal properties as that of Dung. One problem posed by Dung's original framework, which was neglected for some time, is how to compute preferred extensions of the argumentation systems. However, in 2001, in a paper by Doutre and Mengin, a procedure was given for enumerating preferred extensions...

  19. Computer decision support system for the stomach cancer diagnosis (United States)

    Polyakov, E. V.; Sukhova, O. G.; Korenevskaya, P. Y.; Ovcharova, V. S.; Kudryavtseva, I. O.; Vlasova, S. V.; Grebennikova, O. P.; Burov, D. A.; Yemelyanova, G. S.; Selchuk, V. Y.


    The paper considers the creation of the computer knowledge base containing the data of histological, cytologic, and clinical researches. The system is focused on improvement of diagnostics quality of stomach cancer - one of the most frequent death causes among oncologic patients.

  20. Survey of computer systems usage in southeastern Nigeria | Opara ...

    African Journals Online (AJOL)

    The shift from industrial age (17th Century) to information age (21st Century) has led to information explosion in this 21st century. Therefore, this has resulted in tremendous advancement in Computer Systems Technology (CST), software engineering and telecommunications. Also, the resultant radical changes as well as ...

  1. A computer-based registration system for geological collections

    NARCIS (Netherlands)

    Germeraad, J.H.; Freudenthal, M.; Boogaard, van den M.; Arps, C.E.S.


    The new computer-based registration system, a project of the National Museum of Geology and Mineralogy in the Netherlands, will considerably increase the accessibility of the Museum collection. This greater access is realized by computerisation of the data in great detail, so that an almost

  2. Computer Algebra Systems: Permitted but Are They Used? (United States)

    Pierce, Robyn; Bardini, Caroline


    Since the 1990s, computer algebra systems (CAS) have been available in Australia as hand-held devices designed for students with the expectation that they will be used in the mathematics classroom. The data discussed in this paper was collected as part of a pilot study that investigated first year university mathematics and statistics students'…

  3. Computer Vision Systems for Hardwood Logs and Lumber (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners


    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...


    Directory of Open Access Journals (Sweden)

    I. Lazurchak


    Full Text Available In this paper an algorithm for constructing N-dimensional recursive Peano scans is presented. Its two-dimensional and three-dimensional realizations are implemented in the computer mathematics system Mathematica 7.0. The issue of reducing a multidimensional space to a one-dimensional one in the calculation of multiple integrals is discussed.
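The idea of reducing a multidimensional index space to one dimension can be illustrated with a simpler space-filling order than the Peano scan: the Morton (Z-order) curve, used here as a plainly-labeled stand-in because the recursive base-3 Peano construction is considerably more involved.

```python
def z_order_index(x, y, bits=8):
    """Interleave the bits of (x, y) into a single 1-D index.

    A stand-in illustration: unlike the Peano scan, consecutive indices
    are not always adjacent cells, but the map is likewise a bijection
    between the 2-D grid and a 1-D range.
    """
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b)      # x bits at even positions
        idx |= ((y >> b) & 1) << (2 * b + 1)  # y bits at odd positions
    return idx
```

Like the Peano scan, this gives a one-dimensional traversal order over a multidimensional grid, which is the property exploited when tabulating multiple integrals along a single index.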

  5. An Intelligent Computer-Based System for Sign Language Tutoring (United States)

    Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy


    A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…

  6. A proposed computer system on Kano model for new product ...

    African Journals Online (AJOL)

    A proposed computer system on Kano model for new product development and innovation aspect: A case study is conducted by an attractive attribute of automobile. ... The success of a new product development process for a desired customer satisfaction is sensitive to the customer needs assessment process. In most ...

  7. The Influence of Computer-Mediated Communication Systems on Community (United States)

    Rockinson-Szapkiw, Amanda J.


    As higher education institutions enter the intense competition of the rapidly growing global marketplace of online education, the leaders within these institutions are challenged to identify factors critical for developing and for maintaining effective online courses. Computer-mediated communication (CMC) systems are considered critical to…

  8. Computational modeling of the BRI1-receptor system

    NARCIS (Netherlands)

    Esse, van G.W.; Harter, K.; Vries, de S.C.


    Computational models are useful tools to help understand signalling pathways in plant cells. A systems biology approach where models and experimental data are combined can provide experimentally verifiable predictions and novel insights. The brassinosteroid insensitive 1 (BRI1) receptor is one of

  9. FPGAs for next gen DAQ and Computing systems at CERN

    CERN Multimedia

    CERN. Geneva


    The need for FPGAs in DAQ is a given, but newer systems needed to be designed to meet the substantial increase in data rate and the challenges that it brings. FPGAs are also power efficient computing devices. So the work also looks at accelerating HEP algorithms and integration of FPGAs with CPUs taking advantage of programming models like OpenCL. Other explorations involved using OpenCL to model a DAQ system.

  10. Computer-Based Training Technology: Overview and System Selection Criteria (United States)


    Mini Included PLATO™ Authoring System; Microcomputer Included SuperPILOT™ Authoring Language; Microcomputer Capable WISE™ Language and System...of Computer Teaching Corp.; MicroTICCIT™ is a trademark of Hazeltine Corp.; PLATO™ is a trademark of Control Data Corp.; SuperPILOT™ is a trademark... Socratic dialogues or meaningful coaching, three types of knowledge must be brought together and coordinated in the lesson. The first type of knowledge

  11. Computer simulation of functioning of elements of security systems (United States)

    Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.


    The article is devoted to issues of development of the informational complex for simulation of functioning of the security system elements. The complex is described from the point of view of main objectives, a design concept and an interrelation of main elements. The proposed conception of the computer simulation provides an opportunity to simulate processes of security system work for training security staff during normal and emergency operation.

  12. Computer Security: a Survey of Methods and Systems


    Yampolskiy, Roman V.; Venu Govindaraju


    In this work we have reviewed studies which survey all aspects of computer security including attackers and attacks, software bugs and viruses as well as different intrusion detection systems and ways to evaluate such systems. The aim was to develop a survey of security related issues which would provide adequate information and advice to newcomers to the field as well as a good reference guide for security professionals.

  13. A personal computer-based, multitasking data acquisition system (United States)

    Bailey, Steven A.


    A multitasking, data acquisition system was written to simultaneously collect meteorological radar and telemetry data from two sources. This system is based on the personal computer architecture. Data is collected via two asynchronous serial ports and is deposited to disk. The system is written in both the C programming language and assembler. It consists of three parts: a multitasking kernel for data collection, a shell with pull down windows as user interface, and a graphics processor for editing data and creating coded messages. An explanation of both system principles and program structure is presented.
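The two-source collection scheme described above can be sketched with threads standing in for the assembler kernel. In this illustration (source names and record format are invented), two producer threads simulate the asynchronous serial ports and a collector drains a shared queue; the real system would deposit each record to disk instead of a list.

```python
import queue
import threading


def source(name, samples, out_q):
    # simulate an asynchronous serial port delivering records
    for i in range(samples):
        out_q.put(f"{name},{i}")
    out_q.put(None)  # end-of-stream marker for this source


def collect(n_sources, out_q):
    records, done = [], 0
    while done < n_sources:
        item = out_q.get()
        if item is None:
            done += 1
        else:
            records.append(item)  # a real DAQ would write to disk here
    return records


q = queue.Queue()
threads = [threading.Thread(target=source, args=(n, 3, q))
           for n in ("radar", "telemetry")]
for t in threads:
    t.start()
records = collect(2, q)
for t in threads:
    t.join()
```

The queue decouples the two producers from the consumer, which is the essence of the multitasking kernel: neither serial port can stall the other.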

  14. SD-CAS: Spin Dynamics by Computer Algebra System. (United States)

    Filip, Xenia; Filip, Claudiu


    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems results from the fact that no matrix representation for spin operators is used in SD-CAS, which gives a fully symbolic character to the performed computations. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and are easily mapped into analytical expressions in terms of spin operator products. For the SD-CAS spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. They provide results in an abstract algebraic form: specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus in the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package, and their functionality was demonstrated on a few illustrative examples. Copyright © 2010 Elsevier Inc. All rights reserved.
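Although SD-CAS deliberately avoids matrix representations, the spin algebra it manipulates symbolically can be checked numerically. The sketch below (our illustration, unrelated to SD-CAS internals) verifies the basic spin-1/2 commutator [Sx, Sy] = i·Sz using plain 2×2 complex matrices with ħ = 1.

```python
def mat_mul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]


def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]


# spin-1/2 operators (Pauli matrices over 2), hbar = 1
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]
Sz = [[0.5, 0], [0, -0.5]]

# commutator [Sx, Sy] = Sx Sy - Sy Sx, expected to equal i * Sz
comm = mat_sub(mat_mul(Sx, Sy), mat_mul(Sy, Sx))
```

A symbolic framework like SD-CAS produces the same identity in operator form, without ever instantiating these matrices.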

  15. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga


    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment, and is going to switch over to the new production system which has been recently installed. The system will be of great help to the exciting physics analyses for coming years. The new computer system includes brand-new blade servers, RAID disks, a tape library system and Ethernet switches. The blade server is DELL PowerEdge 1955 which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3GHz, and a total of 650 servers will be used as compute nodes. Each of the RAID disks is configured to be RAID-6 with 16 Serial ATA HDDs. The equipment as well as the cooling system is placed in a new large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with redundant configuration in a cost-effective way. The next major upgrade will take place in thre...

  16. Human-computer interaction and management information systems

    CERN Document Server

    Galletta, Dennis F


    "Human-Computer Interaction and Management Information Systems: Applications" offers state-of-the-art research by a distinguished set of authors who span the MIS and HCI fields. The original chapters provide authoritative commentaries and in-depth descriptions of research programs that will guide 21st century scholars, graduate students, and industry professionals. Human-Computer Interaction (or Human Factors) in MIS is concerned with the ways humans interact with information, technologies, and tasks, especially in business, managerial, organizational, and cultural contexts. It is distinctiv

  17. Neuromorphic computing applications for network intrusion detection systems (United States)

    Garcia, Raymond C.; Pino, Robinson E.


    What is presented here is a sequence of evolving concepts for network intrusion detection. These concepts start with neuromorphic structures for XOR-based signature matching and conclude with computationally based network intrusion detection system with an autonomous structuring algorithm. There is evidence that neuromorphic computation for network intrusion detection is fractal in nature under certain conditions. Specifically, the neural structure can take fractal form when simple neural structuring is autonomous. A neural structure is fractal by definition when its fractal dimension exceeds the synaptic matrix dimension. The authors introduce the use of fractal dimension of the neuromorphic structure as a factor in the autonomous restructuring feedback loop.

  18. Computer models of pipeline systems based on electro hydraulic analogy (United States)

    Kolesnikov, S. V.; Kudinov, V. A.; Trubitsyn, K. V.; Tkachev, V. K.; Stefanyuk, E. V.


    This paper describes the results of the development of mathematical and computer models of complex multi-loop branched pipeline networks for various purposes (water-oil-gas pipelines, heating networks, etc.) based on the electro hydraulic analogy of current spread in conductors and fluids in pipelines described by the same equations. Kirchhoff’s laws used in the calculation of electrical networks are applied in the calculations for pipeline systems. To maximize the approximation of the computer model to the real network concerning its resistance to the process of transferring the medium, the method of automatic identification of the model is applied.
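The electro-hydraulic analogy reduces a linearised pipeline network to the same nodal equations used for resistor networks: Kirchhoff's current law at each junction with flow q = (p_i − p_j)/R in place of Ohm's law. The sketch below solves a small hypothetical network; the node names, fixed pressures, and resistances are invented for illustration.

```python
def solve(A, b):
    # tiny Gaussian elimination with partial pivoting (Gauss-Jordan)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]


# hypothetical network: supply node S held at pressure 100, outfall G at 0;
# pipes as (node_i, node_j, resistance), linearised flow q = (p_i - p_j) / R
pipes = [("S", "A", 1.0), ("A", "B", 2.0), ("A", "G", 4.0), ("B", "G", 4.0)]
fixed = {"S": 100.0, "G": 0.0}
unknown = ["A", "B"]

# assemble Kirchhoff current-law equations at each unknown junction
idx = {n: i for i, n in enumerate(unknown)}
A = [[0.0] * len(unknown) for _ in unknown]
b = [0.0] * len(unknown)
for i, j, R in pipes:
    g = 1.0 / R  # "conductance" of the pipe
    for u, v in ((i, j), (j, i)):
        if u in idx:
            A[idx[u]][idx[u]] += g
            if v in idx:
                A[idx[u]][idx[v]] -= g
            else:
                b[idx[u]] += g * fixed[v]

pA, pB = solve(A, b)
```

The same assembly works unchanged for electrical networks, which is the point of the analogy: only the interpretation of pressure, flow, and resistance changes.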

  19. Computational Physics Simulation of Classical and Quantum Systems

    CERN Document Server

    Scherer, Philipp O. J


    This book encapsulates the coverage for a two-semester course in computational physics. The first part introduces the basic numerical methods while omitting mathematical proofs but demonstrating the algorithms by way of numerous computer experiments. The second part specializes in simulation of classical and quantum systems with instructive examples spanning many fields in physics, from a classical rotor to a quantum bit. All program examples are realized as Java applets ready to run in your browser and do not require any programming skills.

  20. Graphics processing units in bioinformatics, computational biology and systems biology. (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela


    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining an increasing attention by the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at © The Author 2016. Published by Oxford University Press.

  1. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)



    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The use of the term "active" refers to the ability of the network interfaces to perform application-specific as well as system level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality of service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  2. An intelligent multi-media human-computer dialogue system (United States)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.


    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.


    CERN Multimedia

    I. Fisk


    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...


    Directory of Open Access Journals (Sweden)

    Taras P. Kobylnyk


    Full Text Available The article describes the general characteristics of the most popular computer mathematics systems, both commercial (Maple, Mathematica, Matlab) and open source (Scilab, Maxima, GRAN, Sage), as well as the conditions for using these systems as a means of strengthening the fundamentals of the educational process for bachelors of informatics. The role of CMS in bachelor of informatics training is considered, and approaches to the pedagogical use of CMS in teaching informatics, physics and mathematics disciplines are identified. Some tasks are presented in which the «responses» obtained from a CMS must be used with care. Promising directions for the development of computer mathematics systems in a high-tech environment are identified.

  5. Computational transport phenomena of fluid-particle systems

    CERN Document Server

    Arastoopour, Hamid; Abbasi, Emad


    This book concerns the most up-to-date advances in computational transport phenomena (CTP), an emerging tool for the design of gas-solid processes such as fluidized bed systems. The authors examine recent work in kinetic theory and CTP and illustrate gas-solid processes’ many applications in the energy, chemical, pharmaceutical, and food industries. They also discuss the kinetic theory approach in developing constitutive equations for gas-solid flow systems and how it has advanced over the last decade as well as the possibility of obtaining innovative designs for multiphase reactors, such as those needed to capture CO2 from flue gases. Suitable as a concise reference and a textbook supplement for graduate courses, Computational Transport Phenomena of Gas-Solid Systems is ideal for practitioners in industries involved with the design and operation of processes based on fluid/particle mixtures, such as the energy, chemicals, pharmaceuticals, and food processing. Explains how to couple the population balance e...

  6. Computer aided systems human engineering: A hypermedia tool (United States)

    Boff, Kenneth R.; Monk, Donald L.; Cody, William J.


    The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.

  7. Advanced intelligent computational technologies and decision support systems

    CERN Document Server

    Kountchev, Roumen


    This book offers a state-of-the-art collection covering themes related to Advanced Intelligent Computational Technologies and Decision Support Systems, which can be applied to fields like healthcare, assisting humans in solving problems. The book brings forward a wealth of ideas, algorithms and case studies in themes like: intelligent predictive diagnosis; intelligent analysis of medical images; a new format for coding single and sequences of medical images; medical decision support systems; diagnosis of Down’s syndrome; computational perspectives for electronic fetal monitoring; efficient compression of CT images; adaptive interpolation and halftoning for medical images; applications of artificial neural networks to real-life problem solving; the present state and perspectives of electronic healthcare record systems; adaptive approaches for noise reduction in sequences of CT images, etc.

  8. Human-computer systems interaction backgrounds and applications 3

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy


    This book contains an interesting and state-of-the-art collection of papers on recent progress in Human-Computer System Interaction (H-CSI). It contributes a thorough description of the current status of the H-CSI field and also provides a solid base for further development and research in the discussed area. The contents of the book are divided into the following parts: I. General human-system interaction problems; II. Health monitoring and disabled people helping systems; and III. Various information processing systems. This book is intended for a wide audience of readers who are not necessarily experts in computer science, machine learning or knowledge engineering, but are interested in Human-Computer Systems Interaction. The level of the particular papers, and their specific distribution into the parts above, make this volume fascinating reading. This gives the reader a much deeper insight than he/she might glean from research papers or talks at conferences. It touches on all deep issues that ...

  9. Integrated Geo Hazard Management System in Cloud Computing Technology (United States)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.


    Geo-hazards can degrade environmental health and cause huge economic losses, especially in mountainous areas. In order to mitigate geo-hazards effectively, cloud computing technology is introduced for managing a geo-hazard database. Cloud computing technology and its services can provide stakeholders with geo-hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System consists of the network management and operation needed to monitor geo-hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system provides an easily managed, flexible measuring system whose data management operates autonomously and which can be controlled by commands to collect data remotely using a “cloud” computing system. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo-hazard database management using a “cloud system”. The system will later be used as part of development activities, helping to minimise the frequency of geo-hazards and the risk in the research area.

  10. Computer-Aided Diagnostics of Human Arterial System

    Directory of Open Access Journals (Sweden)

    Klara Capova


    Full Text Available The paper deals with the modelling and simulation of physiological fluid systems, laying emphasis on the human vascular system. The presented simulation method has been developed as a helpful tool for the computer-aided non-invasive diagnostics of living bodies. Using the electromechanical analogy between physiological and electrical quantities, the introduced method also makes possible the description and 3D representation of the non-linear characteristics of human haemodynamics. The simulation procedure and the obtained results, verified by experiment, make it possible to visualize all physiological and pathophysiological states of the human vascular system.
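The electromechanical analogy the abstract relies on can be illustrated with the classic two-element Windkessel model, in which arterial compliance plays the role of a capacitance and peripheral resistance that of a resistor. The sketch below is a generic textbook model with invented parameter values, not the authors' simulation method:

```python
import math

# Classic two-element Windkessel (textbook analogy, illustrative
# parameters): arterial compliance C acts as a capacitance, peripheral
# resistance R as a resistor, so C*dP/dt = Q_in(t) - P/R.

def simulate_windkessel(R=1.0, C=1.5, dt=0.001, beats=5, period=0.8):
    """Forward-Euler integration of the pressure ODE over several beats."""
    P = 80.0                      # initial pressure (illustrative units)
    history = []
    t, t_end = 0.0, beats * period
    while t < t_end:
        phase = (t % period) / period
        # crude half-sine inflow during systole, zero flow in diastole
        Q_in = 400.0 * math.sin(math.pi * phase / 0.4) if phase < 0.4 else 0.0
        P += dt * (Q_in - P / R) / C
        history.append(P)
        t += dt
    return history

pressures = simulate_windkessel()
print(round(min(pressures), 1), round(max(pressures), 1))
```

The resulting trace shows the familiar systolic rise and exponential diastolic decay (time constant R·C), which is the kind of non-linear pressure behaviour such analog models are used to visualize.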

  11. Computer tools for systems engineering at LaRC (United States)

    Walters, J. Milam


    The Systems Engineering Office (SEO) has been established to provide life cycle systems engineering support to Langley Research Center projects. Over the last two years, the computing market has been reviewed for tools which could enhance the effectiveness and efficiency of activities directed towards this mission. A group of interrelated applications have been procured, or are under development, including a requirements management tool, a system design and simulation tool, and a project and engineering database. This paper will review the current configuration of these tools and provide information on future milestones and directions.

  12. Predictive Control of Networked Multiagent Systems via Cloud Computing. (United States)

    Liu, Guo-Ping


    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.
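For a sense of what consensus over a network means, the sketch below runs a standard discrete-time average-consensus iteration on a four-agent ring; it is a generic illustration only and omits the network delays and predictive compensation that are the paper's actual contribution:

```python
# Standard discrete-time average-consensus iteration (generic sketch;
# no network delays or predictive compensation are modeled here).

def consensus_step(x, neighbors, eps=0.2):
    """One synchronous update: each agent moves toward its neighbors."""
    return [
        xi + eps * sum(x[j] - xi for j in neighbors[i])
        for i, xi in enumerate(x)
    ]

# four agents on a ring graph
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 5.0, 3.0, 7.0]
target = sum(x) / len(x)   # the average is invariant under this update
for _ in range(100):
    x = consensus_step(x, neighbors)
print([round(v, 6) for v in x])  # all agents near the average, 4.0
```

The step size must satisfy eps < 1/(max degree) for stability; compensating the delays that real networks introduce into each `x[j]` is exactly what predictive schemes like the paper's add on top of this basic protocol.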

  13. Requirements for the evaluation of computational speech segregation systems

    DEFF Research Database (Denmark)

    May, Tobias; Dau, Torsten


    Recent studies on computational speech segregation reported improved speech intelligibility in noise when estimating and applying an ideal binary mask with supervised learning algorithms. However, an important requirement for such systems in technical applications is their robustness to acoustic conditions not considered during training. This study demonstrates that the spectro-temporal noise variations that occur during training and testing determine the achievable segregation performance. In particular, such variations strongly affect the identification of acoustical features in the system associated with perceptual attributes in speech segregation. The results could help establish a framework for a systematic evaluation of future segregation systems.

  14. Interactive computations: toward risk management in interactive intelligent systems. (United States)

    Skowron, Andrzej; Jankowski, Andrzej

    Understanding the nature of interactions is regarded as one of the biggest challenges in projects related to complex adaptive systems. We discuss foundations for interactive computations in interactive intelligent systems (IIS), developed in the Wistech program and used for modeling complex systems. We emphasize the key role of risk management in problem solving by IIS. The considerations are based on experience gained in real-life projects concerning, e.g., medical diagnosis and therapy support, control of an unmanned helicopter, fraud detection, algorithmic trading, and fire commander decision support.

  15. Computational Strategies for a System-Level Understanding of Metabolism (United States)

    Cazzaniga, Paolo; Damiani, Chiara; Besozzi, Daniela; Colombo, Riccardo; Nobile, Marco S.; Gaglio, Daniela; Pescini, Dario; Molinari, Sara; Mauri, Giancarlo; Alberghina, Lilia; Vanoni, Marco


    Cell metabolism is the biochemical machinery that provides energy and building blocks to sustain life. Understanding its fine regulation is of pivotal relevance in several fields, from metabolic engineering applications to the treatment of metabolic disorders and cancer. Sophisticated computational approaches are needed to unravel the complexity of metabolism. To this aim, a plethora of methods have been developed, yet it is generally hard to identify which computational strategy is most suited for the investigation of a specific aspect of metabolism. This review provides an up-to-date description of the computational methods available for the analysis of metabolic pathways, discussing their main advantages and drawbacks.  In particular, attention is devoted to the identification of the appropriate scale and level of accuracy in the reconstruction of metabolic networks, and to the inference of model structure and parameters, especially when dealing with a shortage of experimental measurements. The choice of the proper computational methods to derive in silico data is then addressed, including topological analyses, constraint-based modeling and simulation of the system dynamics. A description of some computational approaches to gain new biological knowledge or to formulate hypotheses is finally provided. PMID:25427076
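As a toy example of the "simulation of the system dynamics" mentioned in the review, the sketch below integrates mass-action kinetics for a hypothetical two-step pathway S → I → P with invented rate constants:

```python
# Forward-Euler mass-action simulation of a hypothetical two-step
# pathway S -> I -> P (invented rate constants, illustration only).

def simulate_pathway(k1=0.5, k2=0.3, dt=0.01, steps=5000):
    S, I, P = 10.0, 0.0, 0.0      # initial concentrations
    for _ in range(steps):
        v1 = k1 * S               # flux of S -> I
        v2 = k2 * I               # flux of I -> P
        S += dt * (-v1)
        I += dt * (v1 - v2)
        P += dt * v2
    return S, I, P

S, I, P = simulate_pathway()
print(round(S, 4), round(I, 4), round(P, 4))
# total mass S + I + P is conserved by construction
```

Kinetic models of this kind sit at the fine-grained end of the scale the review discusses; constraint-based methods trade this temporal detail for tractability on genome-scale networks where rate constants are unknown.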

  16. Complex systems relationships between control, communications and computing

    CERN Document Server


    This book gives a wide-ranging description of the many facets of complex dynamic networks and systems within an infrastructure provided by integrated control and supervision: envisioning, design, experimental exploration, and implementation. The theoretical contributions and the case studies presented can reach control goals beyond those of stabilization and output regulation or even of adaptive control. Reporting on work of the Control of Complex Systems (COSY) research program, Complex Systems follows from and expands upon an earlier collection: Control of Complex Systems by introducing novel theoretical techniques for hard-to-control networks and systems. The major common feature of all the superficially diverse contributions encompassed by this book is that of spotting and exploiting possible areas of mutual reinforcement between control, computing and communications. These help readers to achieve not only robust stable plant system operation but also properties such as collective adaptivity, integrity an...

  17. Computational Fluid Dynamics Analysis of an Evaporative Cooling System

    Directory of Open Access Journals (Sweden)

    Kapilan N.


    Full Text Available The use of chlorofluorocarbon-based refrigerants in air-conditioning systems increases global warming and contributes to climate change. Climate change is expected to present a number of challenges for the built environment, and an evaporative cooling system is one of the simplest and most environmentally friendly cooling systems. The evaporative cooling system is most widely used in summer and in rural and urban areas of India for human comfort. In an evaporative cooling system, the addition of water into air reduces the temperature of the air, as the energy needed to evaporate the water is taken from the air. Computational fluid dynamics (CFD) is a numerical analysis technique and was used to analyse the evaporative cooling system. The CFD results match the experimental results.
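Independent of the CFD analysis, the basic behaviour of direct evaporative cooling is often summarized by a textbook effectiveness relation: the outlet air temperature approaches the wet-bulb temperature in proportion to the pad effectiveness. A minimal sketch, with illustrative values rather than the paper's data:

```python
# Textbook direct evaporative cooling relation (not the paper's CFD
# model): T_out = T_db - eff * (T_db - T_wb), where the pad
# effectiveness eff lies between 0 and 1.

def evaporative_outlet_temp(t_dry_bulb, t_wet_bulb, effectiveness=0.85):
    if not 0.0 <= effectiveness <= 1.0:
        raise ValueError("effectiveness must be between 0 and 1")
    return t_dry_bulb - effectiveness * (t_dry_bulb - t_wet_bulb)

# e.g. a hot, dry afternoon: 40 C dry-bulb, 22 C wet-bulb
print(round(evaporative_outlet_temp(40.0, 22.0), 1))  # -> 24.7
```

The relation also makes clear why the technique works best in dry climates: when the wet-bulb temperature is close to the dry-bulb temperature (humid air), little cooling is available.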

  18. An enhanced healthcare system in mobile cloud computing environment

    Directory of Open Access Journals (Sweden)

    Jemal Hanen


    Full Text Available Mobile cloud computing (MCC) is a new technology for mobile web services, and we expect MCC to be at the heart of healthcare transformation. MCC offers new kinds of services and facilities for patients and caregivers. In this regard, we propose a new mobile medical web service system. To this end, we implement a medical cloud multi-agent system (MCMAS) solution for the polyclinic ESSALEMA, Sfax, Tunisia, using Google’s Android operating system. The developed system has been assessed using the CloudSim simulator. This paper presents initial results of the system in practice. The proposed solution shows that MCMAS has a strong capability to cope with the problems of a traditional application. The performance of MCMAS was compared with the traditional system in the polyclinic ESSALEMA, which showed that this prototype performs better than the usual application.

  19. Software Safety Risk in Legacy Safety-Critical Computer Systems (United States)

    Hill, Janice L.; Baggs, Rhoda


    Safety standards contain technical and process-oriented safety requirements. Technical requirements are those such as "must work" and "must not work" functions in the system. Process-oriented requirements are software engineering and safety management process requirements. Some standards address the system perspective and some cover just the software in the system; NASA-STD-8719.13B, the Software Safety Standard, is the current standard of interest. NASA programs/projects will have their own set of safety requirements derived from the standard. Safety cases: a) a documented demonstration that a system complies with the specified safety requirements; b) evidence is gathered on the integrity of the system and put forward as an argued case [Gardener (ed.)]; c) problems occur when trying to meet safety standards, and thus make retrospective safety cases, in legacy safety-critical computer systems.

  20. Toward Computational Spectroscopy Studies for Large Molecular Systems (United States)

    Biczysko, Malgorzata; Bloino, Julien; Barone, Vincenzo


    Integrated computational approaches built on quantum mechanical (QM) methods combined with time-independent schemes to account for nuclear motion effects are applied to the spectroscopic investigation of molecular systems, from large biomolecules to hybrid supra-molecular systems. Within the time-independent approaches, vibrational spectra are computed including anharmonicities through perturbative corrections, while UV-vis line-shapes are simulated accounting for the vibrational structure; in both cases, environmental effects are taken into account by explicit or continuum models. Extension to larger systems relies on reduced-dimensionality approaches and effective schemes to select transitions of interest, available for both vibrational and vibronic spectra. Such procedures are exploited to simulate IR and UV-vis spectra, leading in all cases to good agreement with experimental observations and allowing one to dissect the different effects underlying spectral phenomena, finally paving a feasible route toward state-of-the-art computational spectroscopy studies, even for relatively large molecular systems [1,2]. 1. V. Barone, A. Baiardi, M. Biczysko, J. Bloino, C. Cappelli, F. Lipparini, Phys. Chem. Chem. Phys. 14, 12404, 2012. 2. V. Barone, M. Biczysko, J. Bloino, M. Borkowska-Panek, I. Carnimeo, P. Panek, Int. J. Quantum Chem. 112, 2185, 2012.

  1. Computer analysis system of the physician-patient consultation process. (United States)

    Katsuyama, Kimiko; Koyama, Yuichi; Hirano, Yasushi; Mase, Kenji; Kato, Ken; Mizuno, Satoshi; Yamauchi, Kazunobu


    Measurements of the quality of physician-patient communication are important in assessing patient outcomes, but the quality of communication is difficult to quantify. The aim of this paper is to develop a computer analysis system for the physician-patient consultation process (CASC), which will use a quantitative method to quantify and analyze communication exchanges between physicians and patients during the consultation process. CASC is based on the concept of narrative-based medicine using a computer-mediated communication (CMC) technique from a cognitive dialog processing system. Effective and ineffective consultation samples from the works of Saito and Kleinman were tested with CASC in order to establish the validity of CASC for use in clinical practice. After validity was confirmed, three researchers compared their assessments of consultation processes in a physician's office with those of CASC. Consultations of 56 migraine patients were recorded with permission, and for this study consultations of 29 patients that included more than 50 words were used. Transcribed data from the 29 consultations input into CASC resulted in two diagrams of concept structure and concept space to assess the quality of consultation. The concordance rate between the assessments by CASC and the researchers was 75 percent. In this study, a computer-based communication analysis system was established that efficiently quantifies the quality of the physician-patient consultation process. The system is promising as an effective tool for evaluating the quality of physician-patient communication in clinical and educational settings.

  2. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    Energy Technology Data Exchange (ETDEWEB)

    Wang, P; Song, Y T; Chao, Y; Zhang, H


    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
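A core ingredient of any such MPI port is deciding which slice of the model grid each rank owns. The sketch below shows a generic even-as-possible block partition of grid rows; ROMS' actual 2-D tiling and halo exchange are more involved:

```python
# Generic block partition for a distributed grid code (illustrative;
# ROMS' actual tiling and halo exchange are more involved). Rows are
# split as evenly as possible, with the first `extra` ranks taking
# one additional row each.

def tile_bounds(n_rows, n_ranks, rank):
    """Return the half-open row range [lo, hi) owned by `rank`."""
    base, extra = divmod(n_rows, n_ranks)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# a 10-row grid over 3 ranks
bounds = [tile_bounds(10, 3, r) for r in range(3)]
print(bounds)  # -> [(0, 4), (4, 7), (7, 10)]
```

In an MPI code each rank would compute only its own `[lo, hi)` block and exchange boundary ("ghost") rows with its neighbours each time step; the balance of this partition largely determines the scalability from tens to hundreds of processors.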

  3. Development of a personal-computer-based intelligent tutoring system (United States)

    Mueller, Stephen J.


    A large number of Intelligent Tutoring Systems (ITSs) have been built since they were first proposed in the early 1970s. Research conducted on the use of the best of these systems has demonstrated their effectiveness in tutoring in selected domains. A prototype ITS for tutoring students in the use of the CLIPS language, CLIPSIT (CLIPS Intelligent Tutor), was developed. For an ITS to be widely accepted, not only must it be effective, flexible, and very responsive, it must also be capable of functioning on readily available computers. While most ITSs have been developed on powerful workstations, CLIPSIT is designed for use on the IBM PC/XT/AT personal computer family (and their clones). There are many issues to consider when developing an ITS on a personal computer, such as the teaching strategy, user interface, knowledge representation, and program design methodology. Based on experiences in developing CLIPSIT, results on how to address some of these issues are reported and approaches are suggested for maintaining a powerful learning environment while delivering robust performance within the speed and memory constraints of the personal computer.

  4. Workflow Management Systems for Molecular Dynamics on Leadership Computers (United States)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from material science to biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production-level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.

  5. Computer controlled MHD power consolidation and pulse generation system

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Marcotte, K.; Donnelly, M.


    The major goal of this research project is to establish the feasibility of a power conversion technology which will permit the direct synthesis of computer-programmable pulse power. Feasibility has been established in this project by demonstration of direct synthesis of commercial-frequency power by means of computer control. The power input to the conversion system is assumed to be a Faraday-connected MHD generator, which may be viewed as a multi-terminal dc source and is simulated for the purpose of this demonstration by a set of dc power supplies. This consolidation/inversion (CI) process will be referred to subsequently as Pulse Amplitude Synthesis and Control (PASC). A secondary goal is to deliver a controller subsystem consisting of a computer, software, and a computer interface board which can serve as one of the building blocks for a possible phase II prototype system. This report summarizes the accomplishments and covers the high points of the two-year project. 6 refs., 41 figs.

  6. Design layout for gas monitoring system II (GMS-2) computer system

    Energy Technology Data Exchange (ETDEWEB)

    Vo, V.; Philipp, B.L.; Manke, M.P.


    This document provides a general overview of the computer systems software that performs the data acquisition and control for the 241-SY-101 Gas Monitoring System II (GMS-2). It outlines the system layout and contains descriptions of components and the functions they perform. The GMS-2 system was designed and implemented by Los Alamos National Laboratory and supplied to Westinghouse Hanford Company.


    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  8. CPU timing routines for a CONVEX C220 computer system (United States)

    Bynum, Mary Ann


    The timing routines available on the CONVEX C220 computer system in the Structural Mechanics Division (SMD) at NASA Langley Research Center are examined. The function of the timing routines, the use of the timing routines in sequential, parallel, and vector code, and the interpretation of the results from the timing routines with respect to the CONVEX model of computing are described. The timing routines available on the SMD CONVEX fall into two groups. The first group includes standard timing routines generally available with UNIX 4.3 BSD operating systems, while the second group includes routines unique to the SMD CONVEX. The standard timing routines described in this report are /bin/csh time,/bin/time, etime, and ctime. The routines unique to the SMD CONVEX are getinfo, second, cputime, toc, and a parallel profiling package made up of palprof, palinit, and palsum.

  9. Issues and challenges of intelligent systems and computational intelligence

    CERN Document Server

    Pozna, Claudiu; Kacprzyk, Janusz


    This carefully edited book contains contributions of prominent and active researchers and scholars in the broadly perceived area of intelligent systems. The book is unique both with respect to the width of coverage of tools and techniques, and to the variety of problems that could be solved by the tools and techniques presented. The editors have been able to gather a very good collection of relevant and original papers by prominent representatives of many areas, relevant both to the theory and practice of intelligent systems, artificial intelligence, computational intelligence, soft computing, and the like. The contributions have been divided into 7 parts, presenting first the more fundamental and theoretical contributions, and then applications in relevant areas.

  10. An Evaluation of the TRIPS Computer System (Extended Technical Report) (United States)


    Marino, Mario; Ranganathan, Nitya; Robatmili, Behnam; Smith, Aaron; Burrill, James; Keckler, Stephen W.; Burger, Doug; McKinley, Kathryn S.

  11. Learning Runtime Parameters in Computer Systems with Delayed Experience Injection


    Schaarschmidt, Michael; Gessert, Felix; Dalibard, Valentin; Yoneki, Eiko


    Learning effective configurations in computer systems without hand-crafting models for every parameter is a long-standing problem. This paper investigates the use of deep reinforcement learning for runtime parameters of cloud databases under latency constraints. Cloud services serve up to thousands of concurrent requests per second and can adjust critical parameters by leveraging performance metrics. In this work, we use continuous deep reinforcement learning to learn optimal cache expiration...
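As a much simpler stand-in for the continuous deep reinforcement learning the paper investigates, the sketch below tunes a single runtime parameter (a hypothetical cache TTL) with an epsilon-greedy bandit against a toy reward function:

```python
import random

# Epsilon-greedy bandit tuning one runtime parameter online -- a far
# simpler stand-in for the paper's continuous deep RL. The candidate
# cache TTLs and the reward function are hypothetical.

def tune_ttl(candidates, reward_fn, rounds=2000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = {c: 0 for c in candidates}
    values = {c: 0.0 for c in candidates}   # running mean reward per TTL
    for _ in range(rounds):
        if rng.random() < eps:
            ttl = rng.choice(candidates)            # explore
        else:
            ttl = max(candidates, key=values.get)   # exploit
        r = reward_fn(ttl, rng)
        counts[ttl] += 1
        values[ttl] += (r - values[ttl]) / counts[ttl]
    return max(values, key=values.get)

# toy environment: noisy reward that peaks at TTL = 60 seconds
best = tune_ttl([15, 30, 60, 120],
                lambda ttl, rng: -abs(ttl - 60) / 60 + rng.gauss(0, 0.1))
print(best)  # typically 60
```

The bandit's discrete, stateless action space is what deep RL generalizes: a neural policy can handle continuous parameters and condition on observed performance metrics, as the paper describes.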

  12. Carola, a computer system for automatic documentation in anesthesia. (United States)

    Karliczek, G F; de Geus, A F; Wiersma, G; Oosterhaven, S; Jenkins, I


    A computer system has been designed for documentation and data acquisition during open heart surgery. This computer system (called 'Carola') processes all patient data during cardiac surgery. More than 50 analogue or digital signals are scanned. These are derived from a monitoring rack, a Siemens Servo 900B ventilator with its accessory devices, and a heart-lung machine. All these values are plotted, together with offline data such as medications, fluids, laboratory results and user comments, on an A3-format anesthetic record using an eight-pen flat-bed plotter. Simultaneously, all data is written onto a cassette tape. These tapes are then transferred to a database for storage and statistical processing. The sampling frequency is every 10 seconds, averages being calculated over one-minute periods. The chart is normally updated once a minute, or every 15 minutes for slowly changing signals, e.g. temperatures. Hardware and software of the computer have a modular design. The hardware consists of two Motorola 6809-based microprocessor systems. The software is entirely written in Pascal. The user interface is implemented on a menu-driven basis. A terminal with a keyboard is used for communication with the users, namely anesthetic nurses and anesthesiologists. The system was readily accepted by the users. The menu structure proved to be easy to learn and allowed fast entries, even when the users were not previously accustomed to the use of a keyboard. The clear and detailed presentation of the data on the plotted chart helped to detect trends early and facilitated therapeutic decisions. From December 1983 the first prototype was used on a routine basis, followed by a second unit in June 1984 and a third in December 1985. Up to now more than 12,500 anesthetic hours have been recorded. Since then almost 100% of all anesthetics performed in our cardiothoracic unit have been documented by the computers, including all short procedures without invasive monitoring and all ...

  14. A handheld computer-aided diagnosis system and simulated analysis (United States)

    Su, Mingjian; Zhang, Xuejun; Liu, Brent; Su, Kening; Louie, Ryan


    This paper describes a Computer Aided Diagnosis (CAD) system based on a cellphone client and a distributed cluster. One of the bottlenecks in building a CAD system for clinical practice is the storage and processing of large numbers of pathology samples across different devices, and conventional pattern-matching algorithms are very time consuming on large-scale image sets. Distributed computation on a cluster has demonstrated the ability to relieve this bottleneck. We developed a system that enables the user to compare a lesion image against a dataset with a feature table by sending datasets to a Generic Data Handler Module in Hadoop, where pattern recognition is undertaken for the detection of skin diseases. Single and combined retrieval algorithms in a data pipeline based on the MapReduce framework are used in our system in order to make an optimal choice between recognition accuracy and system cost. The profile of the lesion area is drawn manually by doctors on the screen, and this pattern is then uploaded to the server. In our evaluation experiment, a diagnosis hit rate of 75% was obtained in tests on 100 patients with skin illness. Our system also has the potential to help build a novel medical image dataset by collecting large amounts of gold-standard data during medical diagnosis. Once the project is online, participants are free to join, and an abundant sample dataset for learning should soon accumulate. These results demonstrate that our technology is very promising and is expected to be used in clinical practice.
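    The MapReduce-style retrieval step can be illustrated in miniature; the feature vectors, disease labels and distance metric below are hypothetical stand-ins for the Hadoop pipeline described above:

```python
# Miniature sketch of MapReduce-style retrieval: each "map" scores a query
# feature vector against one stored sample; the "reduce" keeps the best
# match. Feature vectors and disease labels are hypothetical.
from functools import reduce

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_match(query, dataset):
    # map: score every (label, features) pair against the query
    scored = map(lambda item: (item[0], distance(query, item[1])), dataset)
    # reduce: keep the label with the smallest distance
    return reduce(lambda a, b: a if a[1] <= b[1] else b, scored)

dataset = [("eczema", [0.9, 0.1, 0.3]),
           ("psoriasis", [0.2, 0.8, 0.5]),
           ("melanoma", [0.1, 0.2, 0.9])]
label, score = best_match([0.85, 0.15, 0.25], dataset)
print(label)  # eczema
```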

  15. System and method for controlling power consumption in a computer system based on user satisfaction (United States)

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok


    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
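    The selection step can be sketched as a lookup over the stored relationship information; the users, applications, frequencies and satisfaction scores below are hypothetical:

```python
# Sketch of selecting a CPU frequency from stored (user, application) ->
# satisfaction relationships, in the spirit of the patent abstract above.
# Frequencies (MHz) and satisfaction scores are hypothetical.

# Learned relationship: satisfaction score per discrete frequency,
# keyed by (user, application).
relationship = {
    ("alice", "browser"): {800: 0.95, 1600: 0.97, 2400: 0.98},
    ("alice", "video"):   {800: 0.40, 1600: 0.85, 2400: 0.97},
}

def select_frequency(user, app, min_satisfaction=0.9):
    """Pick the lowest frequency whose predicted satisfaction is acceptable,
    saving power without annoying this particular user."""
    scores = relationship[(user, app)]
    acceptable = [f for f, s in sorted(scores.items()) if s >= min_satisfaction]
    return acceptable[0] if acceptable else max(scores)

print(select_frequency("alice", "browser"))  # 800: browsing tolerates a low frequency
print(select_frequency("alice", "video"))    # 2400: video playback needs the highest
```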

  16. Computer-assisted design/computer-assisted manufacturing systems: A revolution in restorative dentistry. (United States)

    Sajjad, Arbaz


    For the better part of the past 20 years, dentistry has seen the development of many new all-ceramic materials and restorative techniques, fueled by the desire to capture the ever-elusive esthetic perfection. This has resulted in the fusion of the latest in material science with the penultimate in computer-assisted design/computer-assisted manufacturing (CAD/CAM) technology. This case report describes the procedure for restoring the esthetic appearance of both the left and right maxillary peg-shaped lateral incisors with a metal-free sintered finely structured feldspar ceramic material using the latest laboratory CAD/CAM system. The use of CAD/CAM technology makes it possible to produce restorations faster, with precision fit and good esthetics, overcoming the errors associated with traditional ceramo-metal technology. The incorporation of this treatment modality means that the dentist's working procedures will have to be adapted to the methods of CAD/CAM technology.

  17. Computational power and generative capacity of genetic systems. (United States)

    Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E


    Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of the sign and the linear nature of the signifier. Besides these semiotic features that are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as a possibility to incorporate meta-statements about the system, and thus establishes the internal capacity for its evolution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Internal models and neural computation in the vestibular system. (United States)

    Green, Andrea M; Angelaki, Dora E


    The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem-cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have shed new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception.

  19. Computing and Network Systems Administration, Operations Research, and System Dynamics Modeling: A Proposed Research Framework

    Directory of Open Access Journals (Sweden)

    Michael W. Totaro


    Information and computing infrastructures (ICT) involve levels of complexity that are highly dynamic in nature. This is due in no small measure to the proliferation of technologies such as cloud computing and distributed systems architectures, data mining and multidimensional analysis, and large-scale enterprise systems, to name a few. Effective computing and network systems administration is integral to the stability and scalability of these complex software, hardware and communication systems. Systems administration involves the design, analysis, and continuous improvement of the performance or operation of information and computing systems. Additionally, social and administrative responsibilities have become nearly as integral for the systems administrator as the technical demands that have been imposed for decades. The areas of operations research (OR) and system dynamics (SD) modeling offer system administrators a rich array of analytical and optimization tools that have been developed in diverse disciplines, including the industrial, scientific, engineering, economic and financial fields, to name a few. This paper proposes a research framework by which OR and SD modeling techniques may prove useful to computing and network systems administration, including: linear programming, network analysis, integer programming, nonlinear optimization, Markov processes, queueing modeling, simulation, decision analysis, heuristic techniques, and system dynamics modeling.
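    As one concrete instance of the queueing-modeling tools named above, the classic M/M/1 formulas can be computed directly; the arrival and service rates below are illustrative:

```python
# Illustrative M/M/1 queueing model, one of the OR tools named above, as a
# systems administrator might apply it to a single server. lam = arrival
# rate, mu = service rate (requests/sec); the values are hypothetical.

def mm1_metrics(lam, mu):
    """Return utilization, mean number in system L, and mean time in system W."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu            # server utilization
    L = rho / (1 - rho)       # mean number in system
    W = 1 / (mu - lam)        # mean time in system (Little's law: L = lam * W)
    return rho, L, W

rho, L, W = mm1_metrics(lam=8.0, mu=10.0)
print(f"utilization={rho:.2f}, mean in system={L:.1f}, mean wait={W:.2f}s")
# utilization=0.80, mean in system=4.0, mean wait=0.50s
```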

  20. Sub-system Evaluation Report, Computer Security Corporation Sentinel. Version 3.13 (United States)


    The Computer Security Corporation's Sentinel Security System product was evaluated against the identification and authentication requirements of the TCSEC; this report documents the evaluation of this product. Keywords: Computer security; Computer programs; NCSC; TCSEC; Sub-systems; Sentinel security system; Computer security corporation; Test and evaluation.

  1. Hospital information systems: measuring end user computing satisfaction (EUCS). (United States)

    Aggelidis, Vassilios P; Chatzoglou, Prodromos D


    Over the past decade, hospitals in Greece have made significant investments in adopting and implementing new hospital information systems (HISs). Whether these investments will prove beneficial for these organizations depends on the support provided to ensure effective use of the implemented information systems, and also on the satisfaction of their users, which is one of the most important determinants of the success of these systems. Measuring end-user computing satisfaction has a long history within the IS discipline. A number of attempts have been made to evaluate the overall post hoc impact of HIS, focusing on the end-users and more specifically on their satisfaction and the parameters that determine it. The purpose of this paper is to build further upon the existing body of relevant knowledge by testing past models and suggesting new conceptual perspectives on how end-user computing satisfaction (EUCS) is formed among hospital information system users. All models are empirically tested using data from 283 hospital information system (HIS) users. Correlation analysis and exploratory and confirmatory factor analyses were performed to test the reliability and validity of the measurement models. The structural equation modeling technique was also used to evaluate the causal models. The empirical results of the study provide support for the EUCS model (incorporating new factors) and enhance the generalizability of the EUCS instrument and its robustness as a valid measure of computing satisfaction and a surrogate for system success in a variety of cultural and linguistic settings. Although the psychometric properties of EUCS appear to be robust across studies and user groups, this should not be considered the final chapter in the validation and refinement of these scales. Continuing efforts should be made to validate and extend the instrument. Copyright © 2012 Elsevier Inc. All rights reserved.
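    The kind of reliability testing reported above can be illustrated with Cronbach's alpha, a standard scale-reliability statistic; the questionnaire items and scores below are toy data, not the study's:

```python
# Toy illustration of a scale-reliability statistic (Cronbach's alpha) of
# the kind computed when validating instruments such as EUCS. The item
# scores below are hypothetical, not from the cited study.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(i) for i in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three items answered by five respondents on a 5-point scale.
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 3, 4, 1]]
print(round(cronbach_alpha(items), 2))  # 0.93
```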

  2. Brain-computer interface after nervous system injury. (United States)

    Burns, Alexis; Adeli, Hojjat; Buford, John A


    Brain-computer interface (BCI) has proven to be a useful tool for providing alternative communication and mobility to patients suffering from nervous system injury. BCI has been and will continue to be implemented into rehabilitation practices for more interactive and speedy neurological recovery. The most exciting BCI technology is evolving to provide therapeutic benefits by inducing cortical reorganization via neuronal plasticity. This article presents a state-of-the-art review of BCI technology used after nervous system injuries, specifically: amyotrophic lateral sclerosis, Parkinson's disease, spinal cord injury, stroke, and disorders of consciousness. Also presented is transcending, innovative research involving new treatment of neurological disorders. © The Author(s) 2014.

  3. Brain-computer interface systems: progress and prospects. (United States)

    Allison, Brendan Z; Wolpaw, Elizabeth Winter; Wolpaw, Jonathan R


    Brain-computer interface (BCI) systems support communication through direct measures of neural activity without muscle activity. BCIs may provide the best and sometimes the only communication option for users disabled by the most severe neuromuscular disorders and may eventually become useful to less severely disabled and/or healthy individuals across a wide range of applications. This review discusses the structure and functions of BCI systems, clarifies terminology and addresses practical applications. Progress and opportunities in the field are also identified and explicated.

  4. A local area computer network expert system framework (United States)

    Dominy, Robert


    Over the past years an expert system called LANES was developed to detect and isolate faults in the Goddard-wide Hybrid Local Area Computer Network (LACN). As a result, the need for a more generic LACN fault-isolation expert system has become apparent. An object-oriented approach was explored to create the set of generic classes, objects, rules, and methods that would be necessary to meet this need. The object classes provide a convenient mechanism for separating high-level information from low-level network-specific information. This approach yields a framework which can be applied to different network configurations and be easily expanded to meet new needs.

  5. A computer simulator for development of engineering system design methodologies (United States)

    Padula, S. L.; Sobieszczanski-Sobieski, J.


    A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy. The simulator concept, implementation, and status are described and illustrated with examples.

  6. Implementation of Computer Assisted Test Selection System in Local Governments

    Directory of Open Access Journals (Sweden)

    Abdul Azis Basri


    As an evaluative method for civil-servant selection across all government areas, the Computer Assisted Test (CAT) selection system began to be applied in 2013. In its first nationwide implementation in 2014, the selection system ran into trouble in several areas, for example with the registration procedure and the passing grade. The main objective of this essay is to describe the implementation of the new civil-servant selection system in local governments and to assess the effectiveness of this selection system. The essay combines a literature study with a field survey; data were collected through interviews, observations, and documentation from various sources, and analyzed through reduction, data display, and verification to draw conclusions. The results show that, although a few parts of the system were problematic, such as the registration phase, almost all phases of the CAT selection system in local government areas, namely preparation, implementation, and result processing, worked well. The system also fulfilled two of the three effectiveness criteria for a selection system, namely accuracy and trustworthiness. This selection system can therefore be considered an effective way to select new civil servants. As a suggestion, local governments should prepare thoroughly for all phases of the test, establish good feedback as an evaluation mechanism, and work together with the central government to identify, fix, and improve supporting infrastructure and the competency of local staff.

  7. Health Information System in a Cloud Computing Context. (United States)

    Sadoughi, Farahnaz; Erfannia, Leila


    Healthcare as a worldwide industry is experiencing a period of growth based on health information technology. The capabilities of cloud systems make them an option for pursuing eHealth goals. The main objective of the present study was to evaluate the advantages and limitations of implementing health information systems in a cloud-computing context; it was conducted as a systematic review in 2016. Science Direct, Scopus, Web of Science, IEEE, PubMed and Google Scholar were searched according to the study criteria. Among the 308 articles initially found, 21 articles entered the final analysis. All the studies considered cloud computing a positive tool for advancing health technology, but none dwelt much on its limitations and threats. Electronic health record systems have mostly been studied in the fields of implementation, design, and presentation of models and prototypes. According to this research, the main advantages of cloud-based health information systems fall into the following groups: economic benefits and advantages of information management. The main limitations of implementing cloud-based health information systems fall into the four groups of security, legal, technical, and human restrictions. Compared to earlier studies, the present research had the advantage of dealing with the issue of health information systems on a cloud platform. The high frequency of studies on the implementation of cloud-based health information systems reveals the health industry's interest in applying this technology. Security was discussed in most studies due to the sensitivity of health information. In this investigation, some mechanisms and solutions concerning these systems were discussed, which provide a suitable area for future scientific research on this issue. The limitations and solutions discussed in this systematic study would help healthcare managers and decision

  8. Neural Computations in a Dynamical System with Multiple Time Scales (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si


    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain gains from such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions. PMID:27679569
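    One of the dynamical features above, spike-frequency adaptation, can be sketched as a firing rate relaxing under a slower negative feedback; the rate model and parameters below are a minimal illustration, not the paper's CANN:

```python
# Minimal sketch of spike-frequency adaptation (SFA): a constant input
# drives a firing rate r on a fast time scale, while a slower adaptation
# variable a subtracts from the drive, so r decays from its early peak.
# The model and all parameters are illustrative, not from the paper.

def simulate_sfa(inp=1.0, tau_r=0.02, tau_a=0.5, g=0.8, dt=0.001, steps=2000):
    r, a = 0.0, 0.0
    rates = []
    for _ in range(steps):
        r += dt / tau_r * (-r + max(inp - g * a, 0.0))  # fast rate dynamics
        a += dt / tau_a * (-a + r)                       # slow adaptation
        rates.append(r)
    return rates

rates = simulate_sfa()
peak, final = max(rates), rates[-1]
print(peak > final)  # True: the rate adapts below its early peak
```

At steady state r settles at inp / (1 + g), so the fast rise followed by slow decay reproduces the adaptation described above.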

  9. Proposing an Abstracted Interface and Protocol for Computer Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Resnick, David Richard [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Ignatowski, Mike [AMD Research]


    While it made sense for historical reasons to develop different interfaces and protocols for memory channels, CPU to CPU interactions, and I/O devices, ongoing developments in the computer industry are leading to more converged requirements and physical implementations for these interconnects. As it becomes increasingly common for advanced components to contain a variety of computational devices as well as memory, the distinction between processors, memory, accelerators, and I/O devices becomes increasingly blurred. As a result, the interface requirements among such components are converging. There is also a wide range of new disruptive technologies that will impact the computer market in the coming years, including 3D integration and emerging NVRAM memory. Optimal exploitation of these technologies cannot be done with the existing memory, storage, and I/O interface standards. The computer industry has historically made major advances when industry players have been able to add innovation behind a standard interface. The standard interface provides a large market for their products and enables relatively quick and widespread adoption. To enable a new wave of innovation in the form of advanced memory products and accelerators, we need a new standard interface explicitly designed to provide both the performance and flexibility to support new system integration solutions.

  11. 2002 Computing and Interdisciplinary Systems Office Review and Planning Meeting (United States)

    Lytle, John; Follen, Gregory; Lopez, Isaac; Veres, Joseph; Lavelle, Thomas; Sehra, Arun; Freeh, Josh; Hah, Chunill


    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with NASA Glenn's Propulsion program, NASA Ames, industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This year's review meeting describes the current status of the NPSS and the Object Oriented Development Kit with specific emphasis on the progress made over the past year on air breathing propulsion applications for aeronautics and space transportation applications. Major accomplishments include the first 3-D simulation of the primary flow path of a large turbofan engine in less than 15 hours, and the formal release of NPSS Version 1.5, which includes elements of rocket engine systems and a visual based syntax layer. NPSS and the Development Kit are managed by the Computing and Interdisciplinary Systems Office (CISO) at the NASA Glenn Research Center and financially supported in fiscal year 2002 by the Computing, Networking and Information Systems (CNIS) project managed at NASA Ames, the Glenn Aerospace Propulsion and Power Program and the Advanced Space Transportation Program.

  12. Investigation of a Markov Model for Computer System Security Threats

    Directory of Open Access Journals (Sweden)

    Alexey A. A. Magazev


    In this work, a model of computer system security threats formulated in terms of Markov processes is investigated. In the framework of this model, the functioning of the computer system is considered as a sequence of failures and recovery actions that appear as results of information security threats acting on the system. We provide a detailed description of the model: explicit analytical formulas for the probabilities of the computer system states at any arbitrary moment of time are derived, some limiting cases are discussed, and the long-run dynamics of the system is analysed. The dependence of the security state probability (i.e. the state for which threats are absent) on the probabilities of threats is investigated separately. In particular, it is shown that this dependence is qualitatively different for odd and even moments of time. For instance, in the case of one threat the security state probability demonstrates a non-monotonic dependence on the probability of the threat at even moments of time; this function admits at least one local minimum in its domain of definition. This feature is believed to be important because it makes it possible to locate the most dangerous ranges of threat probabilities, where the security state probability can fall below the permissible level. Finally, we introduce an important characteristic of the model, called the relaxation time, by means of which we construct the permitted domain of the security parameters. The prospects of applying these results to the problem of finding optimal values of the security parameters are also discussed.
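    The flavor of the Markov computation described above can be illustrated with a two-state chain (secure versus failed/recovering); the transition probabilities below are hypothetical, not taken from the paper:

```python
# Toy two-state Markov model in the spirit of the paper: state 0 = secure
# (no threat acting), state 1 = failed/recovering. The transition
# probabilities are hypothetical.

def state_probabilities(p_threat, p_recover, steps):
    """Probability of each state after `steps` transitions, starting secure."""
    probs = [1.0, 0.0]
    history = [probs[:]]
    for _ in range(steps):
        secure, failed = probs
        probs = [secure * (1 - p_threat) + failed * p_recover,
                 secure * p_threat + failed * (1 - p_recover)]
        history.append(probs[:])
    return history

history = state_probabilities(p_threat=0.2, p_recover=0.5, steps=20)
# Long-run secure probability approaches p_recover / (p_threat + p_recover).
print(round(history[-1][0], 3))
```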

  13. Cloud Computing Platform for an Online Model Library System

    Directory of Open Access Journals (Sweden)

    Mingang Chen


    Full Text Available The rapid developing of digital content industry calls for online model libraries. For the efficiency, user experience, and reliability merits of the model library, this paper designs a Web 3D model library system based on a cloud computing platform. Taking into account complex models, which cause difficulties in real-time 3D interaction, we adopt the model simplification and size adaptive adjustment methods to make the system with more efficient interaction. Meanwhile, a cloud-based architecture is developed to ensure the reliability and scalability of the system. The 3D model library system is intended to be accessible by online users with good interactive experiences. The feasibility of the solution has been tested by experiments.

  14. Dynamic Security Assessment of Computer Networks in SIEM-Systems

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Doynikova


    The paper suggests an approach to the security assessment of computer networks. The approach is based on attack graphs and is intended for Security Information and Events Management systems (SIEM-systems). The key feature of the approach is the application of a multilevel taxonomy of security metrics. The taxonomy allows definition of the system profile according to the input data used for metrics calculation and the techniques of security metrics calculation. This enables security assessment in near real time, identification of previous and future attacker steps, and identification of attackers' goals and characteristics. A security assessment system prototype implementing the suggested approach is described, and an analysis of its operation is conducted for several attack scenarios.
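    One common attack-graph metric, the minimum number of exploit steps from the attacker's entry point to a goal asset, can be sketched with a breadth-first search; the hosts and edges below are hypothetical:

```python
# Sketch of one attack-graph metric: the fewest exploit steps from the
# attacker's entry point to a goal asset, found by breadth-first search.
# The graph (hosts and reachable-exploit edges) is hypothetical.
from collections import deque

attack_graph = {
    "internet": ["web_server"],
    "web_server": ["app_server", "mail_server"],
    "mail_server": ["workstation"],
    "app_server": ["database"],
    "workstation": ["database"],
    "database": [],
}

def shortest_attack_path(graph, start, goal):
    """Return the shortest start-to-goal path, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_attack_path(attack_graph, "internet", "database"))
# ['internet', 'web_server', 'app_server', 'database']
```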

  15. Stochastic equations for complex systems theoretical and computational topics

    CERN Document Server

    Bessaih, Hakima


    Mathematical analyses and computational predictions of the behavior of complex systems are needed to effectively deal with weather and climate predictions, for example, and the optimal design of technical processes. Given the random nature of such systems and the recognized relevance of randomness, the equations used to describe such systems usually need to involve stochastics.  The basic goal of this book is to introduce the mathematics and application of stochastic equations used for the modeling of complex systems. A first focus is on the introduction to different topics in mathematical analysis. A second focus is on the application of mathematical tools to the analysis of stochastic equations. A third focus is on the development and application of stochastic methods to simulate turbulent flows as seen in reality.  This book is primarily oriented towards mathematics and engineering PhD students, young and experienced researchers, and professionals working in the area of stochastic differential equations ...

  16. Computer-aided communication satellite system analysis and optimization (United States)

    Stagl, T. W.; Morgan, N. H.; Morley, R. E.; Singh, J. P.


    The capabilities and limitations of the various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. A Satellite Telecommunication Analysis and Modeling Program (STAMP) for costing and sensitivity analysis in the application of communication satellites to educational development is presented. The modifications made to STAMP include: extension of the six-beam capability to eight; addition of generation of multiple beams from a single reflector system with an array of feeds; improved system costing that reflects the time value of money and the growth in the earth-terminal population with time, and accounts for various measures of system reliability; inclusion of a model for scintillation at microwave frequencies in the communication link loss model; and an updated technological environment.
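    The time-value-of-money costing mentioned above reduces to a present-value sum over the system's yearly costs; the discount rate and cost stream below are illustrative, not STAMP's actual model:

```python
# Illustrative present-value costing of the kind a satellite cost model
# reflecting the time value of money performs: end-of-year costs are
# discounted back to today. The rate and cost figures are hypothetical.

def present_value(yearly_costs, discount_rate):
    """Discount a stream of end-of-year costs to present value."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(yearly_costs, start=1))

# Earth-terminal population grows, so yearly costs rise ($M); 8% rate.
costs = [10.0, 12.0, 14.0, 16.0]
print(round(present_value(costs, 0.08), 2))
```

A later dollar counts for less, so the discounted total is always below the undiscounted sum of the stream.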

  17. Application of Nearly Linear Solvers to Electric Power System Computation (United States)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
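    The comparison described above, an iterative solver against LU factorization, can be illustrated in miniature with plain conjugate gradients on a small symmetric, diagonally dominant system; the chain method itself adds spanning-tree preconditioning, which this sketch omits:

```python
# Miniature stand-in for the solver comparison above: conjugate gradients
# on a small symmetric, diagonally dominant system. The chain method adds
# low-stretch spanning-tree preconditioning; this shows only the
# iterative core. The matrix and right-hand side are hypothetical.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]                # residual b - A x (x starts at zero)
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Laplacian-like diagonally dominant system with solution [1, 1, 1].
A = [[ 4.0, -1.0,  0.0],
     [-1.0,  4.0, -1.0],
     [ 0.0, -1.0,  4.0]]
b = [3.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
print([round(v, 6) for v in x])  # [1.0, 1.0, 1.0]
```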

  18. Computer system design description for the spare pump mini-dacs data acquisition and control system

    Energy Technology Data Exchange (ETDEWEB)

    Vargo, G.F. Jr.


    The attached document outlines the computer software design for the mini data acquisition and control system (DACS), that supports the testing of the spare pump for Tank 241-SY-101, at the maintenance and storage facility (MASF).

  19. The computational design of Geological Disposal Technology Integration System

    Energy Technology Data Exchange (ETDEWEB)

    Ishihara, Yoshinao; Iwamoto, Hiroshi; Kobayashi, Shigeki [Mitsubishi Heavy Industries Ltd., Tokyo (Japan)]; Neyama, Atsushi [Computer Software Development, Co. (Japan)]; Endo, Shuji; Shindo, Tomonori [Ryoyu System Technology Co. (Japan)]


    In order to develop the 'Geological Disposal Technology Integration System', intended to systematize the knowledge base for fundamental study, the computational design of the database and image-processing functions indispensable to the system was carried out, a prototype was built for trial purposes, and its functions were confirmed. (1) The Integration System database, which systematizes the information needed for examining the overall repository composition together with related information, was constructed, and the system was designed as a composition of image processing, analytical information management, repository component management, and system security functions. (2) The range of data and information treated by the system was examined, and the design of the database structure and of the image-processing function for the data preserved in the integrated database was studied. (3) A prototype covering the basic database functions, the system operation interface, and the image-processing function was manufactured to verify the feasibility of the 'Geological Disposal Technology Integration System' based on the results of the design examination, and its functions were confirmed. (author)


    Directory of Open Access Journals (Sweden)

    N. E. Filyukov


    Full Text Available The paper deals with the design of a web-based system for Computer-Aided Manufacturing (CAM). Remote applications and databases located in the "private cloud" are proposed as the basis of such a system. The suggested approach comprises: a service-oriented architecture, the use of web applications and web services as modules, multi-agent technologies for implementing information exchange between the components of the system, and the use of a PDM system for managing technology projects within the CAM. The proposed architecture involves converting the CAM into a corporate information system that provides coordinated functioning of subsystems based on a common information space, parallelizes collective work on technology projects, and provides effective control of production planning. A system has been developed within this architecture that allows technological subsystems to be connected to it rather simply and their interaction to be implemented. The system makes it possible to produce a CAM configuration for a particular company from the set of developed subsystems and databases, specifying appropriate access rights for the company's employees. The proposed approach simplifies the maintenance of software and information support for CAM subsystems due to their central location in the data center. The results can be used as a basis for CAM design and testing within the learning process, for development and modernization of the system algorithms, and can then be tested in the extended enterprise.

  1. Standard practice for classification of computed radiology systems

    CERN Document Server

    American Society for Testing and Materials. Philadelphia


    1.1 This practice describes the evaluation and classification of a computed radiography (CR) system, a particular phosphor imaging plate (IP), system scanner and software, in combination with specified metal screens for industrial radiography. It is intended to ensure that the evaluation of image quality, as far as this is influenced by the scanner/IP system, meets the needs of users. 1.2 The practice defines system tests to be used to classify the systems of different suppliers and make them comparable for users. 1.3 The CR system performance is described by signal and noise parameters. For film systems, the signal is represented by gradient and the noise by granularity. The signal-to-noise ratio is normalized by the basic spatial resolution of the system and is part of classification. The normalization is given by the scanning aperture of 100 µm diameter for the micro-photometer, which is defined in Test Method E1815 for film system classification. This practice describes how the parameters shall be meas...
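    The normalization by the basic spatial resolution can be sketched numerically as follows. The rescaling constant 88.6 µm is the edge of the square with the same area as the 100 µm diameter circular micro-photometer aperture; the function name and the numerical inputs are illustrative, not taken from the practice itself:

```python
import math

# 88.6 um is the edge length of the square whose area equals that of the
# 100 um diameter circular aperture: sqrt(pi)/2 * 100 um.
APERTURE_EDGE_UM = math.sqrt(math.pi) / 2 * 100.0   # ~88.62 um

def normalized_snr(measured_snr, basic_spatial_resolution_um):
    # Illustrative helper (our own naming): a coarser basic spatial
    # resolution reduces the normalized SNR.
    return measured_snr * APERTURE_EDGE_UM / basic_spatial_resolution_um

print(round(normalized_snr(150.0, 50.0), 1))  # -> 265.9
```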

  2. Modelling and simulation of information systems on computer: methodological advantages. (United States)

    Huet, B; Martin, J


    Modelling and simulation of information systems by means of miniatures on computer aim at two general objectives: (a) an aid to the design and realization of information systems; and (b) a tool to improve the dialogue between the designer and the users. An operational information system has two components bound by a dynamic relationship: an information system and a behavioural system. Thanks to the behavioural system, modelling and simulation allow the designer to integrate into the project a large proportion of the system's implicit specification. The advantages of modelling to the information system relate to: (a) The conceptual phase: initial objectives are compared with the results of simulation and sometimes modified. (b) The external specifications: simulation is particularly useful for personalising man-machine relationships in each application. (c) The internal specifications: if the miniatures are built on the concept of process, the global design and the software are tested; the simulation also refines the configuration and directs the choice of hardware. (d) The implementation: simulation reduces costs and time and allows testing. Progress in modelling techniques will undoubtedly lead to better information systems.

  3. Integrated Computer Systems. Enterprise Resource Planning (E.R.P.)

    Directory of Open Access Journals (Sweden)

    Dan Mircea TRANĂ


    Full Text Available At the beginning of the XXI century, in the knowledge-based society, the management of economic organizations can only be achieved through optimal IT systems. These can be seen as an extension of increasingly complex information systems, and they provide effective leadership only if they are integrated into the economic system of the organization. We have previously shown some of the features that recommend integrated IT systems to be controlled and used, as well as the main principles for building integrated computer systems and the strategies that can be applied in designing this type of IT system. The advantages of integrated management IT systems can best be supported by examples, and therefore we intend to present a special, but increasingly used, category of integrated IT systems: Enterprise Resource Planning (ERP) systems. They are “distributed IT systems based on client / server and developed for the processing of transactions and facilitating the integration of business processes with suppliers, customers and other business partners.”

  4. The Impact of Cloud Computing on Information Systems Agility

    Directory of Open Access Journals (Sweden)

    Mohamed Sawas


    Full Text Available As businesses are encountering frequent harsh economic conditions, concepts such as outsourcing, agile and lean management, change management and cost reduction are constantly gaining more attention. This is because these concepts are all aimed at saving on budgets and facing unexpected changes. Latest technologies like cloud computing promise to turn IT, that has always been viewed as a cost centre, into a source of saving money and driving flexibility and agility to the business. The purpose of this paper is to first compile a set of attributes that govern the agility benefits added to information systems by cloud computing and then develop a survey-based instrument to measure these agility benefits. Our research analysis employs non-probability sampling based on a combination of convenience and judgment. This approach was used to obtain a representative sample of participants from potential companies belonging to various industries such as oil & gas, banking, private, government and semi-governmental organizations. This research will enable decision makers to measure agility enhancements and hence compare the agility of Information Systems before and after deploying cloud computing.

  5. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim


    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  6. Computer Simulation of Embryonic Systems: What can a ... (United States)

    (1) Standard practice for assessing developmental toxicity is the observation of apical endpoints (intrauterine death, fetal growth retardation, structural malformations) in pregnant rats/rabbits following exposure during organogenesis. EPA’s computational toxicology research program (ToxCast) generated vast in vitro cellular and molecular effects data on >1858 chemicals in >600 high-throughput screening (HTS) assays. The diversity of assays has been increased for developmental toxicity with several HTS platforms, including the devTOX-quickPredict assay from Stemina Biomarker Discovery utilizing the human embryonic stem cell line (H9). Translating these HTS data into higher-order predictions of developmental toxicity is a significant challenge. Here, we address the application of computational systems models that recapitulate the kinematics of dynamical cell signaling networks (e.g., SHH, FGF, BMP, retinoids) in a modeling environment. Examples include angiogenesis (angiodysplasia) and dysmorphogenesis. Being numerically responsive to perturbation, these models are amenable to data integration for systems Toxicology and Adverse Outcome Pathways (AOPs). The AOP simulation outputs predict potential phenotypes based on the in vitro ToxCast HTS data. A heuristic computational intelligence framework that recapitulates the kinematics of dynamical cell signaling networks in the embryo, together with the in vitro profiling data, produce quantitative pr

  7. Approaches to the implementation of the activity approach in teaching computer science students with mobile computing systems

    Directory of Open Access Journals (Sweden)

    Марина Александровна Григорьева


    Full Text Available This article examines the need to incorporate the activity approach in teaching computer science, and the creation and application in educational practice of a methodical system based on the use of mobile computing systems.

  8. Evolutionary Computation and Its Applications in Neural and Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Biaobiao Zhang


    Full Text Available Neural networks and fuzzy systems are two soft-computing paradigms for system modelling. Adapting a neural or fuzzy system requires solving two optimization problems: structural optimization and parametric optimization. Structural optimization is a discrete optimization problem which is very hard to solve using conventional optimization techniques. Parametric optimization can be solved using conventional optimization techniques, but the solution may easily become trapped at a bad local optimum. Evolutionary computation is a general-purpose stochastic global optimization approach under the universally accepted neo-Darwinian paradigm, which is a combination of the classical Darwinian evolutionary theory, the selectionism of Weismann, and the genetics of Mendel. Evolutionary algorithms (EAs) are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with emphasis on genetic algorithms and evolutionary strategies. Other evolutionary algorithms such as genetic programming, evolutionary programming, particle swarm optimization, immune algorithm, and ant colony optimization are also described. Some topics pertaining to evolutionary algorithms are also discussed, and a comparison between evolutionary algorithms and simulated annealing is made. Finally, the application of EAs to the learning of neural networks as well as to the structural and parametric adaptations of fuzzy systems is also detailed.
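    The parametric-optimization role of evolutionary algorithms described above can be illustrated with a minimal real-coded genetic algorithm minimising the sphere function; the operators, hyper-parameters, and names below are our own illustrative choices, not the survey's algorithms:

```python
import numpy as np

# Minimal real-coded GA: tournament selection, arithmetic crossover,
# Gaussian mutation, and elitism, minimising f(x) = sum(x^2).
rng = np.random.default_rng(1)

def sphere(x):
    return np.sum(x * x, axis=-1)

pop_size, dim, generations = 60, 5, 200
pop = rng.uniform(-5.0, 5.0, (pop_size, dim))

for _ in range(generations):
    fit = sphere(pop)
    # Tournament selection: keep the better of two random individuals.
    i, j = rng.integers(0, pop_size, (2, pop_size))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    # Arithmetic crossover with a random partner.
    mates = parents[rng.permutation(pop_size)]
    alpha = rng.uniform(0.0, 1.0, (pop_size, 1))
    children = alpha * parents + (1.0 - alpha) * mates
    # Gaussian mutation with a small per-gene probability.
    mask = rng.random(children.shape) < 0.1
    children = children + mask * rng.normal(0.0, 0.3, children.shape)
    # Elitism: preserve the best individual seen in this generation.
    children[0] = pop[np.argmin(fit)]
    pop = children

best = pop[np.argmin(sphere(pop))]
print(sphere(best))  # typically a small value near 0
```

Structural optimization (e.g. evolving network topologies) would replace the real-valued genome with a discrete encoding while keeping the same selection/variation loop.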

  9. Quantum computation of a complex system: The kicked Harper model (United States)

    Lévi, B.; Georgeot, B.


    The simulation of complex quantum systems on a quantum computer is studied, taking the kicked Harper model as an example. This well-studied system has a rich variety of dynamical behavior depending on parameters, displays interesting phenomena such as fractal spectra, mixed phase space, dynamical localization, anomalous diffusion, or partial delocalization, and can describe electrons in a magnetic field. Three different quantum algorithms are presented and analyzed, enabling us to simulate efficiently the evolution operator of this system with different precision using different resources. Depending on the parameters chosen, the system is near integrable, localized, or partially delocalized. In each case we identify transport or spectral quantities which can be obtained more efficiently on a quantum computer than on a classical one. In most cases, a polynomial gain compared to classical algorithms is obtained, which can be quadratic or less depending on the parameter regime. We also present the effects of static imperfections on the quantities selected and show that depending on the regime of parameters, very different behaviors are observed. Some quantities can be obtained reliably with moderate levels of imperfection even for large numbers of qubits, whereas others are exponentially sensitive to the number of qubits. In particular, the imperfection threshold for delocalization becomes exponentially small in the partially delocalized regime. Our results show that interesting behavior can be observed with as few as 7-8 qubits and can be reliably measured in the presence of moderate levels of internal imperfections.
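    One kicked-Harper period factorizes into a kick diagonal in position and a rotation diagonal in momentum, which a classical split-operator/FFT sketch makes concrete (this is the classical reference computation, not the paper's quantum algorithms; grid size and parameter values are illustrative):

```python
import numpy as np

# One period of the kicked Harper model on a 2*pi-periodic torus:
#   U = exp(-i * L * cos(p) / hbar) * exp(-i * K * cos(x) / hbar)
# with effective hbar = 2*pi/N (illustrative discretization choices).
N = 256
K, L_par = 2.0, 2.0
hbar = 2 * np.pi / N
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, 1.0 / N)        # integer momentum indices, FFT order
p = 2 * np.pi * k / N                 # momenta on the torus, p in (-pi, pi]

def one_period(psi):
    psi = np.exp(-1j * K * np.cos(x) / hbar) * psi   # kick, diagonal in x
    phi = np.fft.fft(psi)
    phi *= np.exp(-1j * L_par * np.cos(p) / hbar)    # rotation, diagonal in p
    return np.fft.ifft(phi)

# Evolve a normalized Gaussian wavepacket for 50 kicks.
psi = np.exp(-((x - np.pi) ** 2) / 0.2).astype(complex)
psi /= np.linalg.norm(psi)
for _ in range(50):
    psi = one_period(psi)
print(abs(np.linalg.norm(psi) - 1.0) < 1e-9)  # evolution stays unitary
```

On a quantum computer the same two diagonal factors are implemented as quantum circuits, with the quantum Fourier transform replacing the FFT.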

  10. Integrated computational imaging system for enhanced polarimetric measurements (United States)

    Haider, Shahid A.; Kazemzadeh, Farnoud; Clausi, David A.; Wong, Alexander


    Polarimetry is a common technique used in chemistry for solution characterization and analysis, giving insight into the molecular structure of a solution measured through the rotation of linearly polarized light. This rotation is characterized by Biot's law. Without large optical path lengths or high concentrations of solution, these optical rotations are typically very small, requiring elaborate and costly apparatuses. To ensure that the rotation measurements are accurate, these devices usually perform complex optical procedures or time-averaged point measurements to ensure that any intensity variation seen is a product of optical rotation and not of inherent noise sources in the system, such as sensor or shot noise. Time averaging is a lengthy process and rarely utilizes all of the information available on the sensor. To this end, we have developed a novel integrated, miniature, computational imaging system that enhances polarimetric measurements by taking advantage of the full spot size observed on an array detector. This computational imaging system is capable of using a single acquisition at unity gain to enhance the polarimetric measurements using a probabilistic framework, which accounts for inherent noise and optical characteristics in the acquisition process, to take advantage of spatial intensity relations. This approach is faster than time-averaging methods and can better account for any measurement uncertainties. In preliminary experiments, this system has produced measurements whose consistency across multiple trials with the same chemical solution is comparable to that of time-averaging techniques.
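    Biot's law mentioned above is simply multiplicative: the observed rotation is the specific rotation times path length times concentration. A minimal sketch, with illustrative values (a commonly quoted specific rotation for sucrose, not data from the paper):

```python
# Biot's law: alpha = [alpha] * l * c
# alpha   -- observed rotation (degrees)
# [alpha] -- specific rotation (deg * mL / (g * dm)) at a given
#            wavelength and temperature
# l       -- path length in decimetres, c -- concentration in g/mL
specific_rotation = 66.4   # illustrative: sucrose near 589 nm, 20 C
path_length_dm = 1.0       # a 10 cm sample tube
concentration = 0.10       # g per mL of solution

alpha = specific_rotation * path_length_dm * concentration
print(round(alpha, 2))  # -> 6.64 degrees
```

This also shows why small cells and dilute solutions yield tiny rotations, motivating the noise-aware estimation framework above.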

  11. Computer modeling the ATLAS Trigger/DAQ system performance

    CERN Document Server

    Cranfield, R; Kaczmarska, A; Korcyl, K; Vermeulen, J C; Wheeler, S


    In this paper simulation ("computer modeling") of the Trigger/DAQ system of the ATLAS experiment at the LHC accelerator is discussed. The system will consist of a few thousand end-nodes interconnected by a large Local Area Network. The nodes will run various applications under the Linux OS. The purpose of computer modeling is to verify the rate-handling capability of the system designed and to find potential problem areas. The models of the system components are kept as simple as possible but are sufficiently detailed to reproduce the behavioral aspects relevant to the issues studied. Values of the model parameters have been determined using small dedicated setups. This calibration phase has been followed by a validation process. More complex setups have been wired up and relevant measurement results were obtained. These setups were also modeled and the results were compared to the measurement results. Discrepancies led to modification and extension of the set of parameters. After gaining conf...

  12. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)]


    The focus of this project is the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm the new TBCs’ properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulation to screening the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  13. Application of computational systems biology to explore environmental toxicity hazards

    DEFF Research Database (Denmark)

    Audouze, Karine Marie Laure; Grandjean, Philippe


    Background: Computer-based modeling is part of a new approach to predictive toxicology.Objectives: We investigated the usefulness of an integrated computational systems biology approach in a case study involving the isomers and metabolites of the pesticide dichlorodiphenyltrichloroethane (DDT......) to ascertain their possible links to relevant adverse effects.Methods: We extracted chemical-protein association networks for each DDT isomer and its metabolites using ChemProt, a disease chemical biology database that includes both binding and gene expression data, and we explored protein-protein interactions...... using a human interactome network. To identify associated dysfunctions and diseases, we integrated protein-disease annotations into the protein complexes using the Online Mendelian Inheritance in Man database and the Comparative Toxicogenomics Database.Results: We found 175 human proteins linked to p...

  14. Computer vision in roadway transportation systems: a survey (United States)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja


    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  15. Computation of robustly stabilizing PID controllers for interval systems. (United States)

    Matušů, Radek; Prokop, Roman


    The paper is focused on the computation of all possible robustly stabilizing Proportional-Integral-Derivative (PID) controllers for plants with interval uncertainty. The main idea of the proposed method is based on the technique of Tan et al. for calculating (nominally) stabilizing PI and PID controllers, or robustly stabilizing PI controllers, by plotting the stability boundary locus in either the P-I plane or the P-I-D space. Refinement of the existing method by consideration of 16 segment plants instead of 16 Kharitonov plants provides an elegant and efficient tool for finding all robustly stabilizing PID controllers for an interval system. The validity and relatively effortless application of the presented theoretical concepts are demonstrated through a computation and simulation example in which the uncertain mathematical model of an experimental oblique wing aircraft is robustly stabilized.
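    The stability boundary locus idea can be sketched for the PI case: on the boundary the controller must satisfy C(jw) = -1/G(jw), which for C(s) = kp + ki/s gives kp(w) = -Re G / |G|^2 and ki(w) = -w * Im G / |G|^2. The third-order plant below is an illustrative stand-in, not the oblique-wing aircraft model of the paper:

```python
import numpy as np

# Stability boundary locus for a PI controller C(s) = kp + ki/s:
# from 1 + C(jw) G(jw) = 0, i.e. C(jw) = -1/G(jw), matching real and
# imaginary parts gives
#   kp(w) = -Re G(jw) / |G(jw)|^2,   ki(w) = -w * Im G(jw) / |G(jw)|^2.
def G(s):
    # Illustrative third-order plant G(s) = 1 / (s + 1)^3.
    return 1.0 / ((s + 1.0) ** 3)

w = np.linspace(0.01, 10.0, 500)
Gjw = G(1j * w)
mag2 = np.abs(Gjw) ** 2
kp = -Gjw.real / mag2
ki = -w * Gjw.imag / mag2

# The (kp, ki) pairs trace the curve separating stabilising from
# non-stabilising PI gains; plot kp against ki (e.g. with matplotlib)
# to read off the stabilising region in the P-I plane.
print(abs(kp[0] + 1.0) < 0.01)  # as w -> 0, kp -> -Re G(0) = -1 here
```

For interval plants, the paper's refinement repeats this construction over the 16 segment plants and intersects the resulting stabilising regions.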

  16. Designing Guiding Systems for Brain-Computer Interfaces (United States)

    Kosmyna, Nataliya; Lécuyer, Anatole


    The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore the instructions and feedback for BCIs in order to provide a systematic taxonomy to describe BCI guiding systems. The purpose of our work is to give the necessary clues to researchers and designers in Human–Computer Interaction (HCI) to make the fusion between BCIs and HCI more fruitful, but also to better understand the possibilities BCIs can provide to them. PMID:28824400

  17. Designing Guiding Systems for Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Nataliya Kosmyna


    Full Text Available The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore the instructions and feedback for BCIs in order to provide a systematic taxonomy to describe BCI guiding systems. The purpose of our work is to give the necessary clues to researchers and designers in Human–Computer Interaction (HCI) to make the fusion between BCIs and HCI more fruitful, but also to better understand the possibilities BCIs can provide to them.

  18. Computational lighting by an LED-based system (United States)

    Chien, Ming-Chin; Tien, Chung-Hao


    A methodology analogous to a general lens design rule was proposed to step-by-step optimize the spectral power distribution (SPD) of a composite light-emitting diode (LED) cluster. The computation is conducted for arbitrary SPD combinations for applications in radiometry, photometry and colorimetry, respectively. Based on the matrix computation, a spectrally tunable source is implemented to strategically manipulate the chromaticity, system efficiency and light quality according to all kinds of operational purposes. With the R/G/B/A/CW light engine and its graphic utility interface, the cluster engine was experimentally validated to offer wide-ranged operation in ambient temperature (Ta = 10°C to 100°C) with a high color quality scale (CQS > 85 points) as well as high luminous efficiency (LE > 100 lm/W) over chromaticity points from 2800 K to 8000 K.
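    The matrix-computation step can be sketched as a least-squares mixing problem: given each channel's tristimulus contribution at full drive, solve for drive levels that reach a target white point. The channel numbers below are illustrative placeholders, not measured data from the paper, and a practical engine would additionally enforce non-negative drive levels:

```python
import numpy as np

# Columns: R, G, B, A, CW channel tristimulus values at full drive
# (illustrative numbers only).
channels = np.array([
    [0.45, 0.18, 0.16, 0.38, 0.95],   # X
    [0.22, 0.72, 0.08, 0.30, 1.00],   # Y
    [0.00, 0.10, 0.85, 0.02, 1.05],   # Z
])
target = np.array([0.9505, 1.0000, 1.0888])   # D65 white point, Y = 1

# Under-determined 3x5 system: lstsq returns the minimum-norm exact
# mix because the channel matrix has full row rank.
w, _, rank, _ = np.linalg.lstsq(channels, target, rcond=None)
mixed = channels @ w
print(np.allclose(mixed, target, atol=1e-6))
```

The two spare degrees of freedom are what the paper exploits to trade off efficiency and color quality at a fixed chromaticity.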

  19. Overreaction to External Attacks on Computer Systems Could Be More Harmful than the Viruses Themselves. (United States)

    King, Kenneth M.


    Discussion of the recent computer virus attacks on computers with vulnerable operating systems focuses on the values of educational computer networks. The need for computer security procedures is emphasized, and the ethical use of computer hardware and software is discussed. (LRW)


    CERN Multimedia

    I. Fisk


    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   Tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...


    CERN Multimedia

    I. Fisk


      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...


    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...


    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...


    CERN Multimedia

    Contributions from I. Fisk


    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...


    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  6. Systems approaches to computational modeling of the oral microbiome

    Directory of Open Access Journals (Sweden)

    Dimiter V. Dimitrov


    Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system-level understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain on diet – oral microbiome – host mucosal transcriptome interactions. In particular, we focus on the systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications both in basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and in human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders.

  7. Knowledge-Based Systems in Biomedicine and Computational Life Science

    CERN Document Server

    Jain, Lakhmi


    This book presents a sample of research on knowledge-based systems in biomedicine and computational life science. The contributions include: a personalized stress diagnosis system; an image analysis system for breast cancer diagnosis; analysis of neuronal cell images; structure prediction of proteins; the relationship between two mental disorders; detection of cardiac abnormalities; holistic-medicine-based treatment; and analysis of life-science data.

  8. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J


    This textbook presents basic numerical methods and applies them to a large variety of physical models in multiple computer experiments. Classical algorithms and more recent methods are explained. Partial differential equations are treated generally, comparing important methods, and equations of motion are solved by a large number of simple as well as more sophisticated methods. Several modern algorithms for quantum wavepacket motion are compared. The first part of the book discusses the basic numerical methods, while the second part simulates classical and quantum systems. Simple but non-trivial examples from a broad range of physical topics offer readers insights into both the numerical treatment and the simulated problems. Rotational motion is studied in detail, as are simple quantum systems. A two-level system in an external field demonstrates elementary principles from quantum optics and the simulation of a quantum bit. Principles of molecular dynamics are shown. Modern boundary element methods are presented ...
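
The equation-of-motion solvers the book surveys can be illustrated with a minimal sketch (not taken from the text): velocity-Verlet integration of a harmonic oscillator, one of the simplest "simple but non-trivial" examples of the kind mentioned.

```python
import math

def velocity_verlet(x0, v0, omega, dt, steps):
    """Integrate x'' = -omega^2 * x with the velocity-Verlet scheme."""
    x, v = x0, v0
    a = -omega**2 * x
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt**2   # position update
        a_new = -omega**2 * x           # acceleration at the new position
        v += 0.5 * (a + a_new) * dt     # velocity update with averaged force
        a = a_new
    return x, v

# After one full period of a unit oscillator the state should return
# (nearly) to the starting point, since the scheme is symplectic.
omega = 1.0
period = 2 * math.pi / omega
x, v = velocity_verlet(1.0, 0.0, omega, dt=period / 1000, steps=1000)
```

The second-order accuracy of the scheme keeps the phase error over one period far below the step size, which is what makes such methods attractive for long simulations.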

  9. Search systems and computer-implemented search methods

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.


    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.
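
The facet-association scheme the abstract describes can be sketched as a toy inverted index from image content facets to image objects; all item identifiers and facet names below are invented for illustration and do not come from the patent.

```python
from collections import defaultdict

# Hypothetical collection: image objects with content facets already extracted.
items = {
    "img-001": {"outdoor", "vehicle"},
    "img-002": {"outdoor", "person"},
    "img-003": {"indoor", "person"},
}

# Associate each image content facet with the image objects that exhibit it.
facet_index = defaultdict(set)
for item_id, facets in items.items():
    for facet in facets:
        facet_index[facet].add(item_id)

def search(*facets):
    """Return the ids of items matching all requested facets."""
    result = None
    for f in facets:
        hits = facet_index.get(f, set())
        result = hits if result is None else result & hits
    return sorted(result or [])
```

A display layer would then depict the image objects grouped under each facet, as in the described system.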

  10. The ARES test system for palm OS handheld computers. (United States)

    Elsmore, Timothy F; Reeves, Dennis L; Reeves, Andrea N


    The ARES (ANAM Readiness Evaluation System) is a cognitive testing system designed for operation on Palm OS handheld computers, i.e., personal digital assistants (PDAs). It provides an inexpensive and portable testing platform for field and clinical applications. ARES test batteries can be configured from a library of tests derived from the ANAM test system. ARES features include support for multiple users on a single PDA, a Microsoft Windows test battery authoring program, and a program for downloading, viewing, graphing, and archiving data. In validity tests, the same subjects were tested on identical ARES and conventional ANAM NeuroCog test batteries. Scores from the two platforms correlated highly, but absolute scores differed slightly. In reliability testing with the ARES Warrior battery, ARES scores were highly correlated in daily tests.

  11. CSI Flight Computer System and experimental test results (United States)

    Sparks, Dean W., Jr.; Peri, F., Jr.; Schuler, P.


    This paper describes the CSI Computer System (CCS) and the experimental tests performed to validate its functionality. This system is comprised of two major components: the space flight qualified Excitation and Damping Subsystem (EDS) which performs controls calculations; and the Remote Interface Unit (RIU) which is used for data acquisition, transmission, and filtering. The flight-like RIU is the interface between the EDS and the sensors and actuators positioned on the particular structure under control. The EDS and RIU communicate over the MIL-STD-1553B, a space flight qualified bus. To test the CCS under realistic conditions, it was connected to the Phase-0 CSI Evolutionary Model (CEM) at NASA Langley Research Center. The following schematic shows how the CCS is connected to the CEM. Various tests were performed which validated the ability of the system to perform control/structures experiments.

  12. Integrated databases and computer systems for studying eukaryotic gene expression. (United States)

    Kolchanov, N A; Ponomarenko, M P; Frolov, A S; Ananko, E A; Kolpakov, F A; Ignatieva, E V; Podkolodnaya, O A; Goryachkovskaya, T N; Stepanenko, I L; Merkulova, T I; Babenko, V V; Ponomarenko, Y V; Kochetov, A V; Podkolodny, N L; Vorobiev, D V; Lavryushev, S V; Grigorovich, D A; Kondrakhin, Y V; Milanesi, L; Wingender, E; Solovyev, V; Overton, G C


    The goal of the work was to develop a WWW-oriented computer system providing a maximal integration of informational and software resources on the regulation of gene expression and navigation through them. Rapid growth of the variety and volume of information accumulated in the databases on regulation of gene expression necessarily requires the development of computer systems for automated discovery of the knowledge that can be further used for analysis of regulatory genomic sequences. The GeneExpress system developed includes the following major informational and software modules: (1) Transcription Regulation (TRRD) module, which contains the databases on transcription regulatory regions of eukaryotic genes and TRRD Viewer for data visualization; (2) Site Activity Prediction (ACTIVITY), the module for analysis of functional site activity and its prediction; (3) Site Recognition module, which comprises (a) B-DNA-VIDEO system for detecting the conformational and physicochemical properties of DNA sites significant for their recognition, (b) Consensus and Weight Matrices (ConsFrec) and (c) Transcription Factor Binding Sites Recognition (TFBSR) systems for detecting conservative contextual regions of functional sites and their recognition; (4) Gene Networks (GeneNet), which contains an object-oriented database accumulating the data on gene networks and signal transduction pathways, and the Java-based Viewer for exploration and visualization of the GeneNet information; (5) mRNA Translation (Leader mRNA), designed to analyze structural and contextual properties of mRNA 5'-untranslated regions (5'-UTRs) and predict their translation efficiency; (6) other program modules designed to study the structure-function organization of regulatory genomic sequences and regulatory proteins. GeneExpress is available at http://wwwmgs.bionet.nsc.ru/systems/GeneExpress/ and the links to the mirror site(s) can be found at ++.
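
The weight-matrix recognition that modules such as ConsFrec and TFBSR build on can be sketched as a position weight matrix scan; the matrix values and sequence below are invented for illustration, not taken from GeneExpress.

```python
# Hypothetical position weight matrix for a 4-bp binding site:
# one score per base at each position.
pwm = [
    {"A": 1.0, "C": -1.0, "G": -1.0, "T": 0.5},
    {"A": -1.0, "C": 1.2, "G": -0.5, "T": -1.0},
    {"A": 0.8, "C": -1.0, "G": -1.0, "T": -0.5},
    {"A": -1.0, "C": -1.0, "G": 1.5, "T": -1.0},
]

def best_site(sequence):
    """Slide the PWM along the sequence; return (best score, position)."""
    best = (float("-inf"), -1)
    for i in range(len(sequence) - len(pwm) + 1):
        window = sequence[i:i + len(pwm)]
        score = sum(col[base] for col, base in zip(pwm, window))
        best = max(best, (score, i))
    return best

score, pos = best_site("TTACAGGC")  # best window is "ACAG" at position 2
```

Real systems threshold such scores against a background model; this sketch only shows the scan itself.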

  13. Multiscale analysis of nonlinear systems using computational homology

    Energy Technology Data Exchange (ETDEWEB)

    Konstantin Mischaikow, Rutgers University/Georgia Institute of Technology; Michael Schatz, Georgia Institute of Technology; William Kalies, Florida Atlantic University; Thomas Wanner, George Mason University


    Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.
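
The sublevel-set approach described above can be sketched in miniature: threshold a sampled field and count the connected components of the sublevel set, i.e. its zeroth Betti number. The field values below are invented; a pure-Python flood fill stands in for the project's computational homology software.

```python
def betti0_sublevel(field, threshold):
    """Number of 4-connected components of {(i, j) : field[i][j] <= threshold}."""
    rows, cols = len(field), len(field[0])
    seen = set()
    components = 0
    for i in range(rows):
        for j in range(cols):
            if field[i][j] <= threshold and (i, j) not in seen:
                components += 1
                stack = [(i, j)]
                while stack:  # flood-fill one connected component
                    r, c = stack.pop()
                    if (r, c) in seen:
                        continue
                    seen.add((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and field[nr][nc] <= threshold
                                and (nr, nc) not in seen):
                            stack.append((nr, nc))
    return components

# Toy concentration field with two separate low-value "phases".
field = [
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.9, 0.9, 0.9, 0.1],
]
```

Tracking how such Betti numbers change with the threshold, or over time, is the basic measurement behind the microstructure characterization described in the abstract.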

  15. Time-Domain Terahertz Computed Axial Tomography NDE System (United States)

    Zimdars, David


    NASA has identified the need for advanced non-destructive evaluation (NDE) methods to characterize aging and durability in aircraft materials to improve the safety of the nation's airline fleet. 3D THz tomography can play a major role in the detection and characterization of flaws and degradation in aircraft materials, including Kevlar-based composites and Kevlar and Zylon fabric covers for soft-shell fan containment, where aging and durability issues are critical. A prototype computed tomography (CT) time-domain (TD) THz imaging system has been used to generate 3D images of several test objects, including a TUFI tile (a thermal protection system tile used on the Space Shuttle and possibly the Orion or similar capsules). This TUFI tile had simulated impact damage that was located, and the depth of damage was determined. The CT motion control gantry was designed and constructed, and then integrated with a T-Ray 4000 control unit and motion controller to create a complete CT TD-THz imaging system prototype. A data collection software script was developed that takes multiple z-axis slices in sequence and saves the data for batch processing. The data collection software was integrated with the ability to batch process the slice data with the CT TD-THz image reconstruction software. The time required to take a single CT slice was decreased from six minutes to approximately one minute by replacing the 320-ps, 100-Hz waveform acquisition system with an 80-ps, 1,000-Hz waveform acquisition system. The TD-THz computed tomography system was built from pre-existing commercial off-the-shelf (COTS) subsystems. A CT motion control gantry was constructed from COTS components that can handle larger samples. The motion control gantry allows inspection of sample sizes of up to approximately one cubic foot (~0.03 cubic meters). The system reduced to practice a CT TD-THz system incorporating a COTS 80-ps/1-kHz waveform scanner.
The incorporation of this scanner in the system allows acquisition of 3D
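
The quoted speed-up follows directly from the waveform repetition rate: for a fixed number of waveforms per slice, acquisition time scales inversely with the rate. A rough sketch (the waveform count per slice is an assumption for illustration, not stated in the record):

```python
def slice_time_seconds(waveforms_per_slice, rate_hz):
    """Time to acquire one CT slice at a given waveform repetition rate."""
    return waveforms_per_slice / rate_hz

# Assuming ~36,000 waveforms per slice, a 100-Hz scanner needs six minutes;
# a 1,000-Hz scanner needs about 36 s, consistent with the reported reduction
# to roughly one minute once motion and processing overheads are included.
old = slice_time_seconds(36_000, 100)    # seconds at 100 Hz
new = slice_time_seconds(36_000, 1_000)  # seconds at 1 kHz
```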

  16. Computer Advisory System in the Domain of Copper Alloys Manufacturing

    Directory of Open Access Journals (Sweden)

    Wilk-Kołodziejczyk D.


    Full Text Available The main scope of the article is the development of a computer system that gives advice on problems of copper alloys manufacturing. The problem involves choosing an appropriate type of bronze (e.g. the BA 1044 bronze) with possible modification (e.g. calcium carbide modifications: Ca + C or CaC2) and possible heat treatment operations (quenching, tempering) in order to obtain the desired mechanical properties of the manufactured material, described by tensile strength Rm, yield strength Rp0.2, and elongation A5. Case-Based Reasoning is proposed for the construction of the computer system that is the goal of the work presented here. Case-Based Reasoning is a methodology within Artificial Intelligence techniques which enables solving new problems based on experience, that is, solutions obtained in the past. Case-Based Reasoning also enables incremental learning, because every new experience is retained each time in order to be available for future problem solving. The solution proposed by the developed system can be used by a technologist as a rough solution to a copper alloys manufacturing problem, which requires further tests in order to confirm its correctness.
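
The retrieve step of Case-Based Reasoning described above can be sketched as nearest-neighbour matching over past cases. The case data, property values, and decision strings below are invented for illustration and are not taken from the article.

```python
import math

# Hypothetical past cases: target mechanical properties -> past decision.
cases = [
    ({"Rm": 520, "Rp02": 200, "A5": 15}, "BA1044 + CaC2 modification"),
    ({"Rm": 600, "Rp02": 250, "A5": 10}, "BA1044 + quenching and tempering"),
    ({"Rm": 450, "Rp02": 180, "A5": 20}, "BA1044, no treatment"),
]

def retrieve(query):
    """Return the decision of the most similar past case (Euclidean distance)."""
    def dist(props):
        keys = sorted(props)
        return math.dist([props[k] for k in keys], [query[k] for k in keys])
    _, decision = min(cases, key=lambda case: dist(case[0]))
    return decision

# A technologist asks for Rm ~ 590 MPa, Rp0.2 ~ 245 MPa, A5 ~ 11%.
advice = retrieve({"Rm": 590, "Rp02": 245, "A5": 11})
```

Retaining each validated new case back into `cases` would implement the incremental learning the abstract mentions; real systems also normalize the property scales before computing distances.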

  17. Computer-based desktop system for surgical videotape editing. (United States)

    Vincent-Hamelin, E; Sarmiento, J M; de la Puente, J M; Vicente, M


    The educational role of surgical video presentations should be optimized by linking surgical images to graphic evaluation of indications, techniques, and results. We describe a PC-based video production system for personal editing of surgical tapes, according to the objectives of each presentation. The hardware requirement is a personal computer (100-MHz processor, 1-GB hard disk, 16 MB RAM) with a PC-to-TV/video transfer card plugged into a slot. Computer-generated numerical data, texts, and graphics are transformed into analog signals displayed on TV/video. A Genlock interface (a special interface card) synchronizes digital and analog signals, to overlay surgical images with electronic illustrations. The presentation is stored as digital information or recorded on a tape. The proliferation of multimedia tools is leading us to adapt presentations to the objectives of lectures and to integrate conceptual analyses with dynamic image-based information. We describe a system that handles both digital and analog signals, the production being recorded on a tape. Movies may be managed in a digital environment, with either an "on-line" or "off-line" approach. System requirements are high, but handling a single device optimizes editing without incurring such complexity that management becomes impractical to surgeons. Our experience suggests that computerized editing allows linking surgical scientific and didactic messages on a single communication medium, either a videotape or a CD-ROM.

  18. Computational Models for Creating Homogeneous Magnetic Field Generation Systems

    Directory of Open Access Journals (Sweden)

    Gerlys M. Villalobos-Fontalvo


    Full Text Available It is increasingly common to use magnetic fields at the cellular level to assess their interaction with biological tissues. The stimulation is usually done with Helmholtz coils, which generate a uniform magnetic field in the center of the system. However, assessing cellular behavior under different magnetic field characteristics can be a long and expensive process. Computational models can therefore be used to estimate cellular behavior under a variety of field characteristics prior to in-vitro stimulation in a laboratory. In this paper, we present a methodology for the development of three computational models of homogeneous magnetic field generation systems for possible application in cell stimulation. The models were developed in the Ansys Workbench environment, and the magnetic flux density behavior was evaluated in different configurations. The results were validated with theoretical calculations from the Biot-Savart law. Validated models will be coupled to the Ansys APDL environment in order to assess the harmonic response of the system.
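
For a Helmholtz pair (coil separation equal to the coil radius), the Biot-Savart law used for validation reduces to a closed form at the centre: B = (4/5)^(3/2) * mu0 * N * I / R. A quick sketch, with illustrative coil parameters (not taken from the paper):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def helmholtz_center_field(turns, current_a, radius_m):
    """Flux density (tesla) at the centre of a Helmholtz coil pair."""
    return (4 / 5) ** 1.5 * MU0 * turns * current_a / radius_m

# Illustrative example: 100-turn coils, 1 A, 0.15 m radius -> ~0.6 mT.
b = helmholtz_center_field(100, 1.0, 0.15)
```

Comparing such a closed-form value against the simulated flux density at the centre node is the kind of validation the abstract describes.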

  19. A computer graphics system for visualizing spacecraft in orbit (United States)

    Eyles, Don E.


    To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.

  20. Multi-threaded, discrete event simulation of distributed computing systems (United States)

    Legrand, Iosif; MONARC Collaboration


    The LHC experiments have envisaged computing systems of unprecedented complexity, for which it is necessary to provide a realistic description and modeling of data access patterns, and of many jobs running concurrently on large-scale distributed systems and exchanging very large amounts of data. A process-oriented approach to discrete event simulation is well suited to describe various activities running concurrently, as well as the stochastic arrival patterns specific to this type of simulation. Threaded objects or "Active Objects" can provide a natural way to map the specific behaviour of distributed data processing into the simulation program. The simulation tool developed within MONARC is based on Java (TM) technology, which provides adequate tools for developing a flexible and distributed process-oriented simulation. Proper graphics tools, and ways to analyze data interactively, are essential in any simulation project. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modeling of complex data access patterns by multiple concurrent users in large-scale computing systems in a wide range of possible architectures, from centralized to highly distributed. A comparison between queuing theory and realistic client-server measurements is also presented.
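
The comparison between queuing theory and simulation mentioned at the end can be sketched in miniature with an M/M/1 queue, whose mean sojourn time 1/(mu - lambda) is known in closed form. This stand-alone sketch is far simpler than MONARC's threaded active-object model; it only illustrates the validation idea.

```python
import random

def mm1_mean_sojourn(lam, mu, n_jobs, seed=1):
    """Simulate an M/M/1 queue; return the mean time jobs spend in the system."""
    rng = random.Random(seed)
    clock = 0.0           # arrival time of the current job
    server_free_at = 0.0  # when the single server next becomes idle
    total = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(lam)       # exponential inter-arrival time
        start = max(clock, server_free_at)  # wait if the server is busy
        server_free_at = start + rng.expovariate(mu)  # exponential service
        total += server_free_at - clock     # sojourn time of this job
    return total / n_jobs

lam, mu = 0.5, 1.0
theory = 1 / (mu - lam)                  # M/M/1 mean sojourn time
sim = mm1_mean_sojourn(lam, mu, 200_000)
```

For a realistic distributed system the arrival and service processes are far from exponential, which is exactly why MONARC complements queuing theory with measurement-driven simulation.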