WorldWideScience

Sample records for on-site computer system

  1. The feasibility of mobile computing for on-site inspection.

    Energy Technology Data Exchange (ETDEWEB)

    Horak, Karl Emanuel; DeLand, Sharon Marie; Blair, Dianna Sue

    2014-09-01

    With over 5 billion cellphones in a world of 7 billion inhabitants, mobile phones are the most quickly adopted consumer technology in the history of the world. Miniaturized, power-efficient sensors, especially video-capable cameras, are becoming extremely widespread, especially when one factors in wearable technology like the Pebble smartwatch, GoPro video systems, Google Glass, and lifeloggers. Tablet computers are becoming more common, lighter weight, and more power-efficient. In this report the authors explore recent developments in mobile computing and their potential application to on-site inspection for arms control verification and treaty compliance determination. They examine how such technology can be effectively applied to current and potential future inspection regimes. Use cases are given for both host-escort and inspection teams. The results of field trials and their implications for on-site inspections are discussed.

  2. On-site early-warning system for Bishkek (Kyrgyzstan)

    Directory of Open Access Journals (Sweden)

    Dino Bindi

    2015-04-01

    Full Text Available In this work, the development of an on-site early warning system for Bishkek (Kyrgyzstan) is outlined. Several low-cost sensors equipped with MEMS accelerometers are installed in eight buildings distributed within the urban area. The sensing units communicate with each other via wireless links, and the seismic data are streamed in real time to the data center over the internet. Since each sensing unit has computing capabilities, software for data processing can be installed to perform decentralized actions. In particular, each sensing unit can perform event detection tasks and run software for on-site early warning. If a description of the building's vulnerability is uploaded to the sensing unit, this information can be exploited to introduce the expected probability of damage into an early-warning protocol customized for the specific structure.
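
    The decentralized event detection that each sensing unit performs can be sketched with a classic STA/LTA (short-term over long-term average) trigger. This is a minimal illustrative sketch; the abstract does not specify the actual detection algorithm used in the Bishkek units.

```python
# Hypothetical sketch of on-unit event detection using an STA/LTA trigger.
# The actual algorithm running on the Bishkek sensing units is not
# described in the abstract; this is the textbook approach.

def sta_lta_trigger(samples, sta_len=10, lta_len=100, threshold=4.0):
    """Return the first sample index where STA/LTA exceeds the threshold,
    or None if no trigger occurs."""
    for i in range(lta_len, len(samples)):
        sta = sum(abs(s) for s in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(s) for s in samples[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

# Quiet background noise followed by a strong arrival triggers the detector
# shortly after the arrival enters the short-term window.
trace = [0.01] * 200 + [1.0] * 20
print(sta_lta_trigger(trace))  # → 201
```

    In a real deployment the trigger would run on streaming data and feed the station's early-warning logic rather than a batch list.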

  3. Information model for on-site inspection system

    Energy Technology Data Exchange (ETDEWEB)

    Bray, O.H.; Deland, S.

    1997-01-01

    This report describes the information model that was jointly developed as part of two FY93 LDRDs: (1) Information Integration for Data Fusion, and (2) Interactive On-Site Inspection System: An Information System to Support Arms Control Inspections. The report reviews the purpose and scope of the two LDRD projects and the prototype development approach, including the use of a GIS. Section 2 describes the information modeling methodology. Section 3 provides a conceptual data dictionary for the OSIS (On-Site Information System) model, which can be used in conjunction with the detailed information model provided in the Appendix. Section 4 discusses the lessons learned from the modeling and the prototype. Section 5 identifies the next steps--two alternate paths for future development. The long-term purpose of the On-Site Inspection LDRD was to show the benefits of an information system to support a wide range of on-site inspection activities for both offensive and defensive inspections. The database structure and the information system would support inspection activities under nuclear, chemical, biological, and conventional arms control treaties. This would allow a common database to be shared for all types of inspections, providing much greater cross-treaty synergy.

  4. 76 FR 32227 - DST Systems, Inc., Including On-Site Leased Workers From Comsys Information Technology Services...

    Science.gov (United States)

    2011-06-03

    ... information processing, computer software services, and business solutions, to the financial services... Employment and Training Administration DST Systems, Inc., Including On-Site Leased Workers From Comsys Information Technology Services, Megaforce, and Kelly Services Kansas City, MO; DST Technologies, a...

  5. Greenhouse gas emissions from on-site wastewater treatment systems

    Science.gov (United States)

    Somlai-Haase, Celia; Knappe, Jan; Gill, Laurence

    2016-04-01

    Nearly one third of the Irish population relies on decentralized domestic wastewater treatment systems which discharge effluent into the soil via a percolation area (drain field). In such systems, wastewater from single households is initially treated on-site by a septic tank, and in some cases an additional packaged secondary treatment unit, in which the influent organic matter is converted into carbon dioxide (CO2) and methane (CH4) by microbially mediated processes. The effluent from the tanks is released into the soil for further treatment in the unsaturated zone, where additional CO2 and CH4 are emitted to the atmosphere, as well as nitrous oxide (N2O) from the partial denitrification of nitrate. Hence, considering the large number of on-site systems in Ireland and internationally, these are potentially significant sources of greenhouse gas (GHG) emissions, yet they have received almost no direct field measurement. Here we present the first attempt to quantify and characterize the production and emission of GHGs from a septic tank system serving a single house in County Westmeath, Ireland. We sampled the water for dissolved CO2, CH4 and N2O and measured the gas flux from the water surface in the septic tank. We also carried out long-term flux measurements of CO2 from the drain field, using an automated soil gas flux system (LI-8100A, Li-Cor®), covering a whole year semi-continuously. This enabled the CO2 emissions from the unsaturated zone to be correlated against different meteorological parameters over an annual cycle. In addition, we integrated an ultraportable GHG analyser (UGGA, Los Gatos Research Inc.) into the automated soil gas flux system to measure the CH4 flux. Further, manual sampling has provided a better understanding of N2O emissions from the septic tank system.

  6. Computer systems

    Science.gov (United States)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  7. On-Site Inspection RadioIsotopic Spectroscopy (Osiris) System Development

    Energy Technology Data Exchange (ETDEWEB)

    Caffrey, Gus J. [Idaho National Laboratory, Idaho Falls, ID (United States); Egger, Ann E. [Idaho National Laboratory, Idaho Falls, ID (United States); Krebs, Kenneth M. [Idaho National Laboratory, Idaho Falls, ID (United States); Milbrath, B. D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jordan, D. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Warren, G. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wilmer, N. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-01

    We have designed and tested hardware and software for the acquisition and analysis of high-resolution gamma-ray spectra during on-site inspections under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The On-Site Inspection RadioIsotopic Spectroscopy—Osiris—software filters the spectral data to display only radioisotopic information relevant to CTBT on-site inspections, e.g., 132I. A set of over 100 fission-product spectra was employed for Osiris testing. These spectra were measured where possible, or generated by modeling. The synthetic test spectral compositions include non-nuclear-explosion scenarios, e.g., a severe nuclear reactor accident, and nuclear-explosion scenarios such as a vented underground nuclear test. Comparing its computer-based analyses to expert visual analyses of the test spectra, Osiris correctly identifies CTBT-relevant fission-product isotopes at the 95% level or better. The Osiris gamma-ray spectrometer is a mechanically-cooled, battery-powered ORTEC Transpec-100, chosen to avoid the need for liquid nitrogen during on-site inspections. The spectrometer was used successfully during the 2014 CTBT Integrated Field Exercise in Jordan. The spectrometer is controlled, and the spectral data analyzed, by a Panasonic Toughbook notebook computer. To date, software development has been the main focus of the Osiris project. In FY2016-17, we plan to modify the Osiris hardware, integrate the Osiris software and hardware, and conduct rigorous field tests to ensure that the Osiris system will function correctly during CTBT on-site inspections. The planned development will raise Osiris to technology readiness level TRL-8, transfer the Osiris technology to a commercial manufacturer, and demonstrate Osiris to potential CTBT on-site inspectors.

  8. On-Site Renewable Energy and Green Buildings: A System-Level Analysis.

    Science.gov (United States)

    Al-Ghamdi, Sami G; Bilec, Melissa M

    2016-05-03

    Adopting a green building rating system (GBRS) that strongly considers use of renewable energy can have important environmental consequences, particularly in developing countries. In this paper, we studied on-site renewable energy and GBRSs at the system level to explore potential benefits and challenges. While we have focused on GBRSs, the findings can offer additional insight for renewable incentives across sectors. An energy model was built for 25 sites to compute the potential solar and wind power production available on-site, given the building footprint and regional climate. A life-cycle approach and cost analysis were then completed to analyze the environmental and economic impacts. Environmental impacts of renewable energy varied dramatically between sites; in some cases, the environmental benefits were limited despite the significant economic burden of those on-site renewable systems, and vice versa. Our recommendation for GBRSs, and broader policies and regulations, is to require buildings with higher environmental impacts to achieve higher levels of energy performance and on-site renewable energy utilization, instead of fixed percentages.

  9. 75 FR 41522 - Novell, Inc., Including On-Site Leased Workers From Affiliated Computer Services, Inc., (ACS...

    Science.gov (United States)

    2010-07-16

    ... Computer Services, Inc., (ACS), Provo, UT; Amended Certification Regarding Eligibility To Apply for Worker... reports that workers leased from Affiliated Computer Services, Inc., (ACS) were employed on-site at the...., (ACS) working on-site at the Provo, Utah location of Novell, Inc. The amended notice applicable to...

  10. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten
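
    Reverse Polish Notation, one of the topics the book covers, evaluates expressions with a simple stack and no parentheses. A minimal evaluator (tokens assumed space-separated, binary operators only) can be sketched as:

```python
# Minimal Reverse Polish Notation evaluator: operands are pushed on a
# stack; each operator pops its two arguments and pushes the result.

def eval_rpn(expr):
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    stack = []
    for tok in expr.split():
        if tok in ops:
            b = stack.pop()   # right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 written in RPN:
print(eval_rpn("3 4 + 2 *"))  # → 14.0
```

    The absence of precedence rules is what made RPN attractive for early compilers and stack machines.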

  11. The interactive on-site inspection system: An information management system to support arms control inspections

    Energy Technology Data Exchange (ETDEWEB)

    DeLand, S.M.; Widney, T.W.; Horak, K.E.; Caudell, R.B.; Grose, E.M.

    1996-12-01

    The increasing use of on-site inspection (OSI) to meet the nation's obligations with recently signed treaties requires the nation to manage a variety of inspection requirements. This document describes a prototype automated system to assist in the preparation and management of these inspections.

  12. Study of component technologies for fuel cell on-site integrated energy systems

    Science.gov (United States)

    Lee, W. D.; Mathias, S.

    1980-01-01

    Heating, ventilation and air conditioning equipment are integrated with three types of fuel cells. System designs and computer simulations are developed to utilize the thermal energy discharged by the fuel cell in the most cost-effective manner. The fuel cell provides all of the electric needs, and a loss-of-load probability analysis is used to ensure adequate power plant reliability. Equipment cost is estimated for each of the systems analyzed. A levelized annual cost, reflecting owning and operating costs including the cost of money, was used to select the most promising integrated system configurations. Cash flows are presented for the 16 most promising systems. Several systems for the 96-unit apartment complex (a retail store was also studied) were cost-competitive with both gas- and electric-based conventional systems. Thermal storage is shown to be beneficial, and the optimum absorption chiller sizing (waste heat recovery) in combination with electric chillers is developed. Battery storage was analyzed since the system is not connected to the electric grid. Advanced absorption chillers were analyzed as well. Recommendations covering financing, technical development, and policy issues are given to accelerate the commercialization of the fuel cell for on-site power generation in buildings.
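
    The loss-of-load probability analysis the abstract mentions can be sketched with a simple binomial model: with n identical power modules, each independently unavailable with some forced-outage rate, LOLP is the probability that fewer modules are available than are needed to carry the load. All numbers below are illustrative, not taken from the report.

```python
from math import comb

# Illustrative loss-of-load probability (LOLP) under a binomial outage
# model. The report's actual reliability model is not specified; this is
# the standard textbook formulation.

def lolp(n_modules, outage_rate, modules_needed):
    """P(available modules < modules_needed) when each of n_modules is
    independently available with probability 1 - outage_rate."""
    p_avail = 1.0 - outage_rate
    return sum(comb(n_modules, m) * p_avail**m * outage_rate**(n_modules - m)
               for m in range(modules_needed))

# Four modules, 5% forced-outage rate, three needed to meet peak load:
print(round(lolp(4, 0.05, 3), 5))  # → 0.01402
```

    A designer would compare this figure against a reliability target (e.g. a maximum acceptable LOLP) when sizing the plant and any battery backup.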

  13. Integrated airlift bioreactor system for on-site small wastewater treatment.

    Science.gov (United States)

    Chen, S L; Li, F; Qiao, Y; Yang, H G; Ding, F X

    2005-01-01

    An integrated airlift bioreactor system was developed, which mainly consists of a multi-stage loop reactor and a gas-liquid-solid separation baffle and possesses dual functions as bioreactor and settler. This integrated system was used for on-site treatment of industrial glycol wastewater in lab-scale. The strategy of gradually increasing practical wastewater concentration while maintaining the co-substrate glucose wastewater concentration helped to accelerate the microbial acclimation process. Investigation of microbial acclimation, operation parameters evaluation and microbial observation has demonstrated the economical and technical feasibility of this integrated airlift bioreactor system for on-site small industrial wastewater treatment.

  14. Portable, fully autonomous, ion chromatography system for on-site analyses.

    Science.gov (United States)

    Elkin, Kyle R

    2014-07-25

    The basic operating principles of a portable, fully autonomous, ion chromatography system are described. The system affords the user the ability to collect and analyze samples continuously for 27 days, or about 1930 injections, before needing any user intervention. Within the 13 kg system is a fully computer-controlled autosampling, chromatography and data acquisition system. An eluent reflux device (ERD), which integrates eluent suppression and generation in a single multi-chambered device, is used to minimize eluent consumption. During operation, about 1 μL of water per minute is lost to waste while operating standard-bore chromatography at 0.5 mL min(-1), due to eluent refluxing. Over the course of 27 days, about 100 mL of rinse water is consumed, effectively eliminating waste production. Data showing the reproducibility of the device (below 1% relative standard deviation over 14 days) are also presented. Chromatographic analysis of common anions (Cl(-), NO3(-), SO4(2-), PO4(3-)) is accomplished in under 15 min using a low-backpressure guard column with ∼25 mM KOH isocratic elution. For detection, a small capacitively-coupled contactless conductivity detector (C4D) is employed, able to report analytes in the sub- to low-micromolar range. Preconcentration of the injected samples gives a 50-fold decrease in detection limits, primarily utilized for in-situ detection of phosphate (LOQ 10 μg L(-1)). Field analyses are shown for multiple on-site analyses of stream water in different weather conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
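
    The figures quoted in the abstract can be sanity-checked with simple arithmetic: 1930 injections over 27 days implies roughly a 20-minute cycle per injection (consistent with sub-15-min separations plus autosampling overhead), and a 1 μL/min reflux loss integrates to under 40 mL over the full run.

```python
# Back-of-envelope check of the duty cycle and water loss reported for the
# portable IC system (27 days, ~1930 injections, ~1 uL/min reflux loss).

days = 27
injections = 1930
minutes = days * 24 * 60            # total run time in minutes

interval_min = minutes / injections  # average minutes per injection cycle
reflux_loss_ml = minutes * 1e-3      # 1 uL/min, expressed in mL

print(round(interval_min, 1))    # → 20.1
print(round(reflux_loss_ml, 1))  # → 38.9
```

    These derived values are consistent with the abstract's claim that the ~100 mL of rinse water dominates total consumption.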

  15. Computer system identification

    OpenAIRE

    Lesjak, Borut

    2008-01-01

    The concept of computer system identity in computer science bears just as much importance as does the identity of an individual in a human society. Nevertheless, the identity of a computer system is incomparably harder to determine, because there is no standard system of identification we could use and, moreover, a computer system during its life-time is quite indefinite, since all of its regular and necessary hardware and software upgrades soon make it almost unrecognizable: after a number o...

  16. Pi-EEWS: a low cost prototype for on-site earthquake early warning system

    Science.gov (United States)

    Pazos, Antonio; Vera, Angel; Morgado, Arturo; Rioja, Carlos; Davila, Jose Martin; Cabieces, Roberto

    2017-04-01

    The Royal Spanish Navy Observatory (ROA), with the participation of the University of Cadiz (UCA), has developed the ALERTES-SC3 EEWS (regional approach) based on the SeisComP3 software package. This development was done in the frame of the Spanish ALERT-ES (2011-2013) and ALERTES-RIM (2014-2016) projects, and nowadays it is being tested in real time for southern Iberia. Additionally, the ALERTES-SC3 system integrates on-site EEWS software, developed by ROA-UCA, which is running for testing in real time at some broadband seismic stations of the WM network. Regional EEWS are not able to provide alerts in the area closest to the epicentre (the blind zone), so a dense on-site EEWS is necessary. As mentioned, ALERTES-SC3 includes the on-site software running on several WM stations, but a denser set of on-site stations is necessary to cover the blind zones. In order to densify these areas inside the "blind zones", a low-cost on-site prototype, "Pi-EEWS", based on a Raspberry Pi card and low-cost accelerometers, has been developed. In this work the main design ideas, the components, and the system's capabilities are shown.

  17. Technology development for phosphoric acid fuel cell powerplant (phase 2). [on site integrated energy systems

    Science.gov (United States)

    Christner, L.

    1980-01-01

    Progress is reported in the development of materials, cell components, and reformers for on-site integrated energy systems. Internal resistance and contact resistance were improved. Dissolved gases (O2, N2, and CO2) were found to have no effect on the electrochemical corrosion of phenolic composites. Stack performance was increased by 100 mV over the average 1979 level.

  18. Heat recovery subsystem and overall system integration of fuel cell on-site integrated energy systems

    Science.gov (United States)

    Mougin, L. J.

    1983-01-01

    The best HVAC (heating, ventilating and air conditioning) subsystem to interface with the Engelhard fuel cell system for application in commercial buildings was determined. To accomplish this objective, the effects of several system and site specific parameters on the economic feasibility of fuel cell/HVAC systems were investigated. An energy flow diagram of a fuel cell/HVAC system is shown. The fuel cell system provides electricity for an electric water chiller and for domestic electric needs. Supplemental electricity is purchased from the utility if needed. An excess of electricity generated by the fuel cell system can be sold to the utility. The fuel cell system also provides thermal energy which can be used for absorption cooling, space heating and domestic hot water. Thermal storage can be incorporated into the system. Thermal energy is also provided by an auxiliary boiler if needed to supplement the fuel cell system output. Fuel cell/HVAC systems were analyzed with the TRACE computer program.

  19. Reliability of on-site greywater treatment systems in Mediterranean and arid environments - a case study.

    Science.gov (United States)

    Alfiya, Y; Gross, A; Sklarz, M; Friedler, E

    2013-01-01

    On-site greywater (GW) treatment and reuse is gaining popularity. However, a main point of concern is that inadequate treatment of such water may lead to negative environmental and health effects. Maintenance of single-family home GW systems is usually performed by home owners with limited professional support. Therefore, unless GW systems are reliable, environmental and public health might be compromised. This study is aimed at investigating the reliability of on-site recirculated vertical flow constructed wetlands (RVFCW) in 20 single-family homes. In order to ensure reliability, the failure-tree approach was adopted during the design and construction of the systems. The performance of the systems was monitored for 1.5 years, by evaluating treated GW flow and quality, and by recording all malfunctions and maintenance work. Only 39 failures occurred during this period, of which four caused irrigation with impaired-quality GW, while the rest led to no irrigation. The mean time between failures (MTBF) was 305 days; two of the 20 systems suffered seven malfunctions each, while nine systems did not fail at all. Thus, it can be postulated that if on-site GW treatment systems are designed with the right controls, and if scheduled (basic and relatively infrequent) maintenance is performed, GW reuse can be safe for the environment and human health.
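
    The MTBF figure can be roughly sanity-checked by pooling operating time across systems: 20 systems monitored for 1.5 years with 39 failures gives about 281 system-days per failure, in the same range as the reported 305 days (the difference presumably reflects unequal monitoring periods per system).

```python
# Pooled mean-time-between-failures estimate across a fleet of identical
# systems, using the study's headline numbers as inputs.

def mtbf_days(n_systems, years_monitored, n_failures):
    """Mean time between failures, pooling operating time across systems."""
    total_days = n_systems * years_monitored * 365
    return total_days / n_failures

print(round(mtbf_days(20, 1.5, 39)))  # → 281
```

    This pooled estimate assumes all systems ran for the full monitoring period, which the study's per-system records would refine.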

  20. Tensor computations in computer algebra systems

    CERN Document Server

    Korolkova, A V; Sevastyanov, L A

    2014-01-01

    This paper considers three types of tensor computations. On their basis, we attempt to formulate criteria that must be satisfied by a computer algebra system dealing with tensors. We briefly overview the current state of tensor computations in different computer algebra systems. The tensor computations are illustrated with appropriate examples implemented in specific systems: Cadabra and Maxima.
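
    As a minimal illustration of one of the basic operations any tensor-capable computer algebra system must support, index contraction can be sketched in plain Python. The paper's own examples use Cadabra and Maxima; this standalone snippet is only an analogy.

```python
# Contracting the two indices of a rank-2 tensor (the trace T^i_i),
# with the tensor stored as nested lists. A CAS performs the same
# operation symbolically rather than numerically.

def contract(tensor):
    """Contract the index pair of a square rank-2 tensor: sum_i T[i][i]."""
    return sum(tensor[i][i] for i in range(len(tensor)))

T = [[1, 2],
     [3, 4]]
print(contract(T))  # → 5
```

    Criteria like those the paper formulates concern how a CAS represents such index operations symbolically (dummy indices, symmetries), not just the numeric evaluation shown here.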

  1. Distributed computer control systems

    Energy Technology Data Exchange (ETDEWEB)

    Suski, G.J.

    1986-01-01

    This book focuses on recent advances in the theory, applications and techniques for distributed computer control systems. Contents (partial): Real-time distributed computer control in a flexible manufacturing system. Semantics and implementation problems of channels in a DCCS specification. Broadcast protocols in distributed computer control systems. Design considerations of distributed control architecture for a thermal power plant. The conic toolset for building distributed systems. Network management issues in distributed control systems. Interprocessor communication system architecture in a distributed control system environment. Uni-level homogenous distributed computer control system and optimal system design. A-nets for DCCS design. A methodology for the specification and design of fault tolerant real time systems. An integrated computer control system - architecture design, engineering methodology and practical experience.

  2. Assessment of On-site sanitation system on local groundwater regime in an alluvial aquifer

    Science.gov (United States)

    Quamar, Rafat; Jangam, C.; Veligeti, J.; Chintalapudi, P.; Janipella, R.

    2017-06-01

    The present study is an attempt to assess the impact of on-site sanitation systems on the groundwater sources in their vicinity. The study was undertaken in the Agra city of the Yamuna sub-basin. Three sampling sites, namely Pandav Nagar, Ayodhya Kunj and Laxmi Nagar, were selected. The groundwater samples were analyzed for major cations, anions and faecal coliform. Critical parameters, namely chloride, nitrate and faecal coliform, were considered to assess the impact of the on-site sanitation systems. The analytical results showed that, except for chloride, most of the samples exceeded the Bureau of Indian Standards (BIS) limits for drinking water for the other analyzed parameters, i.e., nitrate and faecal coliform, in the first two sites. In Laxmi Nagar, except for faecal coliform, all the samples were below the BIS limits. In all three sites, faecal coliform was found in the majority of the samples. Comparison with earlier studies indicates that the contamination of groundwater in an alluvial setting is less than in hard-rock settings where on-site sanitation systems have been implemented.

  3. Analysis of a fuel cell on-site integrated energy system for a residential complex

    Science.gov (United States)

    Simons, S. N.; Maag, W. L.

    1979-01-01

    The energy use and costs of an on-site integrated energy system (OS/IES), which provides electric power from an on-site power plant and recovers heat that would normally be rejected to the environment, are compared to those of a conventional system purchasing electricity from a utility and of a phosphoric acid fuel cell powered system. The analysis showed that for a 500-unit apartment complex a fuel cell OS/IES would be about 10% more energy-conservative in terms of total coal consumption than a diesel OS/IES or a conventional system. The fuel cell OS/IES capital costs could be 30 to 55% greater than the diesel OS/IES capital costs for the same life-cycle costs. The life-cycle cost of a fuel cell OS/IES would be lower than that of a conventional system as long as the cost of electricity is greater than $0.05 to $0.065/kWh. An analysis of several parametric combinations of fuel cell power plants and state-of-the-art energy recovery systems, and annual fuel requirement calculations for four locations, were made. It was shown that OS/IES component choices are a major factor in fuel consumption, with the least efficient system using 25% more fuel than the most efficient. Central air conditioning and heat pumps result in minimum fuel consumption while individual air conditioning units increase it, and in general the fuel cell with the highest electrical efficiency has the lowest fuel consumption.

  4. ALMA correlator computer systems

    Science.gov (United States)

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus

    2004-09-01

    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack-mounted PC controls and monitors the correlator, and a cluster of 17 PCs process the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.
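
    The hard deadlines quoted above follow from simple arithmetic: at 1 GB/s aggregate output and 16 ms periods, each period delivers 16 MB that must be consumed before the next arrives. The even split across the sixteen data ports below is an assumption for illustration, not a statement of the actual ALMA data layout.

```python
# Back-of-envelope sizing of the per-period workload implied by the
# abstract's figures (1 GB/s output, 16 ms periods, 16 data ports).

output_rate = 1e9       # bytes per second, aggregate correlator output
period = 16e-3          # seconds per timing period
n_ports = 16            # dedicated high-speed data ports

bytes_per_period = output_rate * period
per_port = bytes_per_period / n_ports   # assumed even split (illustrative)

print(int(bytes_per_period))  # → 16000000  (16 MB per 16 ms period)
print(int(per_port))          # → 1000000   (1 MB per port per period)
```

    Missing a single 16 ms deadline means backlogged data, which is why the cluster's throughput, not just its average speed, drives the design.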

  5. Fault tolerant computing systems

    CERN Document Server

    Randell, B

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (15 refs).

  6. Computer controlled antenna system

    Science.gov (United States)

    Raumann, N. A.

    1972-01-01

    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.

  7. 78 FR 48467 - Delphi Automotive Systems, LLC, Products and Service Solutions Division, Including On-Site Leased...

    Science.gov (United States)

    2013-08-08

    ... Employment and Training Administration Delphi Automotive Systems, LLC, Products and Service Solutions... workers of Delphi Automotive Systems, LLC, Product and Service Solutions Division, Original Equipment... of ] Delphi Automotive Systems, LLC, Product and Service Solutions Division, including on-site...

  8. 76 FR 65212 - Caterpillar, Inc., Large Power Systems Division, Including On-Site Leased Workers From Gray...

    Science.gov (United States)

    2011-10-20

    ... Employment and Training Administration Caterpillar, Inc., Large Power Systems Division, Including On- Site... Adjustment Assistance on November 2, 2009, applicable to Caterpillar, Inc., Large Power Systems Division... Caterpillar, Inc., Large Power Systems Division. The Department has determined that these workers...

  9. ALERTES-SC3 Early Warning System prototype for South Iberian Peninsula: on-site approach.

    Science.gov (United States)

    Pazos, Antonio; Lopez de Mesa, Mireya; Gallego Carrasco, Javier; Martín Davila, José; Rioja del Rio, Carlos; Morgado, Arturo; Vera, Angel; Ciberia, Angel; Cabieces, Roberto; Strollo, Angelo; Hanka, Winfried; Carranza, Marta

    2016-04-01

    In recent years several Earthquake Early Warning Systems (EEWS) have been developed for different parts of the world. The area between SW Cape St. Vincent and the Strait of Gibraltar is one of the most seismically active zones in the Ibero-Maghrebian region, with predominantly moderate and shallow seismicity, but large events with associated tsunamis are also well documented in the area, like the 1755 Lisbon earthquake. In the frame of the ALERT-ES (2011-2013) and ALERTES-RIM (2014-2016) Spanish projects, the ALERTES-SC3 EEWS (regional approach) prototype has been developed at the Royal Spanish Navy Observatory (ROA) and is being tested in near real time for southern Iberia. This prototype, based on the SeisComP3 software package, relies largely on algorithms derived from the analysis of the first seconds of the P-wave records. Several parameters are calculated, mainly the characteristic period (τc) and the displacement peak (Pd), but also the velocity peak (Pv) and the maximum period (τPmax), among others. In order to warn the areas closest to the hypocentre, places located inside the "blind zone", an on-site EEWS has also been developed by ROA and integrated into the ALERTES-SC3 prototype. In the on-site approach, a warning level is declared from a single station as a function of the estimated characteristic period (τc) and the displacement peak (Pd), although the earthquake location, and therefore the available lead time, remains unknown. This on-site EEWS is being tested at several stations of the Western Mediterranean network (WM), such as ARNO (Arenosillo, Huelva, Spain) and CHAS (Chafarinas Islands, North Africa, Spain). An on-site low-cost station based on low-cost accelerometers is also being developed. In this work the current state of the on-site EEWS, its integration into the ALERTES-SC3 system, and the low-cost seismic stations are shown.
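
    The single-station decision rule the abstract describes can be sketched as a two-threshold classifier: τc serves as a proxy for event size and Pd as a proxy for expected shaking at the site. The threshold values below are illustrative placeholders, not the values used by ALERTES-SC3.

```python
# Hedged sketch of an on-site warning level from tau_c and Pd measured in
# the first seconds of the P wave at one station. Thresholds are
# illustrative only; the project's calibrated values are not given here.

def warning_level(tau_c, pd, tau_thr=0.6, pd_thr=0.2):
    big = tau_c >= tau_thr   # large event suspected (period-based proxy)
    near = pd >= pd_thr      # strong shaking expected locally
    if big and near:
        return 3   # damaging shaking likely at this site
    if big or near:
        return 2   # potentially hazardous
    return 1       # weak event or far field

print(warning_level(0.9, 0.35))  # → 3
print(warning_level(0.3, 0.05))  # → 1
```

    Because the station never locates the event, the rule trades lead-time information for speed, which is exactly what makes it usable inside the regional system's blind zone.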

  10. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

Full Text Available Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance, and other vital infrastructure all depend on computer systems operating at local, national, and global scales. A particular problem is that, owing to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  11. A Review of On-Site Wastewater Treatment Systems in Western Australia from 1997 to 2011

    Directory of Open Access Journals (Sweden)

    Maria Gunady

    2015-01-01

Full Text Available On-site wastewater treatment systems (OWTS) are widely used in Western Australia (WA) to treat and dispose of household wastewater in areas where centralized sewerage systems are unavailable. Septic tanks, aerobic treatment units (ATUs), and composting toilets with greywater systems are among the most well established and commonly used OWTS. However, there are concerns that some OWTS installed in WA are either performing below expected standards or failing. Poorly performing OWTS are often attributed to inadequate installation, inadequate maintenance, poor public awareness, insufficient local authority resources, ongoing wastewater management issues, or inadequate adoption of standards, procedures, and guidelines. This paper reviews the installations and failures of OWTS in WA. Recommendations to the Department of Health Western Australia (DOHWA) and Local Government (LG) in regard to management strategies and institutional arrangements of OWTS are also highlighted.

  12. A review of on-site wastewater treatment systems in Western Australia from 1997 to 2011.

    Science.gov (United States)

    Gunady, Maria; Shishkina, Natalia; Tan, Henry; Rodriguez, Clemencia

    2015-01-01

On-site wastewater treatment systems (OWTS) are widely used in Western Australia (WA) to treat and dispose of household wastewater in areas where centralized sewerage systems are unavailable. Septic tanks, aerobic treatment units (ATUs), and composting toilets with greywater systems are among the most well established and commonly used OWTS. However, there are concerns that some OWTS installed in WA are either performing below expected standards or failing. Poorly performing OWTS are often attributed to inadequate installation, inadequate maintenance, poor public awareness, insufficient local authority resources, ongoing wastewater management issues, or inadequate adoption of standards, procedures, and guidelines. This paper reviews the installations and failures of OWTS in WA. Recommendations to the Department of Health Western Australia (DOHWA) and Local Government (LG) in regard to management strategies and institutional arrangements of OWTS are also highlighted.

  13. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

This book presents a paradigm for designing new-generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real-time, military, banking, and wearable health-care systems. § Describes design solutions for a new computer system: an evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models. § Pursues simplicity, reliability, and scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  14. 75 FR 76038 - Zach System Corporation a Subdivision of Zambon Company, SPA Including On-Site Leased Workers of...

    Science.gov (United States)

    2010-12-07

    ... On-Site Leased Workers of Turner Industries and Go Johnson, La Porte, TX; Amended Certification... Corporation, a subdivision of Zach System SPA, La Porte, Texas, including on-site leased workers from Turner Industries and Go Johnson, La Porte, Texas. The Department's notice of determination was published in...

  15. An on-site alert level early warning system for Italy

    Science.gov (United States)

    Caruso, Alessandro; Colombelli, Simona; Elia, Luca; Zollo, Aldo

    2017-04-01

    An Earthquake Early Warning (EEW) system is a real-time seismic monitoring infrastructure that has the capability to provide warnings to target cities before the arrival of the strongest shaking waves. In order to provide a rapid alert when targets are very close to the epicenter of the events, we developed an on-site EEW approach and evaluated its performance at the nation-wide scale of Italy. We use a single-station, P-wave based method that measures in real-time two ground motion quantities along the early P-wave signal: the initial Peak Displacement (Pd) and the average period parameter (τc). In output, the system provides the predicted ground shaking intensity at the monitored site, the alert level (as defined by Zollo et al., 2010) and a qualitative classification of both earthquake magnitude and source-to-receiver distance. We applied the on-site EEW methodology to a dataset of Italian earthquakes, recorded by the Italian accelerometric network, with magnitude ranging from 3.8 to 6, and evaluated the performance of the system in terms of correct warning and lead-times (i.e., time available for security actions at the target). The results of this retrospective analysis show that, for the large majority of the analyzed cases, the method is able to deliver a correct warning shortly after the P-wave detection, with more than 80% of successful intensity predictions at the target site. The lead-times increase with distance, with a value of 2-6 seconds at 30 km, 8-10 seconds at 50 km and 15-18 seconds at 100 km.
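The two single-station measurements above can be sketched as follows. This is a minimal illustration of the standard Pd/τc definitions (e.g. Kanamori, 2005), not the actual code of the Italian system; the 3 s window and the synthetic test record are assumptions.

```python
import numpy as np

def pd_tauc(displacement, dt, window=3.0):
    """Peak displacement (Pd) and average period (tau_c) over the first
    `window` seconds of a P-wave displacement record:
        tau_c = 2*pi*sqrt( integral(u^2 dt) / integral(u_dot^2 dt) )
    """
    n = int(window / dt)
    u = np.asarray(displacement[:n], dtype=float)
    v = np.gradient(u, dt)                 # velocity by finite differences
    pd = float(np.max(np.abs(u)))          # initial peak displacement
    tau_c = 2.0 * np.pi * np.sqrt(np.sum(u**2) / np.sum(v**2))
    return pd, tau_c

# Sanity check on a synthetic record: a sinusoid with a 1 s period and
# 2 mm amplitude should give tau_c close to 1 s and Pd close to 0.002 m.
dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = 0.002 * np.sin(2.0 * np.pi * t)
pd, tau_c = pd_tauc(u, dt)
```

In a real system the alert level would then be declared by comparing Pd and τc against pre-defined thresholds (as in Zollo et al., 2010).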

  16. On-site Rapid Diagnosis of Intracranial Hematoma using Portable Multi-slice Microwave Imaging System

    Science.gov (United States)

    Mobashsher, Ahmed Toaha; Abbosh, A. M.

    2016-11-01

Rapid, on-the-spot diagnostic and monitoring systems are vital for the survival of patients with intracranial hematoma, as their conditions drastically deteriorate with time. To address the limited accessibility, high costs and static structure of currently used MRI and CT scanners, a portable non-invasive multi-slice microwave imaging system is presented for accurate 3D localization of hematoma inside the human head. This diagnostic system provides fast data acquisition and imaging compared to the existing systems by means of a compact array of low-profile, unidirectional antennas with wideband operation. The 3D-printed, low-cost and portable system can be installed in an ambulance for rapid on-site diagnosis by paramedics. In this paper, the multi-slice head imaging system’s operating principle is numerically analysed and experimentally validated on realistic head phantoms. Quantitative analyses demonstrate that the multi-slice head imaging system is able to generate better-quality reconstructed images, providing 70% higher average signal-to-clutter ratio, 25% enhanced maximum signal-to-clutter ratio and around 60% better hematoma target localization than previous head imaging systems. Moreover, numerical and experimental results demonstrate that previously reported 2D imaging systems are vulnerable to localization error, which is overcome in the presented multi-slice 3D imaging system. The non-ionizing system, which uses safe levels of very low microwave power, has also been tested on human subjects. Results from realistic phantoms and human subjects demonstrate the feasibility of the system in future preclinical trials.
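The signal-to-clutter figures quoted above can be computed from a reconstructed image and a known target region roughly as follows; this is a generic sketch (the exact definitions the authors use may differ), with a 10·log10 intensity ratio assumed.

```python
import numpy as np

def scr_metrics_db(image, target_mask):
    """Average and maximum signal-to-clutter ratio (dB) of a reconstructed
    image: peak target intensity over the mean and over the peak of the
    clutter (all pixels outside the target region)."""
    target_peak = image[target_mask].max()
    clutter = image[~target_mask]
    avg_scr = 10.0 * np.log10(target_peak / clutter.mean())
    max_scr = 10.0 * np.log10(target_peak / clutter.max())
    return avg_scr, max_scr

# Toy image: uniform clutter of intensity 1 with one bright target pixel.
img = np.ones((64, 64))
img[32, 32] = 100.0
mask = np.zeros((64, 64), dtype=bool)
mask[32, 32] = True
avg_db, max_db = scr_metrics_db(img, mask)   # both 20 dB in this toy case
```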

  17. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  18. A Portable and Autonomous Mass Spectrometric System for On-Site Environmental Gas Analysis.

    Science.gov (United States)

    Brennwald, Matthias S; Schmidt, Mark; Oser, Julian; Kipfer, Rolf

    2016-12-20

We developed a portable mass spectrometric system ("miniRuedi") for quantification of the partial pressures of He, Ne (in dry gas), Ar, Kr, N2, O2, CO2, and CH4 in gaseous and aqueous matrices in environmental systems with an analytical uncertainty of 1-3%. The miniRuedi does not require any purification or other preparation of the sampled gases and therefore allows maintenance-free and autonomous operation. The apparatus is most suitable for on-site gas analysis during field work and at remote locations due to its small size (60 cm × 40 cm × 14 cm), low weight (13 kg), and low power consumption (50 W). The gases are continuously sampled and transferred through a capillary pressure-reduction system into a vacuum chamber, where they are analyzed using a quadrupole mass spectrometer with a time resolution of ≲1 min. The low gas consumption rate (<0.1 mL/min) minimizes interference with the natural mass balance of gases in environmental systems, and allows the unbiased quantification of dissolved-gas concentrations in water by gas/water equilibration using membrane contactors (gas-equilibrium membrane-inlet mass spectrometry, GE-MIMS). The performance of the miniRuedi is demonstrated in laboratory and field tests, and its utility is illustrated in field applications related to soil-gas formation, lake/atmosphere gas exchange, and seafloor gas emanations.
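The GE-MIMS step converts measured headspace partial pressures into dissolved-gas concentrations via Henry's law. A minimal sketch, using rounded 25 °C solubility constants assumed for illustration (not the instrument's calibration values):

```python
# Illustrative Henry's law solubilities at 25 degC in mol/(L*atm);
# rounded literature values, assumed here for demonstration only.
K_H = {
    "He": 3.8e-4, "Ne": 4.5e-4, "Ar": 1.4e-3, "Kr": 2.5e-3,
    "N2": 6.4e-4, "O2": 1.3e-3, "CO2": 3.4e-2, "CH4": 1.4e-3,
}

def dissolved_concentration(gas, partial_pressure_atm):
    """After gas/water equilibration across the membrane contactor,
    the dissolved concentration follows Henry's law: C = K_H * p_i."""
    return K_H[gas] * partial_pressure_atm

# Air-equilibrated water, p(N2) ~ 0.78 atm:
c_n2 = dissolved_concentration("N2", 0.78)   # ~5.0e-4 mol/L with these constants
```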

  19. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

The report describes the operation and troubleshooting of the main computers and KAERINet. The results of the project are as follows: 1. Operation and troubleshooting of the main computer systems (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. Operation and troubleshooting of KAERINet (PC-to-host connection, host-to-host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. Development of applications: the Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  20. Residential on site solar heating systems: a project evaluation using the capital asset pricing model

    Energy Technology Data Exchange (ETDEWEB)

    Schutz, S.R.

    1978-12-01

    An energy source ready for immediate use on a commercial scale is solar energy in the form of On Site Solar Heating (OSSH) systems. These systems collect solar energy with rooftop panels, store excess energy in water storage tanks and can, in certain circumstances, provide 100% of the space heating and hot water required by the occupants of the residential or commercial structure on which the system is located. Such systems would take advantage of a free and inexhaustible energy source--sunlight. The principal drawback of such systems is the high initial capital cost. The solution would normally be a carefully worked out corporate financing plan. However, at the moment it is individual homeowners and not corporations who are attempting to finance these systems. As a result, the terms of finance are excessively stringent and constitute the main obstacle to the large scale market penetration of OSSH. This study analyzes the feasibility of OSSH as a private utility investment. Such systems would be installed and owned by private utilities and would displace other investment projects, principally electric generating plants. The return on OSSH is calculated on the basis of the cost to the consumer of the equivalent amount of electrical energy that is displaced by the OSSH system. The hurdle rate for investment in OSSH is calculated using the Sharpe--Lintner Capital Asset Pricing Model. The results of this study indicate that OSSH is a low risk investment having an appropriate hurdle rate of 7.9%. At this rate, OSSH investment appears marginally acceptable in northern California and unambiguously acceptable in southern California. The results also suggest that utility investment in OSSH should lead to a higher degree of financial leverage for utility companies without a concurrent deterioration in the risk class of utility equity.
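The Sharpe-Lintner CAPM calculation behind the hurdle rate can be sketched as follows. The risk-free rate, market return, and beta below are illustrative assumptions chosen to reproduce a figure near the study's 7.9%, not the study's actual inputs.

```python
def capm_hurdle_rate(risk_free, market_return, beta):
    """Sharpe-Lintner CAPM required return:
        E[r] = r_f + beta * (E[r_m] - r_f)
    """
    return risk_free + beta * (market_return - risk_free)

# Illustrative low-beta, utility-type investment (assumed inputs):
hurdle = capm_hurdle_rate(risk_free=0.05, market_return=0.11, beta=0.48)
# ~7.9% with these inputs; a project is acceptable when its expected
# return exceeds this hurdle rate.
```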

  1. The On-Site Status of the Kstar Helium Refrigeration System

    Science.gov (United States)

    Chang, H.-S.; Park, D. S.; Joo, J. J.; Moon, K. M.; Cho, K. W.; Kim, Y. S.; Bak, J. S.; Kim, H. M.; Cho, M. C.; Kwon, I. K.; Fauve, E.; Bernhardt, J.-M.; Dauguet, P.; Beauvisage, J.; Andrieu, F.; Yang, S.-H.; Baguer, G. M. Gistau

    2008-03-01

Since the first design of the KSTAR helium refrigeration system (HRS) in year 2000, many modifications and changes have been applied due to both system optimization and improved knowledge of the KSTAR cold components. The present specification of the HRS was fixed in March 2005. Consequent manufacturing of the main equipment, such as the "Compressor Station" (C/S), "Cold Box" (C/B), and "Distribution Box #1" (D/B #1), was completed by or under the supervision of Air Liquide DTA by the end of 2006. The major components of the C/S are 2 low-pressure and 2 high-pressure compressor units and an oil-removal system. The cooling power of the C/B at 4.5 K equivalent is 9 kW, achieved by using 6 turbo-expanders. The D/B #1 is a cryostat housing 49 cryogenic valves, 2 supercritical helium circulators, 1 cold compressor, and 7 heat exchangers immersed in a 6 m3 liquid helium storage. In this proceeding, the on-site installation and commissioning status of the HRS will be presented. In addition, the final specification and design features of the HRS and the

  2. Modeling effluent distribution and nitrate transport through an on-site wastewater system.

    Science.gov (United States)

    Hassan, G; Reneau, R B; Hagedorn, C; Jantrania, A R

    2008-01-01

Properly functioning on-site wastewater systems (OWS) are an integral component of the wastewater system infrastructure necessary to renovate wastewater before it reaches surface or ground waters. There are a large number of factors, including soil hydraulic properties, effluent quality and dispersal, and system design, that affect OWS function. The ability to evaluate these factors using a simulation model would improve the capability to determine the impact of wastewater application on the subsurface soil environment. An existing subsurface drip irrigation system (SDIS) dosed with sequential batch reactor effluent (SBRE) was used in this study. This system has the potential to solve soil and site problems that limit OWS and to reduce the potential for environmental degradation. Soil water potentials (Psi(s)) and nitrate (NO(3)) migration were simulated at 55- and 120-cm depths within and downslope of the SDIS using a two-dimensional code in HYDRUS-3D. Results show that the average measured Psi(s) were -121 and -319 cm, whereas simulated values were -121 and -322 cm at 55- and 120-cm depths, respectively, indicating unsaturated conditions. Average measured NO(3) concentrations were 0.248 and 0.176 mmol N L(-1), whereas simulated values were 0.237 and 0.152 mmol N L(-1) at 55- and 120-cm depths, respectively. Observed unsaturated conditions decreased the potential for NO(3) to migrate in more concentrated plumes away from the SDIS. The agreement (high R(2) values of approximately 0.97) between the measured and simulated Psi(s) and NO(3) concentrations indicates that HYDRUS-3D adequately simulated SBRE flow and NO(3) transport through the soil domain under a range of environmental and effluent application conditions.
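The agreement between measured and simulated series is typically quantified with the coefficient of determination; a minimal sketch with made-up numbers (not the study's data):

```python
import numpy as np

def r_squared(measured, simulated):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    ss_res = np.sum((m - s) ** 2)
    ss_tot = np.sum((m - m.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative soil-water potentials (cm), measured vs. simulated:
r2 = r_squared([-120, -130, -115, -320, -310],
               [-121, -128, -117, -322, -308])
```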

  3. Risk-Cost Estimation of On-Site Wastewater Treatment System Failures Using Extreme Value Analysis.

    Science.gov (United States)

    Kohler, Laura E; Silverstein, JoAnn; Rajagopalan, Balaji

    2017-05-01

      Owner resistance to increasing regulation of on-site wastewater treatment systems (OWTS), including obligatory inspections and upgrades, moratoriums and cease-and-desist orders in communities around the U.S. demonstrate the challenges associated with managing risks of inadequate performance of owner-operated wastewater treatment systems. As a result, determining appropriate and enforceable performance measures in an industry with little history of these requirements is challenging. To better support such measures, we develop a statistical method to predict lifetime failure risks, expressed as costs, in order to identify operational factors associated with costly repairs and replacement. A binomial logistic regression is used to fit data from public records of reported OWTS failures, in Boulder County, Colorado, which has 14 300 OWTS to determine the probability that an OWTS will be in a low- or high-risk category for lifetime repair and replacement costs. High-performing or low risk OWTS with repairs and replacements below the threshold of $9000 over a 40-year life are associated with more frequent inspections and upgrades following home additions. OWTS with a high risk of exceeding the repair cost threshold of $18 000 are further analyzed in a variation of extreme value analysis (EVA), Points Over Threshold (POT) where the distribution of risk-cost exceedance values are represented by a generalized Pareto distribution. The resulting threshold cost exceedance estimates for OWTS in the high-risk category over a 40-year expected life ranged from $18 000 to $44 000.
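The Points-Over-Threshold step can be sketched with a method-of-moments fit of the generalized Pareto distribution to cost exceedances. The synthetic costs below are assumptions for illustration, and the study's actual estimation method may differ.

```python
import numpy as np

def fit_gpd_mom(costs, threshold):
    """Fit a generalized Pareto distribution to exceedances over
    `threshold` by the method of moments:
        xi    = 0.5 * (1 - m^2 / s^2)
        sigma = 0.5 * m * (1 + m^2 / s^2)
    where m and s^2 are the sample mean and variance of the exceedances.
    Returns (shape xi, scale sigma)."""
    exc = np.asarray(costs, dtype=float)
    exc = exc[exc > threshold] - threshold
    m, v = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    return xi, sigma

# Synthetic lifetime repair costs (USD) above the $18,000 threshold;
# exponential tails correspond to xi ~ 0 and sigma ~ the exponential scale.
rng = np.random.default_rng(42)
costs = 18000.0 + rng.exponential(scale=8000.0, size=500)
xi, sigma = fit_gpd_mom(costs, 18000.0)
```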

  4. Computer Vision Systems

    Science.gov (United States)

    Gunasekaran, Sundaram

Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes: the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to "see" inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.
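Extracting such external attributes from a segmented image can be sketched as follows; a toy example of the idea (area, mean color, and a crude shape descriptor), not a production vision system.

```python
import numpy as np

def external_quality_features(rgb_image, mask):
    """Simple external quality attributes of a segmented food item:
    projected area (pixels), mean RGB color, and 'extent' (how fully
    the item fills its bounding box, a crude shape descriptor)."""
    area = int(mask.sum())
    mean_color = rgb_image[mask].mean(axis=0)
    ys, xs = np.nonzero(mask)
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return area, mean_color, area / bbox

# Toy image: a red 4x4 square on a black background.
img = np.zeros((10, 10, 3))
msk = np.zeros((10, 10), dtype=bool)
img[2:6, 2:6] = (200.0, 30.0, 30.0)
msk[2:6, 2:6] = True
area, color, extent = external_quality_features(img, msk)
```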

  5. Computational systems chemical biology.

    Science.gov (United States)

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole-body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering, and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is as yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  6. 76 FR 46852 - Workers From Kelly Services, Working On-Site at Delphi Automotive Systems, LLC, Powertrain...

    Science.gov (United States)

    2011-08-03

    ... workers from Kelly Services working on-site at Delphi Automotive Systems, LLC, El Paso, Texas. The workers...-site at Delphi Automotive Systems, LLC, Powertrain Division, El Paso, Texas. The amended notice...-site at Delphi Automotive Systems, LLC, Powertrain Division, El Paso, Texas, who became totally...

  7. The CTBT's International Monitoring System and On-Site Inspection Capabilities: a Status Report

    Science.gov (United States)

    Zerbo, Lassina

    2017-01-01

At its 20th anniversary the Comprehensive Nuclear-Test-Ban Treaty has gathered 183 State Signatories, of which 166 have ratified; 8 States remain to ratify before the Treaty can enter into force. In the meantime the CTBT verification regime has accumulated two decades' worth of experience and has achieved proven results. The regime includes a global system for monitoring the earth, the oceans and the atmosphere, and an on-site inspection (OSI) capability. It uses seismic, hydroacoustic, infrasound and radionuclide technologies to do so. More than 90% of the 337 facilities of the International Monitoring System (IMS) have been installed and are sending data to the International Data Centre (IDC) in Vienna, Austria for processing. These IMS data, along with IDC processed and reviewed products, are available to all States that have signed the Treaty. The monitoring system has been put to the test and has demonstrated its effectiveness by detecting, locating and reporting on the nuclear tests announced by the DPRK in 2006, 2009, 2013 and twice in 2016. In addition to detecting radioxenon consistent with the nuclear tests in 2006 and 2013, the IMS radionuclide network also added value in the response to the tragic events in Fukushima in 2011. We continue to find new civil and scientific applications of the IMS that are made available to the international community to deal with major societal issues such as sustainable development, disaster risk reduction and climate change. OSI capabilities continue to be developed and tested. The Integrated Field Exercise in Jordan in 2014 demonstrated that they have reached a high level of operational readiness. The CTBT has been a catalyst for the development of new scientific fields, in particular in noble gas monitoring technology. CTBTO seeks to continuously improve its technologies and methods through interaction with the scientific community.

  8. 75 FR 76487 - Haldex Brake Corporation, Commercial Vehicle Systems, Including On-Site Leased Workers of...

    Science.gov (United States)

    2010-12-08

    ...-Site Leased Workers of Johnston Integration Technologies, a Subsidiary of Johnston Companies, Iola, KS... company reports that workers leased from Johnston Integration Technologies, a subsidiary of Johnston... from Johnston Integration Technologies, a subsidiary of Johnston Companies working on-site at the...

  9. Comparative study of the microbial quality of greywater treated by three on-site treatment systems.

    Science.gov (United States)

    Friedler, E; Kovalio, R; Ben-Zvi, A

    2006-06-01

This paper analyses the performance of a pilot-scale treatment plant treating light domestic greywater. The treatment included three parallel treatment units: stand-alone sand filtration (SFEB), an RBC followed by sand filtration (SFRBC), and an MBR equipped with UF membranes (MBR). The performance of the SFEB unit was rather poor. The RBC and MBR units produced effluent of excellent quality, with COD of 42 and 40 mg l(-1), BOD of 1.8 and 1.1 mg l(-1), and turbidity of 0.6 and 0.2 NTU, respectively. The SFEB failed to remove heterotrophic microorganisms (HPC), while the SFRBC and the MBR exhibited 2.1 and 3.6 log removal, leading to effluent concentrations of 1.1 x 10(3) and 8.8 x 10(3) cfu ml(-1), respectively. Faecal coliform (FC) counts were 3.4 x 10(5), 1.4 x 10(5), 1.1 x 10(3), and 3.5 x 10(2) cfu 100 ml(-1) in raw greywater and in the SFEB, SFRBC, and MBR effluents, respectively. Further, in 60% of the samples no FC were detected in the MBR effluent. In order to simulate residence times in full-scale systems, effluents were disinfected and stored for 0.5 h, 3 h, 6 h (normal operation), and one week (extreme event). The average chlorine demand was 8.1, 3.8, and 2.9 mg l(-1) for the SFEB, SFRBC, and MBR effluents, respectively. Low residual chlorine (0.15-0.22 mg l(-1)) remained in all effluents even after a week-long storage. Disinfection reduced HPC by 5, 2, and 2 orders of magnitude in the SFEB, SFRBC, and MBR effluents, respectively, with no regrowth at short contact times (up to 6 hours). Some regrowth was observed after a week-long storage, leading to 10(6), 10(4), and 10(3) cfu ml(-1) (SFEB, SFRBC, and MBR, respectively). Disinfection reduced FC counts in all three types of effluent to 0 cfu 100 ml(-1), whilst no FC regrowth was observed after week-long storage. The results show that both the RBC and MBR treatment units are viable options for on-site greywater reuse.
The disinfection experiments strongly indicate that the health risk associated with the reuse of these effluents
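The log-removal figures in this record follow directly from influent and effluent counts; for example, for faecal coliforms between raw greywater and the MBR effluent:

```python
import math

def log_removal(c_in, c_out):
    """Log removal value: LRV = log10(C_in / C_out)."""
    return math.log10(c_in / c_out)

# Faecal coliform counts from the abstract (cfu/100 mL):
lrv_mbr = log_removal(3.4e5, 3.5e2)   # ~3.0 log removal, raw -> MBR effluent
```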

  10. 75 FR 34172 - Rexam Closure Systems, Inc., a Subsidiary of Rexam PLC, Including On-Site Leased Workers From...

    Science.gov (United States)

    2010-06-16

    ... Unemployment Insurance (UI) Wages Are Paid Through Owens Illinois Manufacturing, Hamlet, NC; Amended... workers of Rexam Closure Systems, Inc., a subsidiary of Rexam PLC, Hamlet, North Carolina. The notice was... from Olston Staffing were employed on-site at the Hamlet, North Carolina location of Rexam Closure...

  11. Evaluation of shallow-placed low pressure distribution systems in soils marginally suited for on-site waste treatment

    OpenAIRE

    Ijzerman, M. Marian

    1990-01-01

Two shallow-placed low pressure distribution (LPD) systems were evaluated in soils that were marginally suited for a conventional on-site wastewater disposal system (OSWDS) because of low hydraulic conductivity and shallow depth of soil to bedrock. The soils used for this study were Edom (fine, illitic, mesic, Typic Hapludult) and Penn-Bucks soil (fine-loamy, mixed, mesic, Ultic Hapludult). In the Edom soil, the LPD system was installed with four subsystem designs operating: a

  12. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  13. Evaluation of Approaches for Managing Nitrate Loading from On-Site Wastewater Systems near La Pine, Oregon

    Science.gov (United States)

    Morgan, David S.; Hinkle, Stephen R.; Weick, Rodney J.

    2007-01-01

    This report presents the results of a study by the U.S. Geological Survey, done in cooperation with the Oregon Department of Environmental Quality and Deschutes County, to develop a better understanding of the effects of nitrogen from on-site wastewater disposal systems on the quality of ground water near La Pine in southern Deschutes County and northern Klamath County, Oregon. Simulation models were used to test the conceptual understanding of the system and were coupled with optimization methods to develop the Nitrate Loading Management Model, a decision-support tool that can be used to efficiently evaluate alternative approaches for managing nitrate loading from on-site wastewater systems. The conceptual model of the system is based on geologic, hydrologic, and geochemical data collected for this study, as well as previous hydrogeologic and water quality studies and field testing of on-site wastewater systems in the area by other agencies. On-site wastewater systems are the only significant source of anthropogenic nitrogen to shallow ground water in the study area. Between 1960 and 2005 estimated nitrate loading from on-site wastewater systems increased from 3,900 to 91,000 pounds of nitrogen per year. When all remaining lots are developed (in 2019 at current building rates), nitrate loading is projected to reach nearly 150,000 pounds of nitrogen per year. Low recharge rates (2-3 inches per year) and ground-water flow velocities generally have limited the extent of nitrate occurrence to discrete plumes within 20-30 feet of the water table; however, hydraulic-gradient and age data indicate that, given sufficient time and additional loading, nitrate will migrate to depths where many domestic wells currently obtain water. In 2000, nitrate concentrations greater than 4 milligrams nitrogen per liter (mg N/L) were detected in 10 percent of domestic wells sampled by Oregon Department of Environmental Quality. 
Numerical simulation models were constructed at transect (2

  14. Secure computing on reconfigurable systems

    NARCIS (Netherlands)

    Fernandes Chaves, R.J.

    2007-01-01

This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment, where data security and protection against malicious attacks on the system are assured. The SCM is strongly based on encryption algorithms and on the

  16. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute x86-64 machine code, and recommends th...

  17. Central nervous system and computation.

    Science.gov (United States)

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F

    2011-12-01

    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  18. Recycling of treated domestic effluent from an on-site wastewater treatment system for hydroponics.

    Science.gov (United States)

    Oyama, N; Nair, J; Ho, G E

    2005-01-01

    An alternative method to conserve water and produce crops in arid regions is through hydroponics. Application of treated wastewater for hydroponics will help in stripping off nutrients from wastewater, maximising reuse through reduced evaporation losses, increasing control on quality of water and reducing risk of pathogen contamination. This study focuses on the efficiency of treated wastewater from an on-site aerobic wastewater treatment unit. The experiment aimed to investigate 1) nutrient reduction 2) microbial reduction and 3) growth rate of plants fed on wastewater compared to a commercial hydroponics medium. The study revealed that the chemical and microbial quality of wastewater after hydroponics was safe and satisfactory for irrigation and plant growth rate in wastewater hydroponics was similar to those grown in a commercial medium.

  19. Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Friday, Adrian

    2009-01-01

    First introduced two decades ago, the term ubiquitous computing is now part of the common vernacular. Ubicomp, as it is commonly called, has grown not just quickly but broadly so as to encompass a wealth of concepts and technology that serves any number of purposes across all of human endeavor......, an original ubicomp pioneer, Ubiquitous Computing Fundamentals brings together eleven ubiquitous computing trailblazers who each report on his or her area of expertise. Starting with a historical introduction, the book moves on to summarize a number of self-contained topics. Taking a decidedly human...... perspective, the book includes discussion on how to observe people in their natural environments and evaluate the critical points where ubiquitous computing technologies can improve their lives. Among a range of topics this book examines: How to build an infrastructure that supports ubiquitous computing...

  20. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  1. New computing systems and their impact on computational mechanics

    Science.gov (United States)

    Noor, Ahmed K.

    1989-01-01

    Recent advances in computer technology that are likely to impact computational mechanics are reviewed. The technical needs for computational mechanics technology are outlined. The major features of new and projected computing systems, including supersystems, parallel processing machines, special-purpose computing hardware, and small systems are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism on multiprocessor computers with a shared memory.

  2. MACSTOR, an on-site, dry, spent-fuel storage system developed by AECL for use by U. S. utilities

    Energy Technology Data Exchange (ETDEWEB)

    Durante, R.; Feinroth, H.; Pattanyus, P. (AECL Technologies, Bethesda, MD (United States))

    1992-01-01

    The continuing delay in the U.S. Department of Energy's Yucca Mountain and monitored retrievable storage spent-fuel disposal and storage programs has prompted U.S. utilities to consider expanding on-site storage of spent reactor fuel. Long-term, on-site storage has certain advantages to U.S. utilities since it eliminates the need for costly and difficult shipping and puts control of the spent fuel completely under the direction of the owner-utility. AECL Technologies (AECL), through its research company and Canada deuterium uranium (CANDU) engineering services division, has been developing on-site storage for Canadian heavy water nuclear plants for almost 20 yr. AECL has developed a design for a dry storage unit, designated MACSTOR (modular air-cooled storage), that can accommodate U.S. light water reactor (LWR) fuel elements and could become a candidate for the U.S. market. This paper describes MACSTOR and its evolution from the original silos and CANSTOR system that was developed and used in Canada. These systems are subject to regulatory controls by the Atomic Energy Control Board of Canada and have proven to be safe, convenient, and cost effective.

  3. Computer Security Systems Enable Access.

    Science.gov (United States)

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  4. Demonstration of an on-site PAFC cogeneration system with waste heat utilization by a new gas absorption chiller

    Energy Technology Data Exchange (ETDEWEB)

    Urata, Tatsuo [Tokyo Gas Company, LTD, Tokyo (Japan)

    1996-12-31

    Analysis and cost reduction of fuel cells is being promoted to achieve commercial on-site phosphoric acid fuel cells (on-site FC). However, for such cells to be effectively utilized, a cogeneration system designed to use the heat generated must be developed at low cost. Room heating and hot-water supply are the simplest and most efficient uses of the waste heat of fuel cells. However, due to the short room-heating period of about 4 months in most areas of Japan, the sites having demand for the waste heat of fuel cells throughout the year will be limited to hotels and hospitals. Tokyo Gas has therefore been developing an on-site FC and the technology to utilize the waste heat of fuel cells for room cooling by means of an absorption refrigerator. The paper describes the results of fuel cell cogeneration tests conducted on a double effect gas absorption chiller heater with auxiliary waste heat recovery (WGAR) that Tokyo Gas developed in its Energy Technology Research Laboratory.

  5. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  6. Dynamical Systems Some Computational Problems

    CERN Document Server

    Guckenheimer, J; Guckenheimer, John; Worfolk, Patrick

    1993-01-01

    We present several topics involving the computation of dynamical systems. The emphasis is on work in progress and the presentation is informal -- there are many technical details which are not fully discussed. The topics are chosen to demonstrate the various interactions between numerical computation and mathematical theory in the area of dynamical systems. We present an algorithm for the computation of stable manifolds of equilibrium points, describe the computation of Hopf bifurcations for equilibria in parametrized families of vector fields, survey the results of studies of codimension two global bifurcations, discuss a numerical analysis of the Hodgkin and Huxley equations, and describe some of the effects of symmetry on local bifurcation.

  7. Fuel cell on-site integrated energy system parametric analysis of a residential complex

    Science.gov (United States)

    Simons, S. N.

    1977-01-01

    The use of a phosphoric acid fuel cell powerplant to provide all the electricity required by an 81-unit garden apartment complex is studied. Byproduct heat is recovered and provides some of the heat required by the complex. The onsite integrated energy system contains energy conversion equipment, including combinations of compression and absorption chillers, heat pumps, electric resistance heaters, and thermal storage. The annual fuel requirement for several onsite integrated energy systems, as well as the fuel cell breakeven cost for one specific system, were calculated. It is found that electrical efficiency cannot be traded off against thermal efficiency without paying a penalty in system efficiency.
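
    The efficiency tradeoff noted in this abstract can be made concrete with a back-of-the-envelope sketch. All numbers and the sizing rule below are hypothetical illustrations, not taken from the study:

```python
def annual_fuel_mwh(elec_demand, heat_demand, eta_e, eta_t):
    """Annual fuel energy (MWh) for a fuel cell sized to the electric load.

    eta_e: electrical efficiency; eta_t: recoverable-heat fraction.
    Any heat shortfall is assumed met by a ~100%-efficient backup heater.
    (Hypothetical model for illustration only.)
    """
    fuel_for_elec = elec_demand / eta_e            # fuel burned to meet the electric load
    byproduct_heat = fuel_for_elec * eta_t         # recoverable heat from that same fuel
    backup_heat = max(0.0, heat_demand - byproduct_heat)
    return fuel_for_elec + backup_heat

# Assumed site demand: 600 MWh electric, 400 MWh thermal per year.
print(annual_fuel_mwh(600, 400, eta_e=0.40, eta_t=0.35))  # 1500.0
```

    Raising `eta_e` cuts the fuel burned for electricity but also shrinks the byproduct heat, so if the recovered heat drops below the thermal demand the backup heater claws some of the savings back; that is one way the penalty described in the abstract can arise.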

  8. Computational Systems Chemical Biology

    OpenAIRE

    Oprea, Tudor I.; Elebeoba E. May; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology, SCB (Oprea et al., 2007).

  9. Hybridity in Embedded Computing Systems

    Institute of Scientific and Technical Information of China (English)

    虞慧群; 孙永强

    1996-01-01

    An embedded system is a system in which a computer is used as a component of a larger device. In this paper, we study hybridity in embedded systems and present an interval-based temporal logic to express and reason about hybrid properties of such systems.

  10. Computer algebra in systems biology

    CERN Document Server

    Laubenbacher, Reinhard

    2007-01-01

    Systems biology focuses on the study of entire biological systems rather than on their individual components. With the emergence of high-throughput data generation technologies for molecular biology and the development of advanced mathematical modeling techniques, this field promises to provide important new insights. At the same time, with the availability of increasingly powerful computers, computer algebra has developed into a useful tool for many applications. This article illustrates the use of computer algebra in systems biology by way of a well-known gene regulatory network, the Lac Operon in the bacterium E. coli.
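
    As an illustration of the kind of discrete model this record refers to, the sketch below implements a toy Boolean network loosely inspired by the lac operon. The update rules and variable names are simplifications assumed here for illustration, not the model from the cited article:

```python
from itertools import product

def step(state, Ge, Le):
    """One synchronous update of a toy lac operon Boolean network.

    state = (M, E, L): mRNA, enzymes (LacZ/permease), internal lactose.
    Ge, Le: fixed inputs for external glucose and external lactose.
    (Assumed simplified rules, for illustration only.)
    """
    M, E, L = state
    M_next = (not Ge) and (L or Le)   # transcription: no glucose, some lactose signal
    E_next = M                        # enzyme level follows mRNA
    L_next = (not Ge) and Le and E    # uptake needs permease and external lactose
    return (M_next, E_next, L_next)

def fixed_points(Ge, Le):
    """Enumerate all steady states for the given inputs."""
    return [s for s in product([False, True], repeat=3) if step(s, Ge, Le) == s]

print(fixed_points(Ge=False, Le=True))   # [(True, True, True)]  -> operon induced
print(fixed_points(Ge=True, Le=False))   # [(False, False, False)] -> operon off
```

    Computer algebra enters when such polynomial (Boolean) update rules are analyzed symbolically, e.g. by solving for fixed points over a finite field instead of enumerating states as done here.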

  11. Integrated Smartphone-App-Chip System for On-Site Parts-Per-Billion-Level Colorimetric Quantitation of Aflatoxins.

    Science.gov (United States)

    Li, Xiaochun; Yang, Fan; Wong, Jessica X H; Yu, Hua-Zhong

    2017-09-05

    We demonstrate herein an integrated, smartphone-app-chip (SPAC) system for on-site quantitation of food toxins, as demonstrated with aflatoxin B1 (AFB1), at parts-per-billion (ppb) level in food products. The detection is based on an indirect competitive immunoassay fabricated on a transparent plastic chip with the assistance of a microfluidic channel plate. A 3D-printed optical accessory attached to a smartphone is adapted to align the assay chip and to provide uniform illumination for imaging, with which high-quality images of the assay chip are captured by the smartphone camera and directly processed using a custom-developed Android app. The performance of this smartphone-based detection system was tested using both spiked and moldy corn samples; consistent results with conventional enzyme-linked immunosorbent assay (ELISA) kits were obtained. The achieved detection limit (3 ± 1 ppb, equivalent to μg/kg) and dynamic response range (0.5-250 ppb) meet the requested testing standards set by authorities in China and North America. We envision that the integrated SPAC system promises to be a simple and accurate method of food toxin quantitation, bringing much benefit for rapid on-site screening.

  12. Develop and test fuel cell powered on-site integrated total energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Kaufman, A.; Werth, J.

    1988-12-01

    This report describes the design, fabrication and testing of a 25kW phosphoric acid fuel cell system aimed at stationary applications, and the technology development underlying that system. The 25kW fuel cell ran at rated power in both the open and closed loop mode in the summer of 1988. Problems encountered and solved include acid replenishment leakage, gas cross-leakage and edge-leakage in bipolar plates, corrosion of metallic cooling plates and current collectors, cooling groove depth variations, coolant connection leaks, etc. 84 figs., 7 tabs.

  13. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  15. On-site analysis of modified surface using dual beam system

    Energy Technology Data Exchange (ETDEWEB)

    Naramoto, Hiroshi; Aoki, Yasushi; Yamamoto, Shunya; Goppelt-Langer, P.; Gan Mingle; Zeng Jianer; Takeshita, Hidefumi [Japan Atomic Energy Research Inst., Takasaki, Gunma (Japan). Takasaki Radiation Chemistry Research Establishment

    1997-03-01

    Recent results obtained using a dual ion beam system at JAERI/Takasaki are reported. In this system, both ion implantation and ion beam analysis can be performed alternately or simultaneously at low temperatures. In sapphire implanted with {sup 51}V{sup +} ions, the amorphization process is analyzed with reference to the <0001> aligned spectra taken at different temperatures. Defect profiles that differ from the simple accumulation of a standard Gaussian form are discussed. The depth showing the maximum damage at the initial stage of implantation is quite shallow compared with those reported previously. The thermal annealing behaviors of the lattice damage and of the implanted V atoms also differ between samples implanted at low and at room temperature: in the former, fine particles of vanadium oxide form coherently, with easy recovery in the high-dose sample, whereas in the latter a mixed oxide alloy forms. (author)

  16. Shift in the microbial ecology of a hospital hot water system following the introduction of an on-site monochloramine disinfection system.

    Science.gov (United States)

    Baron, Julianne L; Vikram, Amit; Duda, Scott; Stout, Janet E; Bibby, Kyle

    2014-01-01

    Drinking water distribution systems, including premise plumbing, contain a diverse microbiological community that may include opportunistic pathogens. On-site supplemental disinfection systems have been proposed as a control method for opportunistic pathogens in premise plumbing. The majority of on-site disinfection systems to date have been installed in hospitals due to the high concentration of opportunistic pathogen susceptible occupants. The installation of on-site supplemental disinfection systems in hospitals allows for evaluation of the impact of on-site disinfection systems on drinking water system microbial ecology prior to widespread application. This study evaluated the impact of supplemental monochloramine on the microbial ecology of a hospital's hot water system. Samples were taken three months and immediately prior to monochloramine treatment and monthly for the first six months of treatment, and all samples were subjected to high throughput Illumina 16S rRNA region sequencing. The microbial community composition of monochloramine treated samples was dramatically different than the baseline months. There was an immediate shift towards decreased relative abundance of Betaproteobacteria, and increased relative abundance of Firmicutes, Alphaproteobacteria, Gammaproteobacteria, Cyanobacteria and Actinobacteria. Following treatment, microbial populations grouped by sampling location rather than sampling time. Over the course of treatment the relative abundance of certain genera containing opportunistic pathogens and genera containing denitrifying bacteria increased. The results demonstrate the driving influence of supplemental disinfection on premise plumbing microbial ecology and suggest the value of further investigation into the overall effects of premise plumbing disinfection strategies on microbial ecology and not solely specific target microorganisms.

  17. Robot computer problem solving system

    Science.gov (United States)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken were formulated in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  18. Portable automatic bioaerosol sampling system for rapid on-site detection of targeted airborne microorganisms.

    Science.gov (United States)

    Usachev, Evgeny V; Pankova, Anna V; Rafailova, Elina A; Pyankov, Oleg V; Agranovski, Igor E

    2012-10-26

    Bioaerosols can cause various severe human and animal diseases, and their timely and precise detection and control has become a significant scientific and technological topic for consideration. Over the last few decades bioaerosol detection has become an important bio-defense related issue. Many types of portable and stationary bioaerosol samplers have been developed and, in some cases, integrated into automated detection systems utilizing various microbiological techniques for analysis of collected microbes. This paper describes a personal sampler used in conjunction with a portable real-time PCR technique. It was found that a single fluorescent dye could be successfully used in multiplex format for qualitative detection of numerous targeted bioaerosols in one PCR tube, making the suggested technology a reliable "first alert" device. This approach has been specifically developed and successfully verified for rapid detection of targeted microorganisms by portable PCR devices, which is especially important under field conditions, where the number of microorganisms of interest usually exceeds the number of available PCR reaction tubes. The approach allows detecting targeted microorganisms and triggering the corresponding sanitary and quarantine procedures to localize possible spread of dangerous infections. Subsequent detailed analysis of the sample under controlled laboratory conditions could be used to identify exactly which particular microorganism of a targeted group has been rapidly detected in the field. It was also found that the personal sampler has a collection efficiency higher than 90% even for small-sized viruses (>20 nm) and stable performance over extended operating periods. In addition, it was found that, for the microorganisms used in this project (bacteriophages MS2 and T4), elimination of the nucleic acid isolation and purification steps during sample preparation does not reduce the system's sensitivity, which is extremely

  19. Real-time and on-site γ-ray radiation response testing system for semiconductor devices and its applications

    Energy Technology Data Exchange (ETDEWEB)

    Mu, Yifei, E-mail: Y.Mu@student.liverpool.ac.uk [Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ (United Kingdom); Zhao, Ce Zhou, E-mail: cezhou.zhao@xjtlu.edu.cn [Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123 (China); Qi, Yanfei, E-mail: yanfei.qi01@xjtlu.edu.cn [Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123 (China); Lam, Sang, E-mail: s.lam@xjtlu.edu.cn [Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123 (China); Zhao, Chun, E-mail: garyzhao@ust.hk [Nano and Advanced Materials Institute, Hong Kong University of Science and Technology, Kowloon (Hong Kong); Lu, Qifeng, E-mail: qifeng@liverpool.ac.uk [Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ (United Kingdom); Cai, Yutao, E-mail: yutao.cai@xjtlu.edu.cn [Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123 (China); Mitrovic, Ivona Z., E-mail: ivona@liverpool.ac.uk [Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ (United Kingdom); Taylor, Stephen, E-mail: s.taylor@liverpool.ac.uk [Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ (United Kingdom); Chalker, Paul R., E-mail: pchalker@liverpool.ac.uk [Center for Materials and Structures, School of Engineering, University of Liverpool, Liverpool L69 3GH (United Kingdom)

    2016-04-01

    The construction of a turnkey real-time and on-site radiation response testing system for semiconductor devices is reported. Components of an on-site radiation response probe station, which contains a 1.11 GBq Cs{sup 137} gamma (γ)-ray source, and equipment of a real-time measurement system are described in detail for the construction of the whole system. The real-time measurement system includes a conventional capacitance–voltage (C–V) and stress module, a pulse C–V and stress module, a conventional current–voltage (I–V) and stress module, a pulse I–V and stress module, a DC on-the-fly (OTF) module and a pulse OTF module. Electrical characteristics of MOS capacitors or MOSFET devices are measured by each module integrated in the probe station under continuous γ-ray exposure and the measurement results are presented. The dose rates of different gate dielectrics are calculated by a novel calculation model based on the Cs{sup 137} γ-ray source placed in the probe station. For the sake of operators’ safety, an equivalent dose rate of 70 nSv/h at a given operation distance is indicated by a dose attenuation model in the experimental environment. HfO{sub 2} thin films formed by atomic layer deposition are employed to investigate the radiation response of the high-κ material by using the conventional C–V and pulse C–V modules. The irradiation exposure of the sample is carried out with a dose rate of 0.175 rad/s and ±1 V bias in the radiation response testing system. Analysis of flat-band voltage shifts (ΔV{sub FB}) of the MOS capacitors suggests that the on-site and real-time/pulse measurements detect more serious degradation of the HfO{sub 2} thin films compared with the off-site irradiation and conventional measurement techniques.

  20. Operating systems. [of computers

    Science.gov (United States)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
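
    The semaphore mechanism mentioned in the abstract can be illustrated with a short sketch (an assumed example, not drawn from the article): two "primitive processes", a producer and a consumer, synchronize through a counting semaphore so the consumer never runs ahead of the producer.

```python
import threading
import queue

items = queue.Queue()             # shared buffer between the two processes
filled = threading.Semaphore(0)   # counts items available to the consumer

def producer():
    for i in range(3):
        items.put(i * i)
        filled.release()          # signal: one more item is available

def consumer(out):
    for _ in range(3):
        filled.acquire()          # block until the producer has signalled
        out.append(items.get())

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t2.start()                        # consumer may start first; it simply blocks
t1.start()
t1.join()
t2.join()
print(out)  # [0, 1, 4]
```

    The queue here plays the role of the 'pipe' in the abstract's hierarchy, while the semaphore provides the lower-level synchronization primitive.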

  1. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  2. On-site residence time in a driven diffusive system: violation and recovery of mean-field

    CERN Document Server

    Messelink, Joris J B; Vahabi, Mahsa; MacKintosh, Fred C; Sharma, Abhinav

    2016-01-01

    We investigate simple one-dimensional driven diffusive systems with open boundaries. We are interested in the average on-site residence time defined as the time a particle spends on a given site before moving on to the next site. Using mean-field theory, we obtain an analytical expression for the on-site residence times. By comparing the analytic predictions with numerics, we demonstrate that the mean-field significantly underestimates the residence time due to the neglect of time correlations in the local density of particles. The temporal correlations are particularly long-lived near the average shock position, where the density changes abruptly from low to high. By using Domain wall theory (DWT), we obtain highly accurate estimates of the residence time for different boundary conditions. We apply our analytical approach to residence times in a totally asymmetric exclusion process (TASEP), TASEP coupled to Langmuir kinetics (TASEP + LK), and TASEP coupled to mutually interactive LK (TASEP + MILK). The high ...
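
    The on-site residence time defined in this abstract can be estimated directly by simulation. The following is a minimal sketch (not the authors' code) of a random-sequential TASEP with open boundaries and assumed injection/extraction rates `alpha` and `beta`, accumulating the time each particle spends on a site before hopping onward:

```python
import random

def simulate_tasep(L=50, alpha=0.3, beta=0.3, steps=200000, seed=1):
    """Mean on-site residence time in a TASEP with open boundaries (toy sketch)."""
    random.seed(seed)
    lattice = [0] * L          # 1 = site occupied
    arrival = [0.0] * L        # time the current particle arrived at each site
    total_res = [0.0] * L      # accumulated residence times per site
    counts = [0] * L           # number of completed visits per site
    t = 0.0
    for _ in range(steps):
        t += 1.0 / (L + 1)     # one randomly chosen bond per sub-step
        i = random.randrange(L + 1)
        if i == 0:             # injection at the left boundary with rate alpha
            if lattice[0] == 0 and random.random() < alpha:
                lattice[0] = 1
                arrival[0] = t
        elif i == L:           # extraction at the right boundary with rate beta
            if lattice[L - 1] == 1 and random.random() < beta:
                lattice[L - 1] = 0
                total_res[L - 1] += t - arrival[L - 1]
                counts[L - 1] += 1
        else:                  # bulk hop from site i-1 to empty site i
            if lattice[i - 1] == 1 and lattice[i] == 0:
                lattice[i - 1], lattice[i] = 0, 1
                total_res[i - 1] += t - arrival[i - 1]
                counts[i - 1] += 1
                arrival[i] = t
    return [r / c if c else 0.0 for r, c in zip(total_res, counts)]

res = simulate_tasep()
print(res[25])  # residence time at a bulk site
```

    Comparing such simulated residence times against the mean-field prediction is the kind of check the abstract describes; the slow density fluctuations near the shock are what the simulation captures and mean-field misses.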

  3. A Study on Usage of on-site Multi-monitoring System in Laser Processing of Paper Materials

    Science.gov (United States)

    Piili, Heidi

    Laser technology offers advantages for paper material processing: it is a non-contact method, provides freedom of geometry, and is a reliable technology for non-stop production. One reason for the low utilization of lasers in paper manufacturing is the lack of published research, which is the main motivation to study the utilization of an on-site multi-monitoring system (MMS) in characterizing the interaction between a laser beam and paper materials. The target of the MMS is to be able to control the processing of paper, but also to gain a better understanding of the basic phenomena. The laser equipment used was a TRUMPF TLF 2700 CO2 laser (wavelength 10.6 μm) with a power range of 190-2500 W. The MMS consisted of a spectrometer, a pyrometer and an active illumination imaging system. This on-site study was carried out by treating dried kraft pulp (grammage of 67 g m-2) with different laser power levels, focal plane position settings and interaction times. It was concluded that the spectrometer and pyrometer are the best devices in the MMS: their set-up in the laser process is easy, they detect data fast enough, and analysis of the data is easy afterwards. The active illumination imaging system is capable of capturing images of different phases of the interaction, but analysis of the images is time-consuming. When the active illumination imaging system is combined with the spectrometer and pyrometer, i.e. when the full MMS is used, it reveals the basic phenomena occurring during the interaction. For example, it was noticed that holes created by laser exposure form gradually: first a small hole forms in the interaction area, and then the hole expands until the interaction ends.

  4. On Dependability of Computing Systems

    Institute of Scientific and Technical Information of China (English)

    XU Shiyi

    1999-01-01

    With the rapid development and wide application of computing systems, on which ever more reliance is placed, a dependable system will be much more important than ever. This paper first aims at giving informal but precise definitions characterizing the various attributes of dependability of computing systems; then the importance of (and the relationships among) all the attributes is explained. Dependability is first introduced as a global concept which subsumes the usual attributes of reliability, availability, maintainability, safety and security. The basic definitions given here are then commended and supplemented by detailed material and additional explanations in the subsequent sections. The presentation has been structured as follows so as to draw the reader's attention to the important attributes of dependability: * search for a small number of concise concepts enabling the dependability attributes to be expressed as clearly as possible; * use of terms which are identical or as close as possible to those commonly used nowadays. This paper is also intended to provoke people's interest in designing a dependable computing system.

  5. Standard Guide for On-Site Inspection and Verification of Operation of Solar Domestic Hot Water Systems

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1987-01-01

    1.1 This guide covers procedures and test methods for conducting an on-site inspection and acceptance test of an installed domestic hot water system (DHW) using flat plate, concentrating-type collectors or tank absorber systems. 1.2 It is intended as a simple and economical acceptance test to be performed by the system installer or an independent tester to verify that critical components of the system are functioning and to acquire baseline data reflecting overall short term system heat output. 1.3 This guide is not intended to generate accurate measurements of system performance (see ASHRAE standard 95-1981 for a laboratory test) or thermal efficiency. 1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine th...

  6. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  7. Computers in Information Sciences: On-Line Systems.

    Science.gov (United States)

    COMPUTERS, *BIBLIOGRAPHIES, *ONLINE SYSTEMS, * INFORMATION SCIENCES , DATA PROCESSING, DATA MANAGEMENT, COMPUTER PROGRAMMING, INFORMATION RETRIEVAL, COMPUTER GRAPHICS, DIGITAL COMPUTERS, ANALOG COMPUTERS.

  8. Modelling the effects of on-site greywater reuse and low flush toilets on municipal sewer systems.

    Science.gov (United States)

    Penn, R; Schütze, M; Friedler, E

    2013-01-15

    On-site greywater reuse (GWR) and installation of water-efficient toilets (WET) reduce urban freshwater demand. Research on GWR and WET has generally overlooked the effects that GWR may have on municipal sewer systems. This paper discusses and quantifies these effects. The effects of GWR and WET, positive and negative, were studied by modelling a representative urban sewer system. GWR scenarios were modelled and analysed using the SIMBA simulation system. The results show that, as expected, the flow, velocity and proportional depth decrease as GWR increases. Nevertheless, the reduction is not evenly distributed throughout the day but mainly occurs during the morning and evening peaks. Examination of the effects of reduced toilet flush volumes revealed that in some of the GWR scenarios flows, velocities and proportional depths in the sewer were reduced, while in other GWR scenarios discharge volumes, velocities and proportional depths did not change. Further, it is indicated that as a result of GWR and installation of WET, sewer blockage rates are not expected to increase significantly. The results support the option to construct new sewer systems with smaller pipe diameters. The analysis shows that as the penetration of GWR systems increases, and with the installation of WET, concentrations of pollutants also increase. In GWR scenarios (when toilet flush volume is not reduced) the increase in pollutant concentrations is lower than the proportional reduction of sewage flow. Moreover, the results show that the spatial distribution of houses reusing GW does not significantly affect the parameters examined.
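The direction of the concentration effect described above follows from a simple mass balance: if greywater reuse removes part of the flow while much of the pollutant load remains, concentrations must rise. The sketch below illustrates this with assumed numbers; it is not the SIMBA sewer model used in the study.

```python
# Illustrative mass-balance sketch (not the SIMBA model from the paper):
# on-site greywater reuse removes a fraction of the wastewater flow while,
# in this simplified view, the pollutant load stays constant.

def concentration(load_kg_per_day: float, flow_m3_per_day: float) -> float:
    """Pollutant concentration in g/m3 (= mg/L) from daily load and flow."""
    return load_kg_per_day * 1000.0 / flow_m3_per_day

baseline_flow = 200.0   # m3/day, assumed baseline sewage flow
load = 24.0             # kg/day pollutant load, assumed constant

for reuse_fraction in (0.0, 0.2, 0.4):
    flow = baseline_flow * (1.0 - reuse_fraction)
    c = concentration(load, flow)
    print(f"reuse {reuse_fraction:.0%}: flow {flow:.0f} m3/d, conc {c:.0f} mg/L")
```

In the real system the load also drops when greywater (and its pollutants) is diverted, which is one reason the study finds the concentration increase to be less than proportional to the flow reduction.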

  9. Shift in the microbial ecology of a hospital hot water system following the introduction of an on-site monochloramine disinfection system.

    Directory of Open Access Journals (Sweden)

    Julianne L Baron

    Full Text Available Drinking water distribution systems, including premise plumbing, contain a diverse microbiological community that may include opportunistic pathogens. On-site supplemental disinfection systems have been proposed as a control method for opportunistic pathogens in premise plumbing. The majority of on-site disinfection systems to date have been installed in hospitals due to the high concentration of opportunistic pathogen susceptible occupants. The installation of on-site supplemental disinfection systems in hospitals allows for evaluation of the impact of on-site disinfection systems on drinking water system microbial ecology prior to widespread application. This study evaluated the impact of supplemental monochloramine on the microbial ecology of a hospital's hot water system. Samples were taken three months and immediately prior to monochloramine treatment and monthly for the first six months of treatment, and all samples were subjected to high throughput Illumina 16S rRNA region sequencing. The microbial community composition of monochloramine treated samples was dramatically different than the baseline months. There was an immediate shift towards decreased relative abundance of Betaproteobacteria, and increased relative abundance of Firmicutes, Alphaproteobacteria, Gammaproteobacteria, Cyanobacteria and Actinobacteria. Following treatment, microbial populations grouped by sampling location rather than sampling time. Over the course of treatment the relative abundance of certain genera containing opportunistic pathogens and genera containing denitrifying bacteria increased. The results demonstrate the driving influence of supplemental disinfection on premise plumbing microbial ecology and suggest the value of further investigation into the overall effects of premise plumbing disinfection strategies on microbial ecology and not solely specific target microorganisms.
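The community shifts reported above are expressed as relative abundances, which can be computed from raw 16S rRNA read counts per taxon. The snippet below is a minimal illustration; the taxon names come from the abstract, but the counts are invented for demonstration, not the study's data.

```python
from collections import Counter

def relative_abundance(counts: Counter) -> dict:
    """Convert 16S read counts per taxon into relative abundances (fractions)."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

# Assumed read counts for one sampling location before/after treatment:
before = Counter({"Betaproteobacteria": 800, "Firmicutes": 100, "Actinobacteria": 100})
after = Counter({"Betaproteobacteria": 200, "Firmicutes": 500, "Actinobacteria": 300})

print(relative_abundance(before))   # Betaproteobacteria dominates pre-treatment
print(relative_abundance(after))    # shift toward Firmicutes after monochloramine
```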

  10. Field study of the composition of greywater and comparison of microbiological indicators of water quality in on-site systems.

    Science.gov (United States)

    Leonard, Margaret; Gilpin, Brent; Robson, Beth; Wall, Katrina

    2016-08-01

    Thirty on-site greywater systems were sampled to determine greywater characteristics and practices in the field. Kitchen greywater was present at eight sites and urine was included at seven sites. These non-traditional sources resulted in significantly higher concentrations of enterococci and 5-day biochemical oxygen demand (BOD5) in greywater. Even with the removal of these sources, the concentrations of microbial indicators indicated high levels of contamination could occur across all greywater sources, including "light" greywater. Using multiple microbial indicators showed that all samples had the potential for faecal contamination. Bacteroidales markers were confirmed in treated greywater and in each greywater source, highlighting the potential for human faecal contamination. Although Escherichia coli was absent in treated greywater recycled to the house, other microbial indicators were present; hence, caution is required in using E. coli concentrations as the sole indicator of microbiological water quality. High BOD5 or total suspended solid concentrations exceeded the levels recommended for effective disinfection. Subsurface irrigation, which is assumed to provide a five-log reduction in exposure, is a suitable reuse option for non-disinfected greywater. Only half the occupants had a good understanding of their greywater systems and 25 % of systems were poorly maintained. Elevated microbial indicator contamination of greywater sludge is a potential hazard during maintenance.

  11. Aging and computational systems biology.

    Science.gov (United States)

    Mooney, Kathleen M; Morgan, Amy E; Mc Auley, Mark T

    2016-01-01

    Aging research is undergoing a paradigm shift, which has led to new and innovative methods of exploring this complex phenomenon. The systems biology approach endeavors to understand biological systems in a holistic manner, by taking account of intrinsic interactions, while also attempting to account for the impact of external inputs, such as diet. A key technique employed in systems biology is computational modeling, which involves mathematically describing and simulating the dynamics of biological systems. Although a large number of computational models have been developed in recent years, these models have focused on various discrete components of the aging process, and to date no model has succeeded in completely representing the full scope of aging. Combining existing models or developing new models may help to address this need and in so doing could help achieve an improved understanding of the intrinsic mechanisms which underpin aging.

  12. Computational Systems for Multidisciplinary Applications

    Science.gov (United States)

    Soni, Bharat; Haupt, Tomasz; Koomullil, Roy; Luke, Edward; Thompson, David

    2002-01-01

    In this paper, we briefly describe our efforts to develop complex simulation systems. We focus first on four key infrastructure items: enterprise computational services, simulation synthesis, geometry modeling and mesh generation, and a fluid flow solver for arbitrary meshes. We conclude by presenting three diverse applications developed using these technologies.

  13. Computational Aeroacoustic Analysis System Development

    Science.gov (United States)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

    Many industrial and commercial products operate in a dynamic flow environment, and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance of characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high-fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide the high temporal and spatial accuracy required for aeroacoustic calculations through the development of a high-order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set-up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high-order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values onto the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed

  14. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  15. Redundant computing for exascale systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Brightwell, Ronald Brian

    2010-12-01

    Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience.
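The checkpoint/restart overhead cited above can be estimated with Young's classic first-order model for the optimal checkpoint interval. This is a textbook approximation, not the paper's simulation method, and the failure-rate numbers below are assumptions for illustration.

```python
import math

def optimal_checkpoint_interval(checkpoint_time_s: float, mtbf_s: float) -> float:
    """Young's first-order approximation: tau ~ sqrt(2 * delta * MTBF)."""
    return math.sqrt(2.0 * checkpoint_time_s * mtbf_s)

def wasted_fraction(tau: float, delta: float, mtbf: float) -> float:
    """Rough fraction of time lost to checkpointing plus expected rework."""
    return delta / tau + tau / (2.0 * mtbf)

# Assumed numbers: 5-minute checkpoints, 5-year MTBF per node; the system
# MTBF shrinks inversely with node count.
delta = 300.0
for nodes in (1_000, 50_000):
    node_mtbf = 5.0 * 365 * 24 * 3600
    system_mtbf = node_mtbf / nodes
    tau = optimal_checkpoint_interval(delta, system_mtbf)
    print(f"{nodes} nodes: checkpoint every {tau / 60:.0f} min, "
          f"~{wasted_fraction(tau, delta, system_mtbf):.0%} of time lost")
```

Under these assumptions the lost-time fraction grows from a few percent at 1,000 nodes to tens of percent at 50,000 nodes, consistent with the trend the abstract describes.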

  16. Computer-aided system design

    Science.gov (United States)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  17. Assessment of the reliability of an on-site MBR system for greywater treatment and the associated aesthetic and health risks.

    Science.gov (United States)

    Friedler, E; Shwartzman, Z; Ostfeld, A

    2008-01-01

    This study analyses the reliability of an on-site MBR system for greywater treatment and reuse. To achieve this goal, simulations were performed based on the IWA ASM1 model, which was adapted to describe the biological and physical mechanisms of MBR-based greywater treatment. Model results were found to agree well with experimental data from an on-site pilot greywater treatment plant, after which the calibrated model was used in Monte Carlo mode to generate statistical data on MBR system performance under different scenarios of failures and inflow load variations. Effluent quality and the associated risks were successfully estimated.
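A Monte Carlo reliability analysis of the kind described can be sketched as follows: sample inflow loads and occasional process failures, then estimate how often the effluent exceeds a quality limit. The distributions, removal rates and limit below are illustrative assumptions, not the calibrated ASM1/MBR model from the study.

```python
import random

def simulate_effluent_bod(n_runs: int = 10_000, seed: int = 1) -> float:
    """Monte Carlo sketch: fraction of runs in which effluent BOD exceeds a
    limit. All numbers are illustrative assumptions, not the study's model."""
    rng = random.Random(seed)
    limit_mg_l = 10.0       # assumed effluent BOD limit
    exceed = 0
    for _ in range(n_runs):
        # Lognormal inflow BOD, median ~150 mg/L (typical greywater range).
        inflow_bod = rng.lognormvariate(5.0, 0.3)
        removal = 0.97                      # normal MBR removal efficiency
        if rng.random() < 0.02:             # assumed 2% chance of a failure
            removal = 0.80                  # degraded removal during failure
        if inflow_bod * (1.0 - removal) > limit_mg_l:
            exceed += 1
    return exceed / n_runs

print(f"P(effluent exceeds limit) ~ {simulate_effluent_bod():.1%}")
```

Under these assumptions the exceedance probability is dominated by the failure scenario, which is exactly the kind of insight such a simulation is meant to expose.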

  18. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  19. Floating Chip Mounting System Driven by Repulsive Force of Permanent Magnets for Multiple On-Site SPR Immunoassay Measurements

    Directory of Open Access Journals (Sweden)

    Emi Tamechika

    2012-10-01

    Full Text Available We have developed a measurement chip installation/removal mechanism for a surface plasmon resonance (SPR immunoassay analysis instrument designed for frequent testing, which requires a rapid and easy technique for changing chips. The key components of the mechanism are refractive index matching gel coated on the rear of the SPR chip and a float that presses the chip down. The refractive index matching gel made it possible to optically couple the chip and the prism of the SPR instrument easily via elastic deformation with no air bubbles. The float has an autonomous attitude control function that keeps the chip parallel in relation to the SPR instrument by employing the repulsive force of permanent magnets between the float and a float guide located in the SPR instrument. This function is realized by balancing the upward elastic force of the gel and the downward force of the float, which experiences a leveling force from the float guide. This system makes it possible to start an SPR measurement immediately after chip installation and to remove the chip immediately after the measurement with a simple and easy method that does not require any fine adjustment. Our sensor chip, which we installed using this mounting system, successfully performed an immunoassay measurement on a model antigen (spiked human-IgG in a model real sample (non-homogenized milk that included many kinds of interfering foreign substances without any sample pre-treatment. The ease of the chip installation/removal operation and simple measurement procedure are suitable for frequent on-site agricultural, environmental and medical testing.

  20. Floating chip mounting system driven by repulsive force of permanent magnets for multiple on-site SPR immunoassay measurements.

    Science.gov (United States)

    Horiuchi, Tsutomu; Tobita, Tatsuya; Miura, Toru; Iwasaki, Yuzuru; Seyama, Michiko; Inoue, Suzuyo; Takahashi, Jun-ichi; Haga, Tsuneyuki; Tamechika, Emi

    2012-10-17

    We have developed a measurement chip installation/removal mechanism for a surface plasmon resonance (SPR) immunoassay analysis instrument designed for frequent testing, which requires a rapid and easy technique for changing chips. The key components of the mechanism are refractive index matching gel coated on the rear of the SPR chip and a float that presses the chip down. The refractive index matching gel made it possible to optically couple the chip and the prism of the SPR instrument easily via elastic deformation with no air bubbles. The float has an autonomous attitude control function that keeps the chip parallel in relation to the SPR instrument by employing the repulsive force of permanent magnets between the float and a float guide located in the SPR instrument. This function is realized by balancing the upward elastic force of the gel and the downward force of the float, which experiences a leveling force from the float guide. This system makes it possible to start an SPR measurement immediately after chip installation and to remove the chip immediately after the measurement with a simple and easy method that does not require any fine adjustment. Our sensor chip, which we installed using this mounting system, successfully performed an immunoassay measurement on a model antigen (spiked human-IgG) in a model real sample (non-homogenized milk) that included many kinds of interfering foreign substances without any sample pre-treatment. The ease of the chip installation/removal operation and simple measurement procedure are suitable for frequent on-site agricultural, environmental and medical testing.

  1. Training courses on neutron detection systems on the ISIS research reactor: on-site and through internet training

    Energy Technology Data Exchange (ETDEWEB)

    Lescop, B.; Badeau, G.; Ivanovic, S.; Foulon, F. [National Institute for Nuclear science and Technology French Atomic Energy and Alternative Energies Commission (CEA), Saclay Research Center, 91191 Gif-sur-Yvette (France)

    2015-07-01

    Today, the ISIS research reactor is an essential tool for the Education and Training programs organized by the National Institute for Nuclear Science and Technology (INSTN) of the CEA. In the field of nuclear instrumentation, the INSTN offers both theoretical courses and training courses on the use of neutron detection systems, taking advantage of the ISIS research reactor for the supply of a wide range of neutron fluxes. This paper describes the content of the training carried out on the use of neutron detectors and detection systems, on-site or remote. The ISIS reactor is a 700 kW open-core pool-type reactor. The facility is very flexible, since neutron detectors can be inserted into the core or its vicinity and used at different levels of power according to the needs of the course. Neutron fluxes, typically ranging from 1 to 10{sup 12} n/cm{sup 2}.s, can be obtained for the characterisation of the neutron detectors and detection systems. For monitoring the neutron density at low power, the Instrumentation and Control (I and C) system of the reactor is equipped with two detection channels, named BN1 and BN2. Each channel contains a fission chamber, type CFUL01, connected to an electronic system of type SIREX. The system works in pulse mode and provides two outputs: the counting rate and the doubling time. For high power, the I and C system is equipped with two detection channels, HN1 and HN2. Each channel contains a boron ionization chamber (type CC52) connected to an electronic system of type SIREX. The system works in current mode and has two outputs: the current and the doubling time. For each mode, the trainees can observe and measure the signal at the different stages of the electronic system with an oscilloscope. They can understand the role of each component of the detection system: detector, cable and each electronic block. The limitations of the detection modes and their operating ranges can be established from the measured signal. The trainees can also
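The doubling-time output mentioned for both detection channels can be illustrated with a short calculation: assuming the neutron population grows exponentially, the doubling time follows from two count-rate readings taken a known interval apart. This is the generic textbook relation, not the actual algorithm implemented in the SIREX electronics.

```python
import math

def doubling_time(rate_t0: float, rate_t1: float, dt_s: float) -> float:
    """Estimate the doubling time (s) from two count-rate readings taken
    dt_s seconds apart, assuming exponential growth n(t) = n0 * 2**(t/T)."""
    return dt_s * math.log(2.0) / math.log(rate_t1 / rate_t0)

# Assumed readings from a fission chamber in pulse mode: the counting rate
# doubles from 1000 to 2000 counts/s over 30 s.
t_double = doubling_time(1000.0, 2000.0, 30.0)
print(f"doubling time = {t_double:.1f} s")   # -> 30.0 s
```

A shrinking doubling time signals a faster power rise, which is why both the pulse-mode and current-mode channels expose it as a primary output.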

  2. SELF LEARNING COMPUTER TROUBLESHOOTING EXPERT SYSTEM

    OpenAIRE

    Amanuel Ayde Ergado

    2016-01-01

    In the computer domain, professionals are limited in number, while the number of institutions looking for computer professionals is high. The aim of this study is to develop a self-learning expert system that provides troubleshooting information about problems occurring in computer systems, enabling information and communication technology technicians and computer users to solve problems effectively and efficiently and to utilize computers and computer-related resources. Domain know...

  3. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  4. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).
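The combination of rules-based and role-based routing described above can be sketched as a small function that builds an approval chain from user attributes. The field names and routing rules below are illustrative assumptions, not JSC's actual policy or AutoCAR's implementation.

```python
def route_request(user: dict) -> list[str]:
    """Toy approval-chain router combining a rule (export control for
    non-US nationals) with a role (primary vs. backup approver).
    Field names and rules are illustrative assumptions only."""
    chain = []
    if user.get("nationality") != "US":
        # Rules-based step: non-US requests get an export-control review.
        chain.append("export-control-review")
    # Role-based step: fall back to the backup approver when needed.
    chain.append("primary-approver" if user.get("primary_available", True)
                 else "backup-approver")
    return chain

print(route_request({"nationality": "US"}))
print(route_request({"nationality": "FR", "primary_available": False}))
```

A production system would pull these attributes from the registration directory rather than a literal dict, but the chain-building logic has the same shape.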

  5. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
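The abstract-machine approach described above predicts a benchmark's runtime by combining a machine characterization (cost per primitive operation) with a program characterization (operation counts), so arbitrary machine/program pairs can be estimated. A minimal sketch of that merge, with assumed costs and counts:

```python
# Sketch of the abstract-machine idea: runtime ~ dot product of per-operation
# costs (machine characterization) and operation counts (program
# characterization). All numbers below are assumptions for illustration.

def predict_runtime(op_costs_ns: dict, op_counts: dict) -> float:
    """Predicted runtime in seconds for a machine/program pair."""
    return sum(op_costs_ns[op] * op_counts.get(op, 0) for op in op_costs_ns) * 1e-9

machine = {"fadd": 1.0, "fmul": 1.5, "load": 2.0, "branch": 0.5}   # ns/op, assumed
program = {"fadd": 4_000_000, "fmul": 2_000_000, "load": 6_000_000}

print(f"predicted runtime: {predict_runtime(machine, program) * 1000:.1f} ms")  # -> 19.0 ms
```

The real methodology measured the per-operation costs empirically on each target machine; here they are simply asserted.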

  6. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include:
    * Morphological Image Analysis for Computer Vision Applications.
    * Methods for Detecting of Structural Changes in Computer Vision Systems.
    * Hierarchical Adaptive KL-based Transform: Algorithms and Applications.
    * Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores.
    * A Way of Energy Analysis for Image and Video Sequence Processing.
    * Optimal Measurement of Visual Motion Across Spatial and Temporal Scales.
    * Scene Analysis Using Morphological Mathematics and Fuzzy Logic.
    * Digital Video Stabilization in Static and Dynamic Scenes.
    * Implementation of Hadamard Matrices for Image Processing.
    * A Generalized Criterion ...

  7. When does a physical system compute?

    Science.gov (United States)

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  8. '95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-12-01

    This report describes the overall project work related to the operation of mainframe computers, the management of nuclear computer codes and the nuclear computer code conversion project. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  9. Computing abstractions of nonlinear systems

    CERN Document Server

    Reißig, Gunther

    2009-01-01

    We present an efficient algorithm for computing discrete abstractions of arbitrary memory span for nonlinear discrete-time and sampled systems, in which, apart from possibly numerically integrating ordinary differential equations, the only nontrivial operation to be performed repeatedly is to distinguish empty from non-empty convex polyhedra. We also provide sufficient conditions for the convexity of attainable sets, which is an important requirement for the correctness of the method we propose. It turns out that this requirement can be met under rather mild conditions, which essentially reduce to sufficient smoothness in the case of sampled systems. Practicability of our approach in the design of discrete controllers for continuous plants is demonstrated by an example.
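The core loop of such an abstraction algorithm can be sketched for a one-dimensional sampled system: partition the state interval into cells and add a transition from one cell to another whenever the image of the first cell under the sampled map overlaps the second. The dynamics and partition below are assumptions for illustration; for a monotone map the image of an interval is again an interval, which mirrors the role the convexity conditions play in the general method.

```python
# Sketch of a discrete abstraction for a 1D sampled system x(k+1) = f(x(k)):
# partition [lo, hi] into cells; add transition i -> j whenever the image of
# cell i under f overlaps cell j. Exact images are used here because f is
# monotone; the general method over-approximates attainable sets instead.

def build_abstraction(f, lo, hi, n_cells):
    width = (hi - lo) / n_cells
    cells = [(lo + i * width, lo + (i + 1) * width) for i in range(n_cells)]
    transitions = {}
    for i, (a, b) in enumerate(cells):
        img_lo, img_hi = f(a), f(b)          # interval image of a monotone f
        transitions[i] = [j for j, (c, d) in enumerate(cells)
                          if img_lo < d and img_hi > c]   # open-overlap test
    return transitions

# Assumed sampled dynamics: a contraction toward the fixed point x = 0.5.
trans = build_abstraction(lambda x: 0.5 + 0.5 * (x - 0.5), 0.0, 1.0, 4)
print(trans)
```

Every path in the resulting transition system over-approximates some trajectory of the sampled dynamics, which is what makes the abstraction usable for controller synthesis.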

  10. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  11. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  12. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. · Enables readers to address a variety of security threats to embedded hardware and software; · Describes design of secure wireless sensor networks, to address secure authen...

  13. Using Expert Systems For Computational Tasks

    Science.gov (United States)

    Duke, Eugene L.; Regenie, Victoria A.; Brazee, Marylouise; Brumbaugh, Randal W.

    1990-01-01

    Transformation technique enables inefficient expert systems to run in real time. Paper suggests use of knowledge compiler to transform knowledge base and inference mechanism of expert-system computer program into conventional computer program. Main benefit, faster execution and reduced processing demands. In avionic systems, transformation reduces need for special-purpose computers.
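
The transformation idea can be illustrated with a toy "knowledge compiler" that turns a declarative rule base into straight-line conventional code, removing the run-time inference engine. The rule format and the rules themselves below are invented for illustration, not taken from the NASA program:

```python
RULES = [
    # (condition over the `facts` dict, fact asserted when it fires);
    # both rules are hypothetical examples.
    ("facts['altitude'] < 500 and facts['descending']", "terrain_warning"),
    ("facts['terrain_warning'] and not facts['gear_down']", "pull_up_alert"),
]

def compile_rules(rules):
    """Emit one forward-chaining pass as straight-line Python source.

    The generated function evaluates each rule once, in order, so rules must
    be listed with conclusions preceding the rules that use them (the
    ordering a real knowledge compiler would derive automatically).
    """
    lines = ["def run(facts):"]
    for cond, conclusion in rules:
        lines.append(f"    facts[{conclusion!r}] = bool({cond})")
    lines.append("    return facts")
    namespace = {}
    exec("\n".join(lines), namespace)   # compile once, run many times
    return namespace["run"]

run = compile_rules(RULES)
out = run({"altitude": 300, "descending": True, "gear_down": False})
print(out["pull_up_alert"])   # True
```

Each call to `run` is then an ordinary function call with no pattern matching or agenda management, which is the source of the speedup the abstract describes.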

  14. Software For Monitoring VAX Computer Systems

    Science.gov (United States)

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy

    1994-01-01

    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  15. Computer Aided Control System Design (CACSD)

    Science.gov (United States)

    Stoner, Frank T.

    1993-01-01

    The design of modern aerospace systems relies on the efficient utilization of computational resources and the availability of computational tools to provide accurate system modeling. This research focuses on the development of a computer aided control system design application which provides a full range of stability analysis and control design capabilities for aerospace vehicles.

  16. Impact of new computing systems on finite element computations

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  17. Transient Faults in Computer Systems

    Science.gov (United States)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
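
The certification-trail idea can be sketched for a concrete task such as sorting: the first execution produces both the answer and a trail (here, the sorting permutation), and a second, simpler execution certifies the answer against the trail, so an output corrupted by a transient fault is caught. This is a minimal illustration of the general approach, not the authors' implementation:

```python
def sort_with_trail(xs):
    """First execution: sort, and also emit a certification trail
    (here, the permutation that sorts the input)."""
    trail = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in trail], trail

def certify(xs, result, trail):
    """Second, simpler execution: certify the result using the trail.
    An output corrupted by a transient fault fails at least one check."""
    if sorted(trail) != list(range(len(xs))):          # trail is a permutation
        return False
    if result != [xs[i] for i in trail]:               # result matches trail
        return False
    return all(a <= b for a, b in zip(result, result[1:]))  # result is ordered

data = [3, 1, 2]
result, trail = sort_with_trail(data)
print(result, certify(data, result, trail))   # [1, 2, 3] True
print(certify(data, [9, 2, 3], trail))        # False
```

The checks are independent of any particular fault model, mirroring the abstract's point that coverage does not need to be tuned to an error model.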

  18. Computer system reliability safety and usability

    CERN Document Server

    Dhillon, BS

    2013-01-01

    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems. After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  19. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  20. Conflict Resolution in Computer Systems

    Directory of Open Access Journals (Sweden)

    G. P. Mojarov

    2015-01-01

    Full Text Available A conflict situation in computer systems (CS) is the phenomenon arising when processes have multi-access to shared resources and none of the involved processes can proceed, because each is waiting for resources locked by other processes which, in turn, are in a similar position. The conflict situation is also called a deadlock, which has a quite clear impact on the CS state. Finding practical algorithms to resolve such impasses is of significant applied importance for ensuring the information security of the computing process, and the presented article is therefore aimed at solving this relevant problem. The gravity of the situation depends on the types of processes in a deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the method for preventing impasses used in many modern operating systems, based on preliminary planning of the resources required by a process, is obvious: the waiting time can be overlong. The prevention method based on interrupting a process and deallocating its resources is very specific and of little effect when there is a set of polytypic resources requested dynamically. The drawback of another method, preventing deadlock by ordering resources, consists in the restriction of possible sequences of resource requests. A different way of "struggling" against deadlocks is the avoidance of impasses, in which appearing impasses are predicted in advance. There are known methods [1,4,5] to define and prevent conditions under which deadlocks may occur. These use preliminary information on what resources a running process can request. Before allocating a free resource to a process, a test of the "safety" condition of the resulting state is performed. A state is "safe" if impasses cannot occur in the future as a result of the resource allocation; otherwise the state is considered "hazardous", and the resource allocation is postponed. The obvious
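
The "safe state" test described in the abstract is classically realized by the Banker's algorithm: a state is safe if some order exists in which every process can still obtain its remaining maximum demand and terminate. A minimal sketch (illustrative, not from the article):

```python
def is_safe(available, allocation, maximum):
    """Banker's-style safety test: True iff some completion order exists in
    which every process can still obtain its remaining maximum demand."""
    n, m = len(allocation), len(available)
    work = list(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):               # process i can run to completion
                    work[j] += allocation[i][j]  # and release what it holds
                finished[i] = True
                progress = True
    return all(finished)

# Classic textbook state (5 processes, 3 resource types): safe.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], allocation, maximum))   # True
print(is_safe([0], [[1]], [[2]]))                # False
```

A request is granted only if provisionally applying it still leaves `is_safe` true; otherwise, as the abstract says, the allocation is postponed.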

  1. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  2. The Remote Computer Control (RCC) system

    Science.gov (United States)

    Holmes, W.

    1980-01-01

    A system to remotely control job flow on a host computer from any touchtone telephone is briefly described. Using this system a computer programmer can submit jobs to a host computer from any touchtone telephone. In addition the system can be instructed by the user to call back when a job is finished. Because of this system every touchtone telephone becomes a conversant computer peripheral. This system known as the Remote Computer Control (RCC) system utilizes touchtone input, touchtone output, voice input, and voice output. The RCC system is microprocessor based and is currently using the INTEL 80/30 microcomputer. Using the RCC system a user can submit, cancel, and check the status of jobs on a host computer. The RCC system peripherals consist of a CRT for operator control, a printer for logging all activity, mass storage for the storage of user parameters, and a PROM card for program storage.

  3. On-Site Energy Management by Integrating Campus Buildings and Optimizing Local Energy Systems, Case Study of the Campus in Finland

    Directory of Open Access Journals (Sweden)

    Genku Kayo

    2016-12-01

    Full Text Available This research work describes a study of the potential impact of energy improvements to existing campus buildings through on-site energy management and operational strategies. The buildings in focus on the campus were mainly built in the 1960s, so it is time to carry out renovation work. In connection with the renovations, the aim is to improve the energy efficiency of the buildings and to develop the functionality of the properties to meet current requirements. Thus, in this study, the potential of on-site energy generation and sharing in a cluster of campus buildings in Finland was studied. By means of an optimisation method, the optimal combined heat and power (CHP) capacity distribution and the operation mode minimizing annual primary energy consumption were simulated. The results show that integrating the buildings is advantageous for on-site energy management, yielding a 23% reduction in primary energy compared with the current situation. Consequently, integrating buildings and optimizing on-site energy management can be an effective strategy for minimizing primary energy consumption. Furthermore, a study on improving the operation strategies of the building service systems, considering the current space use in the buildings, clarified that a reduction of up to 13% of total energy use can be expected. The research work also proposes a way of providing environmental information to increase awareness of building energy usage on the campus.
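
As a rough illustration of the kind of optimization described (choosing a CHP capacity and a heat-led operation mode to minimize primary energy), the sketch below enumerates candidate capacities over a toy load profile. All numbers (loads, efficiencies, primary-energy factors) are invented for illustration and are not from the study:

```python
# Minimal sketch of heat-led CHP sizing by enumeration; all parameters
# below are illustrative assumptions, not values from the paper.

HEAT_LOAD = [40, 60, 80, 60, 30, 20]   # kW, sample periods
ELEC_LOAD = [30, 35, 40, 35, 30, 25]   # kW
ETA_TH, ETA_EL = 0.5, 0.35             # CHP thermal / electrical efficiency
PEF_GAS, PEF_GRID = 1.0, 2.0           # primary-energy factors
BOILER_ETA = 0.9

def primary_energy(chp_kw_th):
    """Total primary energy over the profile for a given CHP thermal capacity."""
    total = 0.0
    for q, e in zip(HEAT_LOAD, ELEC_LOAD):
        chp_heat = min(q, chp_kw_th)           # heat-led dispatch
        fuel = chp_heat / ETA_TH               # CHP fuel input
        chp_elec = fuel * ETA_EL               # co-generated electricity
        boiler_fuel = (q - chp_heat) / BOILER_ETA
        grid = max(e - chp_elec, 0.0)          # grid imports; exports ignored
        total += PEF_GAS * (fuel + boiler_fuel) + PEF_GRID * grid
    return total

# Brute-force search over candidate capacities (0..100 kW_th in 5 kW steps).
best = min(range(0, 101, 5), key=primary_energy)
print(best, round(primary_energy(best), 1))
```

The actual study formulates this as a linear program over the building cluster; the enumeration above only shows the trade-off being optimized.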

  4. Implementation of Computational Electromagnetic on Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Now the new generation of technology could raise the bar for distributed computing. It has become a trend to solve computational electromagnetics problems on distributed systems with parallel computing techniques. In this paper, we analyze the parallel characteristics of the distributed system and the possibility of setting up a tightly coupled distributed system using the LAN in our lab. An analysis of the performance of different computational methods, such as FEM, MOM, FDTD and the finite difference method, is given. Our work on setting up a distributed system and the performance of the test bed are also included. Finally, we mention the implementation of one of our computational electromagnetics codes.

  5. Cybersecurity of embedded computers systems

    OpenAIRE

    Carlioz, Jean

    2016-01-01

    Several articles have recently raised the issue of the computer security of commercial flights, evoking "the connected aircraft, hackers' target", "Wi-Fi on planes, an open door for hackers?", or "Can you hack the computer of an Airbus or a Boeing?". The feared scenario consists of a takeover of operational aircraft software that intentionally causes an accident. Moreover, several computer security experts have lately announced they had detected flaws in embedded syste...

  6. Infarct size in primary angioplasty without on-site cardiac surgical backup versus transferal to a tertiary center: a single photon emission computed tomography study

    Energy Technology Data Exchange (ETDEWEB)

    Knaapen, Paul; Rossum, Albert C. van [VU University Medical Center, Department of Cardiology, Amsterdam (Netherlands); Mulder, Maarten de; Peels, Hans O.; Cornel, Jan H.; Umans, Victor A.W.M. [Medical Center Alkmaar, Department of Cardiology, Alkmaar (Netherlands); Zant, Friso M. van der [Medical Center Alkmaar, Department of Nuclear Medicine, Alkmaar (Netherlands); Twisk, Jos W.R. [VU University Medical Center, Department of Clinical Epidemiology and Biostatistics, Amsterdam (Netherlands)

    2009-02-15

    Primary percutaneous coronary intervention (PCI) performed in large community hospitals without cardiac surgery back-up facilities (off-site) reduces door-to-balloon time compared with emergency transferal to tertiary interventional centers (on-site). The present study was performed to explore whether off-site PCI for acute myocardial infarction results in reduced infarct size. One hundred twenty-eight patients with acute ST-segment elevation myocardial infarction were randomly assigned to undergo primary PCI at the off-site center (n = 68) or to transferal to an on-site center (n = 60). Three days after PCI, ⁹⁹ᵐTc-sestamibi SPECT was performed to estimate infarct size. Off-site PCI significantly reduced door-to-balloon time compared with on-site PCI (94 ± 54 versus 125 ± 59 min, respectively, p < 0.01), although symptoms-to-treatment time was only insignificantly reduced (257 ± 211 versus 286 ± 146 min, respectively, p = 0.39). Infarct size was comparable between treatment centers (16 ± 15 versus 14 ± 12%, respectively, p = 0.35). Multivariate analysis revealed that TIMI 0/1 flow grade at initial coronary angiography (OR 3.125, 95% CI 1.17-8.33, p = 0.023), anterior wall localization of the myocardial infarction (OR 3.44, 95% CI 1.38-8.55, p < 0.01), and development of pathological Q-waves (OR 5.07, 95% CI 2.10-12.25, p < 0.01) were independent predictors of an infarct size > 12%. Off-site PCI reduces door-to-balloon time compared with transferal to a remote on-site interventional center but does not reduce infarct size. Instead, pre-PCI TIMI 0/1 flow, anterior wall infarct localization, and development of Q-waves are more important predictors of infarct size. (orig.)

  7. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu

    2015-01-01

    This book contains the extended version of the works that have been presented and discussed in the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014) held during April 18-20, 2014 in Kolkata, India. The symposium has been jointly organized by the AGH University of Science & Technology, Cracow, Poland and University of Calcutta, India. The Volume I of this double-volume book contains fourteen high quality book chapters in three different parts. Part 1 is on Pattern Recognition and it presents four chapters. Part 2 is on Imaging and Healthcare Applications contains four more book chapters. The Part 3 of this volume is on Wireless Sensor Networking and it includes as many as six chapters. Volume II of the book has three Parts presenting a total of eleven chapters in it. Part 4 consists of five excellent chapters on Software Engineering ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  8. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in the cloud, similar to IBM's Quantum Experience today. Clients can remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way towards scalable blind quantum computation.

  9. Computer Simulation and Computability of Biological Systems

    CERN Document Server

    Baianu, I C

    2004-01-01

    The ability to simulate a biological organism by employing a computer is related to the ability of the computer to calculate the behavior of such a dynamical system, or the "computability" of the system. However, the two questions of computability and simulation are not equivalent. Since the question of computability can be given a precise answer in terms of recursive functions, automata theory and dynamical systems, it will be appropriate to consider it first. The more elusive question of adequate simulation of biological systems by a computer will then be addressed, and a possible connection between the two answers will be considered, as follows. A symbolic, algebraic-topological "quantum computer" (as introduced in Baianu, 1971b) is here suggested to provide one such potential means for adequate biological simulations, based on QMV Quantum Logic and meta-Categorical Modeling as, for example, in a QMV-based Quantum Topos (Baianu and Glazebrook, 2004).

  10. The Computational Complexity of Evolving Systems

    NARCIS (Netherlands)

    Verbaan, P.R.A.

    2006-01-01

    Evolving systems are systems that change over time. Examples of evolving systems are computers with soft- and hardware upgrades and dynamic networks of computers that communicate with each other, but also colonies of cooperating organisms or cells within a single organism. In this research, several m

  11. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  12. ACSES, An Automated Computer Science Education System.

    Science.gov (United States)

    Nievergelt, Jurg; And Others

    A project was started to accommodate the large and increasing enrollment in introductory computer science courses by automating them with a subsystem for computer science instruction on the PLATO IV Computer-Based Education system at the University of Illinois. The subsystem was intended to be used for supplementary instruction at the University…

  13. 50 kW on-site concentrating solar photovoltaic power system. Phase I: design. Final report, 1 June 1978-28 February 1979

    Energy Technology Data Exchange (ETDEWEB)

    Pittman, P F

    1979-03-30

    This contract is part of a three phase program to design, fabricate, and operate a solar photovoltaic electric power system with concentrating optics. The system will be located beside a Local Operating Headquarters of the Georgia Power Company in Atlanta, Georgia and will provide part of the power for the on-site load. Fresnel lens concentrators will be used in 2-axis tracking arrays to focus solar energy onto silicon solar cells producing a peak power output of 56 kW. The present contract covers Phase I which has as its objective the complete design of the system and necessary subsystems.

  14. An optimization methodology for the design of renewable energy systems for residential net zero energy buildings with on-site heat production

    DEFF Research Database (Denmark)

    Milan, Christian; Bojesen, Carsten; Nielsen, Mads Pagh

    2011-01-01

    The concept of net zero energy buildings (NZEB) has received increased attention throughout the last years. A well adapted and optimized design of the energy supply system is crucial for the performance of such buildings. This paper aims at developing a method for the optimal sizing of renewable energy supply systems for residential NZEB involving on-site production of heat and electricity in combination with electricity exchanged with the public grid. The model is based on linear programming and determines the optimal capacities for each relevant supply technology in terms of the overall system...

  15. Task allocation in a distributed computing system

    Science.gov (United States)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
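
A common static technique for the load-equalization goal described above is the longest-processing-time (LPT) greedy heuristic: assign the next-largest task to the currently least-loaded processor. A minimal sketch (illustrative, not from the paper):

```python
import heapq

def allocate(tasks, n_processors):
    """Static greedy allocation (LPT heuristic): place the next-largest task
    on the currently least-loaded processor to balance the computing load."""
    heap = [(0.0, p) for p in range(n_processors)]     # (load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_processors)}
    for cost, name in sorted(tasks, reverse=True):     # largest tasks first
        load, p = heapq.heappop(heap)                  # least-loaded processor
        assignment[p].append(name)
        heapq.heappush(heap, (load + cost, p))
    return assignment

tasks = [(5, "a"), (4, "b"), (3, "c"), (3, "d"), (3, "e")]  # (cost, name)
print(allocate(tasks, 2))   # {0: ['a', 'd'], 1: ['b', 'e', 'c']}
```

LPT is a static policy; a dynamic scheme would re-run the same least-loaded choice at run time as tasks arrive and complete.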

  16. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  17. Comparing the architecture of Grid Computing and Cloud Computing systems

    Directory of Open Access Journals (Sweden)

    Abdollah Doavi

    2015-09-01

    Full Text Available Grid Computing, or computationally connected networks, is a new network model that makes massive computational operations possible using connected resources; in fact, it is a new generation of distributed networks. Grid architecture is recommended because the widespread nature of the Internet creates an exciting environment, called the 'Grid', for building a scalable system that is high-performance, generalized and secure. The central architecture serving this goal is a firmware named GridOS. The term 'cloud computing' means the development and deployment of Internet-based computing technology. This is a style of computing in which IT-related capabilities are offered as a service, allowing users to access technology-based services on the Internet without specific knowledge of, or control over, the IT infrastructure that supports them. In the paper, general explanations are given of the Grid and Cloud systems. Then the components and services provided by these systems are examined, along with their security.

  18. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    The book at hand explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  19. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of computer-system simulator is software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems, along with selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  20. Formal Protection Architecture for Cloud Computing System

    Institute of Scientific and Technical Information of China (English)

    Yasha Chen; Jianpeng Zhao; Junmao Zhu; Fei Yan

    2014-01-01

    Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, the formalism method is avoided, and this paper chooses the process algebra of Communicating Sequential Processes (CSP).

  1. Computer Literacy in a Distance Education System

    Science.gov (United States)

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with the seven computer-usage skills of the ICDL. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  2. Computer-Controlled, Motorized Positioning System

    Science.gov (United States)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1994-01-01

    Computer-controlled, motorized positioning system developed for use in robotic manipulation of samples in a custom-built secondary-ion mass spectrometry (SIMS) system. Positions samples repeatably and accurately, even during analysis, in three linear orthogonal coordinates and one angular coordinate, under manual local control, microprocessor-based local control, or remote control by computer via the general-purpose interface bus (GPIB).

  3. Advanced Hybrid Computer Systems. Software Technology.

    Science.gov (United States)

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what...automatic patching software is available as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer...compiler software. The problem of how software would interface with the hybrid system is also presented.

  4. Biomolecular computing systems: principles, progress and potential.

    Science.gov (United States)

    Benenson, Yaakov

    2012-06-12

    The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

  5. Design and Elementary Evaluation of a Highly-Automated Fluorescence-Based Instrument System for On-Site Detection of Food-Borne Pathogens

    Directory of Open Access Journals (Sweden)

    Zhan Lu

    2017-02-01

    Full Text Available A simple, highly-automated instrument system for on-site, fluorescence-based detection of food-borne pathogens was designed, fabricated, and preliminarily tested in this paper. The corresponding method was proved effective in our previous studies. The system uses a light-emitting diode (LED) to excite fluorescent labels and a spectrometer to record the fluorescence signal from samples. A rotation stage for positioning and switching samples was designed for high-throughput detection, handling up to ten samples in a single run. We also developed LabVIEW-based software for data acquisition, processing, and control of the whole system. In a test using a pure quantum dot (QD) solution as a standard sample, detection results from this home-made system correlated strongly with those from a well-commercialized product, with slightly better reproducibility. In tests on three typical food-borne pathogens, the fluorescence signals recorded by the system were proportional to sample concentration, with a satisfactory limit of detection (LOD) of nearly 10²–10³ CFU·mL⁻¹ in food samples. Additionally, the instrument system is low-cost and easy to use, showing promising potential for on-site rapid detection of food-borne pathogens.

  6. Design and Elementary Evaluation of a Highly-Automated Fluorescence-Based Instrument System for On-Site Detection of Food-Borne Pathogens.

    Science.gov (United States)

    Lu, Zhan; Zhang, Jianyi; Xu, Lizhou; Li, Yanbin; Chen, Siyu; Ye, Zunzhong; Wang, Jianping

    2017-02-23

    A simple, highly-automated instrument system for on-site, fluorescence-based detection of food-borne pathogens was designed, fabricated, and preliminarily tested in this paper. The corresponding method was proved effective in our previous studies. The system uses a light-emitting diode (LED) to excite fluorescent labels and a spectrometer to record the fluorescence signal from samples. A rotation stage for positioning and switching samples was designed for high-throughput detection, handling up to ten samples in a single run. We also developed LabVIEW-based software for data acquisition, processing, and control of the whole system. In a test using a pure quantum dot (QD) solution as a standard sample, detection results from this home-made system correlated strongly with those from a well-commercialized product, with slightly better reproducibility. In tests on three typical food-borne pathogens, the fluorescence signals recorded by the system were proportional to sample concentration, with a satisfactory limit of detection (LOD) of nearly 10²–10³ CFU·mL⁻¹ in food samples. Additionally, the instrument system is low-cost and easy to use, showing promising potential for on-site rapid detection of food-borne pathogens.
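
The 10²–10³ CFU·mL⁻¹ detection limit reported above is commonly estimated from a fluorescence calibration curve via the 3.3·σ/slope convention. The record does not state which procedure the authors used, so the sketch below, with invented blank readings and an invented calibration slope, only illustrates that convention:

```python
import statistics

def estimate_lod(blank_readings, slope):
    """Estimate a limit of detection as 3.3 * stdev(blanks) / slope.

    This is the common calibration-curve convention; the paper's exact
    LOD procedure is not given, so treat this as an illustration only.
    """
    sigma_blank = statistics.stdev(blank_readings)
    return 3.3 * sigma_blank / slope

# Hypothetical fluorescence counts for blank samples, and a made-up
# calibration slope in counts per (CFU/mL).
blanks = [101.0, 99.5, 100.5, 98.0, 101.5]
slope = 0.004
lod = estimate_lod(blanks, slope)  # in CFU/mL
```

With these invented numbers the estimate lands near 10³ CFU·mL⁻¹, the same order of magnitude the record reports.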

  7. Automated Diversity in Computer Systems

    Science.gov (United States)

    2005-09-01

    P ( EBM I ) = Me2a ; P (ELMP ) = ps and P (EBMP ) = ps. We are interested in the probability of a successful branch (escape) out of a sequence of n...reference is still legal. Both can generate false positives, although CRED is less computationally expensive. The common theme in all these

  8. 77 FR 67399 - Trim Systems Operating Corp., a Subsidiary of Commercial Vehicle Group, Inc., Including On-Site...

    Science.gov (United States)

    2012-11-09

    ... Employment and Training Administration Trim Systems Operating Corp., a Subsidiary of Commercial Vehicle Group... of Trim Systems Operating Corp., a subsidiary of Commercial Vehicle Group, Inc., Statesville, North... applicable to TA-W-81,393 is hereby issued as follows: All workers of Trim Systems Operating Corp.,...

  9. Laser Imaging Systems For Computer Vision

    Science.gov (United States)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, extending the reach of computer "intelligence" to inspection, analysis, and decision-making in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of optical methods and computer software, some of the most promising interferometric, projection, and diffraction systems are reviewed, with discussion of our present results and of their potential in precise 3D computer vision.

  10. Computer Bits: The Ideal Computer System for Your Center.

    Science.gov (United States)

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  11. An Optical Tri-valued Computing System

    Directory of Open Access Journals (Sweden)

    Junjie Peng

    2014-03-01

    Full Text Available A new optical computing experimental system is presented. Designed around tri-valued logic, the system is built as a photoelectric hybrid computer system with clear advantages over its electronic counterparts: the tri-valued logic makes it more powerful in information processing than binary-logic systems, and its optical character makes it far more capable of processing huge volumes of data than electronic computers. The system comprises an electronic part and an optical part. The electronic part consists of a PC and two embedded systems used for data input/output, monitoring, synchronous control, and user data combination and separation. The optical part includes three components: an optical encoder, a logic calculator, and a decoder. It is mainly responsible for encoding users' requests into tri-valued optical information, computing and processing the requests, and decoding the tri-valued optical information back into binary electronic information. Experimental results show that the system processes optical information correctly, which demonstrates the feasibility and correctness of the optical computing system.
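
The claim that tri-valued logic carries more information per symbol than binary can be made concrete. The record does not give the system's actual truth tables, so the sketch below assumes the standard Kleene-style min/max/complement operators over {0, 1, 2}:

```python
import math

# Kleene-style three-valued logic over {0, 1, 2}
# (0 = false, 1 = unknown/intermediate, 2 = true). These min/max/
# complement truth tables are the textbook choice, not necessarily
# the ones used by the optical system described in the record.
def t_and(a, b):
    return min(a, b)

def t_or(a, b):
    return max(a, b)

def t_not(a):
    return 2 - a

# Each tri-valued digit ("trit") carries log2(3) ≈ 1.585 bits, which is
# the per-symbol density advantage over binary the abstract alludes to.
bits_per_trit = math.log2(3)
```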

  12. Hybrid Systems: Computation and Control.

    Science.gov (United States)

    2007-11-02

    elbow) and a pinned first joint (shoulder) (see Figure 2); it is termed an underactuated system since it is a mechanical system with fewer...Montreal, PQ, Canada, 1998. [10] M. W. Spong. Partial feedback linearization of underactuated mechanical systems. In Proceedings, IROS, pages 314-321...control mechanism and search for optimal combinations of control variables. Besides the nonlinear and hybrid nature of powertrain systems, hardware

  13. MTA Computer Based Evaluation System.

    Science.gov (United States)

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  15. Computer Jet-Engine-Monitoring System

    Science.gov (United States)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  16. A computational system for a Mars rover

    Science.gov (United States)

    Lambert, Kenneth E.

    1989-01-01

    This paper presents an overview of an onboard computing system that can be used for meeting the computational needs of a Mars rover. The paper begins by presenting an overview of some of the requirements which are key factors affecting the architecture. The rest of the paper describes the architecture. Particular emphasis is placed on the criteria used in defining the system and how the system qualitatively meets the criteria.

  18. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry, Jr.; Lau, Sonie

    1989-01-01

    The evolution of intelligent computation systems is discussed starting with the Spaceborne VHSIC Multiprocessor System (SVMS). The SVMS is a six-processor system designed to provide at least a 100-fold increase in both numeric and symbolic processing over the i386 uniprocessor. The significant system performance parameters necessary to achieve the performance increase are discussed.

  19. Computation of Weapons Systems Effectiveness

    Science.gov (United States)

    2013-09-01

    Recoverable notation from the report: VOx and VOz are the initial weapon release velocity components along the x- and z-axes, h is the release altitude, g is gravitational acceleration, and TOF is the time of flight. The recoverable equations are: Impact Velocity (x-axis), Vix = VOx (3.4); Impact Velocity (z-axis), Viz = VOz + (g * TOF) (3.5); Impact Velocity, Vi = √(Vix² + Viz²) (3.6). ...compute the ballistic partials to examine the effects that varying h, VOx and VOz have on RB, using equations of the form ∂RB/∂h = New RB − Old RB
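
Equations (3.4)-(3.6) quoted in this record can be evaluated directly. Only the formulas come from the record; the release conditions below are invented for illustration:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2 (SI units assumed)

def impact_velocity(v0x, v0z, tof):
    """Evaluate equations (3.4)-(3.6) from the record.

    v0x, v0z: initial weapon release velocity components (x and z axes)
    tof: time of flight
    """
    vix = v0x                  # (3.4): x-component unchanged in free fall
    viz = v0z + G * tof        # (3.5): z-component grows by g * TOF
    vi = math.hypot(vix, viz)  # (3.6): impact speed magnitude
    return vix, viz, vi

# Hypothetical release conditions: 250 m/s forward, 30 m/s downward,
# 8 s time of flight.
vix, viz, vi = impact_velocity(v0x=250.0, v0z=30.0, tof=8.0)
```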

  20. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while also reducing the cost of doing business. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible, reliable and cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier, with no up-front charges but pay-per-use flexible payme...
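
The pay-per-use pricing the abstract describes is often contrasted with up-front hosting through a break-even calculation. All rates below are invented; this is a generic sketch, not the paper's cost model:

```python
def cloud_cost(hours, rate_per_hour):
    """Pure pay-per-use pricing: no up-front charge, linear in usage."""
    return hours * rate_per_hour

def on_premise_cost(upfront, hours, run_rate_per_hour):
    """Owned hardware: capital cost up front plus a lower running rate."""
    return upfront + hours * run_rate_per_hour

# Invented rates: $0.10/h in the cloud versus $5000 up front plus
# $0.02/h on premise. Break-even usage is where the two models meet.
rate, upfront, run_rate = 0.10, 5000.0, 0.02
break_even_hours = upfront / (rate - run_rate)
```

Below the break-even usage the pay-per-use model is cheaper; above it, owning the hardware wins, which is the trade-off cost models like the one in this record formalize.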

  1. The university computer network security system

    Institute of Scientific and Technical Information of China (English)

    张丁欣

    2012-01-01

    With the development of the times and advances in technology, computer network technology has penetrated all aspects of people's lives; it plays an increasingly important role and is an important tool for information exchange. Colleges and universities are the cradles in which new technologies are cultivated, and computer networks are the nourishment of those emerging technologies; as institutions of higher learning, they should therefore pay attention to the construction of computer network security systems.

  2. QUBIT DATA STRUCTURES FOR ANALYZING COMPUTING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Vladimir Hahanov

    2014-11-01

    Full Text Available Qubit models and methods are proposed for improving the performance of software and hardware for analyzing digital devices by increasing the dimension of the data structures and memory. The basic concepts, terminology and definitions necessary for implementing quantum computing in the analysis of virtual computers are introduced. Results concerning the design and modeling of computer systems in cyberspace based on a two-component structure are presented.

  3. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful

    2017-01-01

    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  4. Position-sensitive detector system OBI for High Resolution X-Ray Powder Diffraction using on-site readable image plates

    Science.gov (United States)

    Knapp, M.; Joco, V.; Baehtz, C.; Brecht, H. H.; Berghaeuser, A.; Ehrenberg, H.; von Seggern, H.; Fuess, H.

    2004-04-01

    A one-dimensional detector system has been developed using image plates. The detector is working in transmission mode or Debye-Scherrer geometry and is on-site readable which reduces the effort for calibration. It covers a wide angular range up to 110° and shows narrow reflection half-widths depending on the capillary diameter. The acquisition time is in the range of minutes and the data quality allows for reliable Rietveld refinement of complicated structures, even in multi-phase samples. The detector opens a wide field of new applications in kinetics and temperature resolved measurements.

  5. Position-sensitive detector system OBI for High Resolution X-Ray Powder Diffraction using on-site readable image plates

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, M. E-mail: mknapp@tu-darmstadt.de; Joco, V.; Baehtz, C.; Brecht, H.H.; Berghaeuser, A.; Ehrenberg, H.; Seggern, H. von; Fuess, H

    2004-04-01

    A one-dimensional detector system has been developed using image plates. The detector is working in transmission mode or Debye-Scherrer geometry and is on-site readable which reduces the effort for calibration. It covers a wide angular range up to 110 deg. and shows narrow reflection half-widths depending on the capillary diameter. The acquisition time is in the range of minutes and the data quality allows for reliable Rietveld refinement of complicated structures, even in multi-phase samples. The detector opens a wide field of new applications in kinetics and temperature resolved measurements.
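
Powder patterns from a Debye-Scherrer detector like OBI are interpreted through Bragg's law, n·λ = 2d·sin θ. The wavelength below is an assumed synchrotron value, not one taken from the record:

```python
import math

def d_spacing(two_theta_deg, wavelength_angstrom):
    """Bragg's law, n*lambda = 2*d*sin(theta), solved for d with n = 1.

    two_theta_deg is the diffraction angle 2-theta in degrees, as
    recorded along a Debye-Scherrer detector arc.
    """
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

# The detector covers 2-theta up to 110 degrees. With an assumed
# wavelength of 0.7 Angstrom, the smallest resolvable d-spacing at the
# high-angle end of the arc is:
d_min = d_spacing(110.0, 0.7)  # in Angstrom
```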

  6. Optimization of Operating Systems towards Green Computing

    Directory of Open Access Journals (Sweden)

    Appasami Govindasamy

    2011-01-01

    Full Text Available Green computing is one of the emerging computing technologies in the field of computer science and engineering, providing Green Information Technology (Green IT). It is mainly used to protect the environment and optimize energy consumption. Green computing also refers to environmentally sustainable computing. In recent years, companies in the computer industry have come to realize that going green is in their best interest, both in terms of public relations and reduced costs. Information and communication technology (ICT) has now become an important department for the success of any organization. Making IT "green" can not only save money but help save our world by reducing or eliminating wasteful practices. In this paper we focus on green computing by optimizing operating systems and the scheduling of hardware resources. The objectives of green computing are the reduction of human effort, electrical energy, time and cost, without polluting the environment while developing software. Operating system (OS) optimization is very important for green computing, because the OS is the bridge between hardware components and application software. The important steps for energy-efficient usage by green computing users are also discussed in this paper.

  7. Resilience assessment and evaluation of computing systems

    CERN Document Server

    Wolter, Katinka; Vieira, Marco

    2012-01-01

    The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples,
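
Dependability of the kind this book covers is often summarized by steady-state availability, MTBF/(MTBF + MTTR). The figures below are illustrative, not taken from the book:

```python
HOURS_PER_YEAR = 8760

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures: a system failing on average every 1000 h and
# taking 2 h to recover.
a = availability(1000.0, 2.0)
downtime_per_year = (1.0 - a) * HOURS_PER_YEAR  # expected hours down
```

Raising MTBF (fault tolerance) or cutting MTTR (fast recovery) both raise availability, which is one way the resilience properties discussed in the book become quantitative.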

  8. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  9. Rendezvous Facilities in a Distributed Computer System

    Institute of Scientific and Technical Information of China (English)

    廖先Zhi; 金兰

    1995-01-01

    The distributed computer system described in this paper is a set of computer nodes interconnected in an interconnection network via packet-switching interfaces.The nodes communicate with each other by means of message-passing protocols.This paper presents the implementation of rendezvous facilities as high-level primitives provided by a parallel programming language to support interprocess communication and synchronization.

  10. 75 FR 28655 - Rexam Closure Systems, Inc. a Subsidiary of Rexam PLC Including On-Site Leased Workers From...

    Science.gov (United States)

    2010-05-21

    ...) Wages Are Paid Through Owens Illinois Manufacturing Hamlet, NC; Amended Certification Regarding... Closure Systems, Inc., a subsidiary of Rexam PLC, Hamlet, North Carolina. The notice was published in the..., Hamlet, North Carolina, who became totally or partially separated from employment on or after November 10...

  11. 75 FR 38127 - Visteon Systems, LLC North Penn Plant Electronics Products Group Including On-Site Leased Workers...

    Science.gov (United States)

    2010-07-01

    ... Employment and Training Administration Visteon Systems, LLC North Penn Plant Electronics Products Group... Adjustment Assistance and Alternative Trade Adjustment Assistance In accordance with Section 223 of the Trade Act of 1974 (19 U.S.C. 2273), and Section 246 of the Trade Act of 1974 (26 U.S.C. 2813), as...

  12. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  13. Comparison of contaminants of emerging concern removal, discharge, and water quality hazards among centralized and on-site wastewater treatment system effluents receiving common wastewater influent.

    Science.gov (United States)

    Du, Bowen; Price, Amy E; Scott, W Casan; Kristofco, Lauren A; Ramirez, Alejandro J; Chambliss, C Kevin; Yelderman, Joe C; Brooks, Bryan W

    2014-01-01

    A comparative understanding of effluent quality of decentralized on-site wastewater treatment systems, particularly for contaminants of emerging concern (CECs), remains less understood than effluent quality from centralized municipal wastewater treatment plants. Using a novel experimental facility with common influent wastewater, effluent water quality from a decentralized advanced aerobic treatment system (ATS) and a typical septic treatment system (STS) coupled to a subsurface flow constructed wetland (WET) were compared to effluent from a centralized municipal treatment plant (MTP). The STS did not include soil treatment, which may represent a system not functioning properly. Occurrence and discharge of a range of CECs were examined using isotope dilution liquid chromatography-tandem mass spectrometry during fall and winter seasons. Conventional parameters, including total suspended solids, carbonaceous biochemical oxygen demand and nutrients were also evaluated from each treatment system. Water quality of these effluents was further examined using a therapeutic hazard modeling approach. Of 19 CECs targeted for study, the benzodiazepine pharmaceutical diazepam was the only CEC not detected in all wastewater influent and effluent samples over two sampling seasons. Diphenhydramine, codeine, diltiazem, atenolol, and diclofenac exhibited significant (ptreatment systems was generally not influenced by season. However, significant differences (pwater quality indicators were observed among the various treatment technologies. For example, removal of most CECs by ATS was generally comparable to MTP. Lowest removal of most CECs was observed for STS; however, removal was improved when coupling the STS to a WET. Across the treatment systems examined, the majority of pharmaceuticals observed in on-site and municipal effluent discharges were predicted to potentially present therapeutic hazards to fish.
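
The removal comparisons in this study reduce to percent removal computed from influent and effluent concentrations. The concentrations below are hypothetical; the study's actual values are in the paper:

```python
def percent_removal(c_influent, c_effluent):
    """Percent removal of a contaminant across a treatment system."""
    return 100.0 * (c_influent - c_effluent) / c_influent

# Hypothetical concentrations in ng/L for one CEC passing through two
# of the treatment systems compared in the study.
influent = 500.0
removal_ats = percent_removal(influent, 50.0)   # advanced aerobic system
removal_sts = percent_removal(influent, 300.0)  # septic system
```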

  14. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  15. Rapid Deployment Drilling System for on-site inspections under a Comprehensive Test Ban Preliminary Engineering Design

    Energy Technology Data Exchange (ETDEWEB)

    Maurer, W.C.; Deskins, W.G.; McDonald, W.J.; Cohen, J.H. [Maurer Engineering, Inc., Houston, TX (United States); Heuze, F.E.; Butler, M.W. [Lawrence Livermore National Lab., CA (United States)

    1996-09-01

    While not a new drilling technology, coiled-tubing (CT) drilling continues to undergo rapid development and expansion, with new equipment, tools and procedures developed almost daily. This project was undertaken to: analyze available technological options for a Rapid Deployment Drilling System (RDDS) CT drilling system; recommend specific technologies that best match the requirements of the RDDS; and highlight any areas where adequate technological solutions are not currently available. Postshot drilling is a well-established technique at the Nevada Test Site (NTS). Drilling provides essential data on the results of underground tests, including samples from the shot zone, information on cavity size and chimney dimensions, the effects of the event on surrounding material, and the distribution of radioactivity.

  16. Modular Robotics for Delivering On-Site contamination Sensors and Mapping Systems to Difficult-to-Access Locations

    Energy Technology Data Exchange (ETDEWEB)

    Geisinger, Joseph

    2001-05-21

    Presently, characterization operations are scheduled for thousands of facilities and pieces of equipment throughout DOE sites, each of which requires manual surveying with handheld instruments and manual record keeping. Such work, particularly in difficult-to-access-areas, results in significant amounts of worker exposure, long timelines and additional secondary waste generation. Therefore, a distinct need exists for remote tools that can quickly deploy sensors and automated contamination mapping systems into these areas.

  17. Sandia Laboratories technical capabilities: computation systems

    Energy Technology Data Exchange (ETDEWEB)

    1977-12-01

    This report characterizes the computation systems capabilities at Sandia Laboratories. Selected applications of these capabilities are presented to illustrate the extent to which they can be applied in research and development programs. 9 figures.

  18. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

    1966-07-22

    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  19. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    Full Text Available The model of a multiprocessor computing system based on transputers, which permits the assessment of structural robustness (viability, survivability), is described.

  20. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  1. A Management System for Computer Performance Evaluation.

    Science.gov (United States)

    1981-12-01

    large unused capacity indicates a potential cost-performance improvement (i.e. the potential to perform more within current costs or to reduce costs)...necessary to bring the performance of the computer system in line with operational goals. (Ref. 18: 7) The General Accounting Office estimates that the...tasks in attempting to improve the efficiency and effectiveness of their computer systems. Cost began to play an important role in the life of a

  2. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...... of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda....

  3. Computer support for mechatronic control system design

    NARCIS (Netherlands)

    van Amerongen, J.; Coelingh, H.J.; de Vries, Theodorus J.A.

    2000-01-01

    This paper discusses the demands for proper tools for computer aided control system design of mechatronic systems and identifies a number of tasks in this design process. Real mechatronic design, involving input from specialists from varying disciplines, requires that the system can be represented

  4. Computer Systems for Distributed and Distance Learning.

    Science.gov (United States)

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  5. Information systems and computing technology

    CERN Document Server

    Zhang, Lei

    2013-01-01

    Invited papers: Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei); One shot learning human actions recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai); Band grouping pansharpening for WorldView-2 satellite images (X. Li); Research on GIS based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang). Regular papers: A warning model of systemic financial risks (W. Xu & Q. Wang); Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu); The software reliability analysis based on

  6. Computational approaches for systems metabolomics.

    Science.gov (United States)

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progresses in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.
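
The network-based analyses mentioned in this review typically start from pairwise metabolite correlations thresholded into a graph. A self-contained sketch with toy profiles (the cutoff and data are invented):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Three toy "metabolite" profiles across five samples: m2 roughly
# tracks m1, m3 is unrelated. An edge joins metabolites whose
# absolute correlation exceeds the (arbitrary) cutoff.
profiles = {
    "m1": [1.0, 2.0, 3.0, 4.0, 5.0],
    "m2": [2.1, 4.0, 6.2, 8.1, 10.0],
    "m3": [5.0, 1.0, 4.0, 2.0, 3.0],
}
cutoff = 0.8
edges = {(a, b) for a in profiles for b in profiles
         if a < b and abs(pearson(profiles[a], profiles[b])) > cutoff}
```

With these toy data only the m1-m2 edge survives the threshold; real metabolomics pipelines add multiple-testing control and partial correlations on top of this core step.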

  7. Computational systems biology for aging research.

    Science.gov (United States)

    Mc Auley, Mark T; Mooney, Kathleen M

    2015-01-01

    Computational modelling is a key component of systems biology and integrates with the other techniques discussed thus far in this book by utilizing a myriad of data that are being generated to quantitatively represent and simulate biological systems. This chapter will describe what computational modelling involves; the rationale for using it, and the appropriateness of modelling for investigating the aging process. How a model is assembled and the different theoretical frameworks that can be used to build a model are also discussed. In addition, the chapter will describe several models which demonstrate the effectiveness of each computational approach for investigating the constituents of a healthy aging trajectory. Specifically, a number of models will be showcased which focus on the complex age-related disorders associated with unhealthy aging. To conclude, we discuss the future applications of computational systems modelling to aging research.

  8. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  9. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  10. Telemetry Computer System at Wallops Flight Center

    Science.gov (United States)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.

  11. Honeywell Modular Automation System Computer Software Documentation

    Energy Technology Data Exchange (ETDEWEB)

    CUNNINGHAM, L.T.

    1999-09-27

    This document provides a Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-211 and vertical denitration calciner in HC-230C-2.

  12. Computation and design of autonomous intelligent systems

    Science.gov (United States)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  13. Remote computer monitors corrosion protection system

    Energy Technology Data Exchange (ETDEWEB)

    Kendrick, A.

    Effective corrosion protection with electrochemical methods requires routine monitoring that provides reliable data free of human error. A test installation of a remote computer-controlled monitoring system for electrochemical corrosion protection is described. The unit can handle up to six channel inputs. Each channel comprises three analog signals and one digital signal. The operation of the system is discussed.

  14. Terrace Layout Using a Computer Assisted System

    Science.gov (United States)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  15. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.

  16. Building Low Cost Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Carlos Antunes

    2013-06-01

    Current models of cloud computing are based on massive hardware solutions whose implementation and maintenance are unaffordable to the majority of service providers. The use of jail services is an alternative to current models of cloud computing based on virtualization. Models based on the utilization of jail environments instead of virtualization systems will provide huge gains in the optimization of hardware resources at the computation level and in terms of storage and energy consumption. This paper addresses the practical implementation of jail environments in real scenarios, which reveals the areas where their application will be relevant and will make inevitable the redefinition of the models currently defined for cloud computing. In addition, it will bring new opportunities in the development of support features for jail environments in the majority of operating systems.
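    For concreteness, a jail of the kind the paper advocates can be declared in a FreeBSD `/etc/jail.conf` file roughly as follows; the jail name, path, and address here are hypothetical:

```
# Minimal illustrative jail definition (name, path and address are invented)
web {
    path = "/usr/jail/web";
    host.hostname = "web.example.org";
    ip4.addr = 192.0.2.10;
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

    Because jailed processes share the host kernel rather than booting a full guest operating system, the hardware, storage and energy overheads of virtualization are largely avoided, which is the gain the paper argues for.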

  17. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  18. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  19. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  20. Fault tolerant hypercube computer system architecture

    Science.gov (United States)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communications between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises, a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message connecting paths for connecting the first computing nodes to the first watch dog node independent from the first network, the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first switch watch dog node. There is additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network. The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node
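    The interconnection pattern described above follows directly from hypercube addressing: two nodes are linked exactly when their binary addresses differ in a single bit. A small illustrative sketch (not drawn from the patent itself):

```python
def hypercube_neighbors(node: int, dim: int) -> list[int]:
    """Return the nodes directly linked to `node` in a `dim`-dimensional hypercube.

    Each neighbor is reached by flipping exactly one bit of the node's address.
    """
    return [node ^ (1 << bit) for bit in range(dim)]

# Node 0 in a 3-cube (8 nodes) is wired to nodes 1, 2 and 4.
print(hypercube_neighbors(0, 3))  # → [1, 2, 4]
```

    Routing between any two nodes then takes at most `dim` hops, one per differing address bit, which is why the hypercube topology scales well for the message networks described above.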

  1. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
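    The script-driven approach can be pictured as inserting every sample pulled from Ganglia into a relational table, so that history is never overwritten the way a fixed-size round-robin database overwrites it. The sketch below uses SQLite as a stand-in for MySQL, and the table layout and metric names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL backend
conn.execute("""CREATE TABLE metrics (
    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def record(host, metric, value, ts):
    """What a collection script would do with each sample pulled from Ganglia."""
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)", (host, metric, value, ts))

record("node01", "cpu_load", 0.42, 1700000000)
record("node01", "cpu_load", 0.57, 1700000060)

# Unlike a fixed-size round-robin database, every sample is retained verbatim.
rows = conn.execute(
    "SELECT value FROM metrics WHERE host='node01' ORDER BY ts").fetchall()
print(rows)  # → [(0.42,), (0.57,)]
```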

  2. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model, which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system will show the information obtained by different CI techniques in order to help operators to take decision in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)
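    An estimative method in the above sense compares each measurement against a model prediction and flags residuals before normal alarm limits are reached. A minimal sketch, with invented values and threshold:

```python
def detect_faults(measured, predicted, warn_threshold=0.5):
    """Flag sample indices whose residual |measured - predicted| exceeds the threshold."""
    return [i for i, (m, p) in enumerate(zip(measured, predicted))
            if abs(m - p) > warn_threshold]

# Model of the process (e.g. an expected temperature trend) vs. sensor readings
predicted = [50.0, 50.1, 50.2, 50.3, 50.4]
measured  = [50.1, 50.0, 50.2, 51.2, 50.5]  # sample 3 drifts before any alarm limit

print(detect_faults(measured, predicted))  # → [3]
```

    A pattern recognition method would instead compare each new sample against a database of known normal and faulty signatures; both styles can feed the same operator display.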

  3. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    Within the last five to ten years we have experienced an incredible growth of ubiquitous technologies which has allowed for improvements in several areas, including energy distribution and management, health care services, border surveillance, secure monitoring and management of buildings, localisation services and many others. These technologies can be classified under the name of ubiquitous systems. The term Ubiquitous System dates back to 1991, when Mark Weiser at Xerox PARC Lab first referred to it in writing. He envisioned a future where computing technologies would have melted in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory...

  4. A New System Architecture for Pervasive Computing

    CERN Document Server

    Ismail, Anis; Ismail, Ziad

    2011-01-01

    We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose an architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. Key features of our application are a type-aware data transport that is capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobiles, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web and the simple standard HTTP protocol that it is based on facilitate this kind of ubiquitous access. This can be implemented on a variety of devices - PDAs, laptops, information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system's architecture and its implementation. Through experimental...

  5. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing

    2015-01-01

    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  6. Reliable computer systems design and evaluation

    CERN Document Server

    Siewiorek, Daniel

    2014-01-01

    Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  7. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility, and that allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.
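    The selection steps described above amount to a weighted scoring exercise; the criteria weights and candidate scores below are invented purely for illustration:

```python
# Criteria weights echoing the article's priorities (illustrative, not prescriptive)
weights = {"does_the_job": 0.4, "reliability": 0.3,
           "compatibility": 0.2, "vendor_service": 0.1}

# Hypothetical candidate systems scored 0-10 on each criterion
candidates = {
    "System A": {"does_the_job": 9, "reliability": 6, "compatibility": 8, "vendor_service": 7},
    "System B": {"does_the_job": 7, "reliability": 9, "compatibility": 9, "vendor_service": 8},
}

def weighted_score(scores):
    """Combine a candidate's criterion scores using the agreed weights."""
    return sum(weights[c] * scores[c] for c in weights)

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(best, round(weighted_score(candidates[best]), 2))  # → System B 8.1
```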

  8. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics”, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  9. Development of an on-site rapid real-time polymerase chain reaction system and the characterization of suitable DNA polymerases for TaqMan probe technology.

    Science.gov (United States)

    Furutani, Shunsuke; Naruishi, Nahoko; Hagihara, Yoshihisa; Nagai, Hidenori

    2016-08-01

    On-site quantitative analyses of microorganisms (including viruses) by the polymerase chain reaction (PCR) system are significantly influencing medical and biological research. We have developed a remarkably rapid and portable real-time PCR system that is based on microfluidic approaches. Real-time PCR using TaqMan probes consists of a complex reaction. Therefore, in a rapid real-time PCR, the optimum DNA polymerase must be estimated by using actual real-time PCR conditions. In this study, we compared the performance of three DNA polymerases in actual PCR conditions using our rapid real-time PCR system. Although KAPA2G Fast HS DNA Polymerase has the highest enzymatic activity among them, SpeedSTAR HS DNA Polymerase exhibited better performance to rapidly increase the fluorescence signal in an actual real-time PCR using TaqMan probes. Furthermore, we achieved rapid detection of Escherichia coli in 7 min by using SpeedSTAR HS DNA Polymerase with the same sensitivity as that of a conventional thermal cycler.

  10. NIF Integrated Computer Controls System Description

    Energy Technology Data Exchange (ETDEWEB)

    VanArsdall, P.

    1998-01-26

    This System Description introduces the NIF Integrated Computer Control System (ICCS). The architecture is sufficiently abstract to allow the construction of many similar applications from a common framework. As discussed below, over twenty software applications derived from the framework comprise the NIF control system. This document lays the essential foundation for understanding the ICCS architecture. The NIF design effort is motivated by the magnitude of the task. Figure 1 shows a cut-away rendition of the coliseum-sized facility. The NIF requires integration of about 40,000 atypical control points, must be highly automated and robust, and will operate continuously around the clock. The control system coordinates several experimental cycles concurrently, each at different stages of completion. Furthermore, facilities such as the NIF represent major capital investments that will be operated, maintained, and upgraded for decades. The computers, control subsystems, and functionality must be relatively easy to extend or replace periodically with newer technology.

  12. Some Unexpected Results Using Computer Algebra Systems.

    Science.gov (United States)

    Alonso, Felix; Garcia, Alfonsa; Garcia, Francisco; Hoya, Sara; Rodriguez, Gerardo; de la Villa, Agustin

    2001-01-01

    Shows how teachers can often use unexpected outputs from Computer Algebra Systems (CAS) to reinforce concepts and to show students the importance of thinking about how they use the software and reflecting on their results. Presents different examples where DERIVE, MAPLE, or Mathematica does not work as expected and suggests how to use them as a…

  13. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  14. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissem

  15. Computer Graphics for System Effectiveness Analysis.

    Science.gov (United States)

    1986-05-01

    Chapra, Steven C., and Raymond P. Canale (1985), Numerical Methods for Engineers with Personal Computer Applications, New York. Chapter VII summarizes the results and gives recommendations for future research.

  16. Characterizing Video Coding Computing in Conference Systems

    NARCIS (Netherlands)

    Tuquerres, G.

    2000-01-01

    In this paper, a number of coding operations is provided for computing continuous data streams, in particular, video streams. A coding capability of the operations is expressed by a pyramidal structure in which coding processes and requirements of a distributed information system are represented. Th

  17. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  18. Computer Algebra Systems, Pedagogy, and Epistemology

    Science.gov (United States)

    Bosse, Michael J.; Nandakumar, N. R.

    2004-01-01

    The advent of powerful Computer Algebra Systems (CAS) continues to dramatically affect curricula, pedagogy, and epistemology in secondary and college algebra classrooms. However, epistemological and pedagogical research regarding the role and effectiveness of CAS in the learning of algebra lags behind. This paper investigates concerns regarding…

  19. Computer system SANC: its development and applications

    Science.gov (United States)

    Arbuzov, A.; Bardin, D.; Bondarenko, S.; Christova, P.; Kalinovskaya, L.; Sadykov, R.; Sapronov, A.; Riemann, T.

    2016-10-01

    The SANC system is used for systematic calculations of various processes within the Standard Model in the one-loop approximation. QED, electroweak, and QCD corrections are computed to a number of processes being of interest for modern and future high-energy experiments. Several applications for the LHC physics program are presented. Development of the system and the general problems and perspectives for future improvement of the theoretical precision are discussed.

  20. Personal healthcare system using cloud computing.

    Science.gov (United States)

    Takeuchi, Hiroshi; Mayuzumi, Yuuki; Kodama, Naoki; Sato, Keiichi

    2013-01-01

    A personal healthcare system used with cloud computing has been developed. It enables a daily time-series of personal health and lifestyle data to be stored in the cloud through mobile devices. The cloud automatically extracts personally useful information, such as rules and patterns concerning lifestyle and health conditions embedded in the personal big data, by using a data mining technology. The system provides three editions (Diet, Lite, and Pro) corresponding to users' needs.

  1. The CMS Computing System: Successes and Challenges

    CERN Document Server

    Bloom, Kenneth

    2009-01-01

    Each LHC experiment will produce datasets with sizes of order one petabyte per year. All of this data must be stored, processed, transferred, simulated and analyzed, which requires a computing system of a larger scale than ever mounted for any particle physics experiment, and possibly for any enterprise in the world. I discuss how CMS has chosen to address these challenges, focusing on recent tests of the system that demonstrate the experiment's readiness for producing physics results with the first LHC data.

  2. Integrative Genomics and Computational Systems Medicine

    Energy Technology Data Exchange (ETDEWEB)

    McDermott, Jason E.; Huang, Yufei; Zhang, Bing; Xu, Hua; Zhao, Zhongming

    2014-01-01

    The exponential growth in generation of large amounts of genomic data from biological samples has driven the emerging field of systems medicine. This field is promising because it improves our understanding of disease processes at the systems level. However, the field is still in its young stage. There exists a great need for novel computational methods and approaches to effectively utilize and integrate various omics data.

  3. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
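    A classic first model of this kind is the M/M/1 queue, where an arrival rate and a service rate determine utilization and mean response time, tied together by Little's law. A sketch with illustrative rates:

```python
def mm1(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics; requires arrival_rate < service_rate for stability."""
    rho = arrival_rate / service_rate          # server utilization
    n = rho / (1 - rho)                        # mean number of jobs in the system
    r = 1 / (service_rate - arrival_rate)      # mean response time
    return rho, n, r

# 8 jobs/s arriving at a server that completes 10 jobs/s
rho, n, r = mm1(8.0, 10.0)
print(round(rho, 3), round(n, 3), round(r, 3))  # → 0.8 4.0 0.5
```

    Note how response time blows up as utilization approaches 1, a qualitative insight such simple models deliver before any simulation is run.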

  4. Adaptive Fuzzy Systems in Computational Intelligence

    Science.gov (United States)

    Berenji, Hamid R.

    1996-01-01

    In recent years, the interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly, and a number of their applications have been developed in government and industry. In the future, an essential element in these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system which has been applied in several control domains such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.

  5. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  6. Landauer Bound for Analog Computing Systems

    CERN Document Server

    Diamantini, M Cristina; Trugenberger, Carlo A

    2016-01-01

    By establishing a relation between information erasure and continuous phase transitions we generalise the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.

  7. Landauer bound for analog computing systems

    Science.gov (United States)

    Diamantini, M. Cristina; Gammaitoni, Luca; Trugenberger, Carlo A.

    2016-07-01

    By establishing a relation between information erasure and continuous phase transitions we generalize the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence, every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.
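In symbols, the generalized bound described above can be written as follows (the notation is chosen here for illustration and is not taken verbatim from the paper):

```latex
% Entropy production per degree of freedom when erasing (resetting) an
% analog variable:
%   \Gamma = configurational volume explored by the variable
%   \gamma = its minimal quantum (smallest distinguishable cell)
\Delta S \;=\; k_B \,\ln\!\left(\frac{\Gamma}{\gamma}\right)
```

Letting the minimal quantum shrink to zero (infinite precision) would make ΔS, and hence the erasure energy cost of order k_B T ΔS, diverge, which is why the abstract concludes that every computation must be carried out with a finite number of bits.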

  8. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  9. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, and then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
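The USB HID keyboard commands mentioned above use a fixed 8-byte boot-protocol report (one modifier byte, one reserved byte, up to six keycodes). The sketch below builds such a report; the gadget device path and the idea of replaying reports through a Linux USB gadget are assumptions for illustration, not details taken from the paper.

```python
# Boot-protocol keyboard report layout: [modifiers, reserved, key1..key6]
KEY_A = 0x04           # HID usage ID for the letter 'a'
MOD_LEFT_SHIFT = 0x02  # modifier bitmask for left Shift

def build_keyboard_report(modifiers: int = 0, keycodes: tuple = ()) -> bytes:
    """Pack up to six simultaneously pressed keys into an 8-byte HID report."""
    if len(keycodes) > 6:
        raise ValueError("boot-protocol reports carry at most 6 keycodes")
    keys = list(keycodes) + [0] * (6 - len(keycodes))
    return bytes([modifiers & 0xFF, 0x00] + keys)

# 'A' = left Shift held down together with the 'a' key.
report = build_keyboard_report(MOD_LEFT_SHIFT, (KEY_A,))

# On an embedded Linux board configured as a USB HID gadget, the report
# could be replayed to the target machine like this (path is an assumption):
# with open("/dev/hidg0", "wb") as gadget:
#     gadget.write(report)                   # key down
#     gadget.write(build_keyboard_report())  # all-zeros report = key up
```

Because the target machine sees only standard HID reports, no driver or software installation is needed on it, which matches the paper's claim of OS-independent operation.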

  10. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...

  11. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Science.gov (United States)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  12. Nature-inspired computing for control systems

    CERN Document Server

    2016-01-01

    The book presents recent advances in nature-inspired computing, giving special emphasis to control systems applications. It reviews different techniques used for simulating physical, chemical, biological or social phenomena for the purpose of designing robust, predictive and adaptive control strategies. The book is a collection of several contributions, covering either more general approaches in control systems, or methodologies for control tuning and adaptive controllers, as well as exciting applications of nature-inspired techniques in robotics. On one side, the book is expected to motivate readers with a background in conventional control systems to try out these powerful techniques inspired by nature. On the other side, the book provides advanced readers with a deeper understanding of the field and a broad spectrum of different methods and techniques. All in all, the book is an outstanding, practice-oriented reference guide to nature-inspired computing addressing graduate students, researchers and practitioners.

  13. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J

    1977-01-01

    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based. The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queueing networks that appear as a type of key model. These chapters also discuss congestion problems in...

  14. Computer-aided Analysis of Physiological Systems

    Directory of Open Access Journals (Sweden)

    Balázs Benyó

    2007-12-01

    Full Text Available This paper presents the recent biomedical engineering research activity of the Medical Informatics Laboratory at the Budapest University of Technology and Economics. The research projects are carried out in the following fields: computer-aided identification of physiological systems; diabetic management and blood glucose control; remote patient monitoring and diagnostic systems; an automated system for analyzing cardiac ultrasound images; single-channel hybrid ECG segmentation; event recognition and state classification to detect brain ischemia by means of EEG signal processing; detection of breathing disorders like apnea and hypopnea; molecular biology studies with DNA chips; and evaluation of the cry of normal-hearing and hard-of-hearing infants.

  15. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure

    2014-01-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. Such information can be used to establish hypotheses on links between the chemical and human diseases, and can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method...

  16. Low Power Dynamic Scheduling for Computing Systems

    CERN Document Server

    Neely, Michael J

    2011-01-01

    This paper considers energy-aware control for a computing system with two states: "active" and "idle." In the active state, the controller chooses to perform a single task using one of multiple task processing modes. The controller then saves energy by choosing an amount of time for the system to be idle. These decisions affect processing time, energy expenditure, and an abstract attribute vector that can be used to model other criteria of interest (such as processing quality or distortion). The goal is to optimize time average system performance. Applications of this model include a smart phone that makes energy-efficient computation and transmission decisions, a computer that processes tasks subject to rate, quality, and power constraints, and a smart grid energy manager that allocates resources in reaction to a time varying energy price. The solution methodology of this paper uses the theory of optimization for renewal systems developed in our previous work. This paper is written in tutorial form and devel...

  17. Applicability of computational systems biology in toxicology.

    Science.gov (United States)

    Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie

    2014-07-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research.
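As a toy illustration of the network approach described above, the sketch below searches a miniature interaction graph for paths from a chemical's experimentally determined protein targets to disease-associated proteins. All node names and edges are invented for the example; real analyses draw on curated protein-protein interaction and protein-disease association databases.

```python
from collections import deque

# Tiny made-up protein interaction network (adjacency list).
PPI = {
    "TARGET1": ["P1"], "P1": ["TARGET1", "P2", "P4"],
    "P2": ["P1", "DISEASE_GENE_A"], "P4": ["P1"],
    "DISEASE_GENE_A": ["P2"],
    "TARGET2": ["P3"], "P3": ["TARGET2", "DISEASE_GENE_B"],
    "DISEASE_GENE_B": ["P3"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search; returns the node list or None if unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Hypothesis generation: link each chemical target to reachable disease genes.
for target in ["TARGET1", "TARGET2"]:
    for disease_gene in ["DISEASE_GENE_A", "DISEASE_GENE_B"]:
        path = shortest_path(PPI, target, disease_gene)
        if path:
            print(f"{target} -> {disease_gene}: {' -> '.join(path)}")
```

Each printed path is a candidate mechanistic hypothesis of the kind the abstract describes, to be tested in subsequent targeted experiments.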

  18. Research and Design of a Power Security Risk On-Site Monitoring System

    Institute of Scientific and Technical Information of China (English)

    李琳; 杨涛; 栗庆吉

    2013-01-01

    Given the mobility and flexibility of mobile terminals, this paper proposes an Android-based on-site monitoring system for power security risks. With this software, users can view and operate work instructions in real time during a job, and monitor and record information from the job site, achieving real-time assessment, warning, and control of security risks throughout standardized jobs and comprehensively improving the level of standardized-operation risk control.

  19. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  20. Cloud Computing Security in Business Information Systems

    CERN Document Server

    Ristov, Sasko; Kostoska, Magdalena

    2012-01-01

    Cloud computing providers' and customers' services are not only exposed to existing security risks but, due to multi-tenancy, outsourcing of applications and data, and virtualization, are exposed to emergent risks as well. Therefore, both cloud providers and customers must establish an information security system and mutual trustworthiness, extending to end users. In this paper we analyze the main international and industrial standards targeting information security and their conformity with cloud computing security challenges. We find that almost all major cloud service providers (CSPs) are ISO 27001:2005 certified, at minimum. As a result, we propose an extension to the ISO 27001:2005 standard with a new control objective about virtualization, so that the standard remains generic, regardless of a company's type, size and nature, and thus becomes applicable to cloud systems, where virtualization is the baseline. We also define a quantitative metric and evaluate the importance factor of ISO 27001:2005 control objecti...

  1. Thermoelectric property measurements with computer controlled systems

    Science.gov (United States)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.

  2. Checkpoint triggering in a computer system

    Science.gov (United States)

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
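The decision logic described in the claim can be sketched as a simple loop; the monitored metric ("work since last checkpoint"), the fixed threshold, and all names below are illustrative choices of mine, not details from the patent.

```python
checkpoints = []

def create_checkpoint(state):
    """Stand-in for persisting task state so execution can restart from it."""
    checkpoints.append(dict(state))

def run_task(num_steps, read_interval=10, work_per_checkpoint=25):
    """Toy task loop: read the monitor only every `read_interval` steps
    (reading is assumed to have a cost) and create a checkpoint when the
    metric has crossed the `work_per_checkpoint` threshold.
    """
    state = {"step": 0, "work_since_checkpoint": 0}
    for step in range(1, num_steps + 1):
        state["step"] = step
        state["work_since_checkpoint"] += 1    # one unit of work per step
        if step % read_interval == 0:          # time to read the monitor
            metric = state["work_since_checkpoint"]
            if metric >= work_per_checkpoint:  # metric crossed the threshold
                create_checkpoint(state)
                state["work_since_checkpoint"] = 0
    return state

run_task(100)
print(f"{len(checkpoints)} checkpoints created")
```

With these parameters the monitor is read at steps 10, 20, ..., and checkpoints fire at steps 30, 60, and 90, once enough work has accumulated since the previous one.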

  3. A NEW SYSTEM ARCHITECTURE FOR PERVASIVE COMPUTING

    Directory of Open Access Journals (Sweden)

    Anis ISMAIL

    2011-08-01

    Full Text Available We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose an architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. Key features of our application are a type-aware data transport that is capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobiles, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web, and the simple standard HTTP protocol that it is based on, facilitate this kind of ubiquitous access, which can be implemented on a variety of devices: PDAs, laptops, and information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system's architecture and its implementation. Through experimental study, we show reasonable performance and adaptation of our system's implementation for mobile devices.

  4. Music Genre Classification Systems - A Computational Approach

    OpenAIRE

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  5. Research on Dynamic Distributed Computing System for Small and Medium-Sized Computer Clusters

    Institute of Scientific and Technical Information of China (English)

    Le Kang; Jianliang Xu; Feng Liu

    2012-01-01

    Distributed computing is a discipline in which a complex task requiring a large amount of computation is divided into small pieces calculated by more than one computer, with the final result assembled from each computer's output. This paper considers a distributed computing system running on small and medium-sized computer clusters to address the low efficiency of a single computer and improve the efficiency of large-scale computing. Experiments show that the system effectively improves efficiency and is a viable approach.
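A minimal sketch of the divide, compute in parallel, and combine pattern the abstract describes, using Python's multiprocessing pool as a stand-in for a small cluster of machines (the sum-of-squares workload is invented for illustration):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one node: sum the squares of its slice of the input."""
    return sum(x * x for x in chunk)

def split(data, num_chunks):
    """Divide the task into roughly equal pieces, one per worker."""
    size = (len(data) + num_chunks - 1) // num_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1, 1001))
    chunks = split(data, num_chunks=4)
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)  # each "computer" works on a piece
    total = sum(partials)                         # combine into the final result
    print(total)  # same value a single machine would compute
```

In a real cluster the `pool.map` step would be replaced by dispatching chunks to worker machines over the network, but the divide/compute/combine structure is the same.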

  6. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by - minimum normalized signal-to-noise ratio (SNRN), and - maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of their wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)

  7. TMX-U computer system in evolution

    Science.gov (United States)

    Casper, T. A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-08-01

    Over the past three years, the total TMX-U diagnostic data base has grown to exceed 10 Mbytes from over 1300 channels; roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 min per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational with processed format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our data base with a minimum of impact on the experimental repetition rate.

  8. Physical Optics Based Computational Imaging Systems

    Science.gov (United States)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project, where the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then, form high-resolution wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The Extended Depth-of-Focus System goals were to characterize the angular and depth dependence of the PSF of a focal swept imager in order to increase the acceptably focused imaged scene depth. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. 

  9. Rapid deployment drilling system for on-site inspections under a comprehensive test ban treaty vol. 1: description, acquisition, deployment, and operation vol. 2: appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heuze, F; Cohen, J; Pittard, G; Deskius, G; Vorkinn, P; Rock, D

    1999-11-01

    The Comprehensive Test Ban Treaty (CTBT) has been signed by many countries, including the US. The US Senate will start discussions of CTBT ratification in the near future. The Treaty aims to prevent any nuclear explosion from being conducted. A verification system is being implemented. It includes the possibility of On-Site Inspections (OSI) in a country where a suspicious seismic signal has been identified, which could come from a clandestine nuclear test. As part of an OSI, the use of drilling is allowed by the CTBT so as to obtain irrefutable proof of a Treaty violation. Such proof could be in the form of diagnostics of very high gamma radiation levels and high temperatures underground, which could not be explained by a natural source. A typical situation is shown in Figure 1, where the OSI team must find a nuclear cavity underground when only an approximate location is inferred. This calls for the ability to do directional drilling. Because there is no need for large borings and to minimize the cost and size of the equipment, slim-hole drilling is adequate. On that basis, an initial study by Lawrence Livermore National Laboratory [1] concluded that coiled-tubing (C-T) was the most attractive option for OSI drilling (Figure 2). A preliminary design of a C-T Rapid Deployment Drilling System (RDDS) was then performed by Maurer Engineering of Houston, TX [2]. Although a drilling mud system is also included in the RDDS definition, the preferred mode of operation of the RDDS would be drilling with air and foam. This minimizes water requirements in cases when water may be scarce at the OSI site. It makes the required equipment smaller than when a mud system is included. And it may increase the drilling rates by eliminating the "chip hold-down" effect of a mud column. Following this preliminary design study, it was determined that the preferred bottom-hole assembly for such a system would be the Viper system of Schlumberger Anadrill, with one...

  10. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business process level. On each, optimizations can be achieved and cost-cutting p...

  11. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid

    2011-01-01

    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  12. Prestandardisation Activities for Computer Based Safety Systems

    DEFF Research Database (Denmark)

    Taylor, J. R.; Bologna, S.; Ehrenberger, W.

    1981-01-01

    Questions of technical safety are becoming more and more important. Due to the higher complexity of their functions, computer-based safety systems pose special problems. Researchers, producers, licensing personnel and customers have met on a European basis to exchange knowledge and formulate positions. The Commission of the European Community supports the work. Major topics comprise hardware configuration and self-supervision, software design, verification and testing, documentation, system specification and concurrent processing. Preliminary results have been used for the draft of an IEC standard and for some...

  13. Tools for Embedded Computing Systems Software

    Science.gov (United States)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  14. Classification of (2+1)-dimensional topological order and symmetry-protected topological order for bosonic and fermionic systems with on-site symmetries

    Science.gov (United States)

    Lan, Tian; Kong, Liang; Wen, Xiao-Gang

    2017-06-01

    In 2+1-dimensional space-time, gapped quantum states are always gapped quantum liquids (GQL) which include both topologically ordered states (with long range entanglement) and symmetry protected topological (SPT) states (with short range entanglement). In this paper, we propose a classification of 2+1D GQLs for both bosonic and fermionic systems: 2+1D bosonic/fermionic GQLs with finite on-site symmetry are classified by nondegenerate unitary braided fusion categories over a symmetric fusion category (SFC) E , abbreviated as UMTC/E, together with their modular extensions and total chiral central charges. In our classification, SFC E describes the symmetry, which is Rep(G ) for bosonic symmetry G , or sRep(Gf) for fermionic symmetry Gf. As a special case of the above result, we find that the modular extensions of Rep(G ) classify the 2+1D bosonic SPT states of symmetry G , while the c =0 modular extensions of sRep(Gf) classify the 2+1D fermionic SPT states of symmetry Gf. Many fermionic SPT states are studied based on the constructions from free-fermion models. But free-fermion constructions cannot produce all fermionic SPT states. Our classification does not have such a drawback. We show that, for interacting 2+1D fermionic systems, there are exactly 16 superconducting phases with no symmetry and no fractional excitations (up to E8 bosonic quantum Hall states). Also, there are exactly 8 Z2×Z2f -SPT phases, 2 Z8f-SPT phases, and so on. Besides, we show that two topological orders with identical bulk excitations and central charge always differ by the stacking of the SPT states of the same symmetry.

  15. Computer-Assisted Photo Interpretation System

    Science.gov (United States)

    Niedzwiadek, Harry A.

    1981-11-01

    A computer-assisted photo interpretation research (CAPIR) system has been developed at the U.S. Army Engineer Topographic Laboratories (ETL), Fort Belvoir, Virginia. The system is based around the APPS-IV analytical plotter, a photogrammetric restitution device that was designed and developed by Autometric specifically for interactive, computerized data collection activities involving high-resolution, stereo aerial photographs. The APPS-IV is ideally suited for feature analysis and feature extraction, the primary functions of a photo interpreter. The APPS-IV is interfaced with a minicomputer and a geographic information system called AUTOGIS. The AUTOGIS software provides the tools required to collect or update digital data using an APPS-IV, construct and maintain a geographic data base, and analyze or display the contents of the data base. Although the CAPIR system is fully functional at this time, considerable enhancements are planned for the future.

  16. Computational systems biology in cancer brain metastasis.

    Science.gov (United States)

    Peng, Huiming; Tan, Hua; Zhao, Weiling; Jin, Guangxu; Sharma, Sambad; Xing, Fei; Watabe, Kounosuke; Zhou, Xiaobo

    2016-01-01

    Brain metastases occur in 20-40% of patients with advanced malignancies. A better understanding of the mechanism of this disease will help us to identify novel therapeutic strategies. In this review, we will discuss the systems biology approaches used in this area, including bioinformatics and mathematical modeling. Bioinformatics has been used for identifying the molecular mechanisms driving brain metastasis and mathematical modeling methods for analyzing dynamics of a system and predicting optimal therapeutic strategies. We will illustrate the strategies, procedures, and computational techniques used for studying systems biology in cancer brain metastases. We will give examples on how to use a systems biology approach to analyze a complex disease. Some of the approaches used to identify relevant networks, pathways, and possibly biomarkers in metastasis will be reviewed into details. Finally, certain challenges and possible future directions in this area will also be discussed.

  17. A computer-aided continuous assessment system

    Directory of Open Access Journals (Sweden)

    B. C.H. Turton

    1996-12-01

    Full Text Available Universities within the United Kingdom have had to cope with a massive expansion in undergraduate student numbers over the last five years (Committee of Scottish University Principals, 1993; CVCP Briefing Note, 1994). In addition, there has been a move towards modularization and a closer monitoring of a student's progress throughout the year. Since the price/performance ratio of computer systems has continued to improve, Computer-Assisted Learning (CAL) has become an attractive option (Fry, 1990; Benford et al., 1994; Laurillard et al., 1994). To this end, the Universities Funding Council (UFC) has funded the Teaching and Learning Technology Programme (TLTP). However, universities also have a duty to assess as well as to teach. This paper describes a Computer-Aided Assessment (CAA) system capable of assisting in grading students and providing feedback. In this particular case, a continuously assessed course (Low-Level Languages) of over 100 students is considered. Typically, three man-days are required to mark one assessed piece of coursework from the students in this class. Any feedback on how the questions were dealt with by the students is of necessity brief. Most of the feedback is provided in a tutorial session that covers the pitfalls encountered by the majority of the students.

  18. OPTIMIZATION OF PARAMETERS OF ELEMENTS COMPUTER SYSTEM

    Directory of Open Access Journals (Sweden)

    Nesterov G. D.

    2016-03-01

    The work is devoted to the topical issue of increasing computer performance and is experimental in character, so a number of tests and an analysis of their results are presented. The article first gives the basic characteristics of the computer's modules in the regular operating mode, then describes the technique used to adjust their parameters during the experiment. Special attention is paid to maintaining the required thermal regime in order to avoid an undesirable overheating of the central processor, and the operability of the system under increased energy consumption is checked. The most critical step is tuning the central processor; as a result of the test, its optimum voltage, frequency, and memory read latencies are found. The stability of the RAM's characteristics, in particular the state of its buses, is analyzed over the course of the experiment. As the completed tests remained within the standard range of the modules' characteristics, and therefore did not use the margin of safety built into the computer or the full capacity of the system, further experiments were performed at extreme overclocking under air cooling. The results obtained are also presented in the article.

  19. Microchannel Reactor System Design & Demonstration For On-Site H2O2 Production by Controlled H2/O2 Reaction

    Energy Technology Data Exchange (ETDEWEB)

    Adeniyi Lawal

    2008-12-09

    We successfully demonstrated an innovative hydrogen peroxide (H2O2) production concept which involved the development of flame- and explosion-resistant microchannel reactor system for energy efficient, cost-saving, on-site H2O2 production. We designed, fabricated, evaluated, and optimized a laboratory-scale microchannel reactor system for controlled direct combination of H2 and O2 in all proportions including explosive regime, at a low pressure and a low temperature to produce about 1.5 wt% H2O2 as proposed. In the second phase of the program, as a prelude to full-scale commercialization, we demonstrated our H2O2 production approach by ‘numbering up’ the channels in a multi-channel microreactor-based pilot plant to produce 1 kg/h of H2O2 at 1.5 wt% as demanded by end-users of the developed technology. To our knowledge, we are the first group to accomplish this significant milestone. We identified the reaction pathways that comprise the process, and implemented rigorous mechanistic kinetic studies to obtain the kinetics of the three main dominant reactions. We are not aware of any such comprehensive kinetic studies for the direct combination process, either in a microreactor or any other reactor system. We showed that the mass transfer parameter in our microreactor system is several orders of magnitude higher than what obtains in the macroreactor, attesting to the superior performance of microreactor. A one-dimensional reactor model incorporating the kinetics information enabled us to clarify certain important aspects of the chemistry of the direct combination process as detailed in section 5 of this report. Also, through mathematical modeling and simulation using sophisticated and robust commercial software packages, we were able to elucidate the hydrodynamics of the complex multiphase flows that take place in the microchannel. In conjunction with the kinetics information, we were able to validate the experimental data. If fully implemented across the whole

  20. Visual computing model for immune system and medical system.

    Science.gov (United States)

    Gong, Tao; Cao, Xinxue; Xiong, Qin

    2015-01-01

    The natural immune system is an intelligent, self-organizing, adaptive system comprising a variety of immune cells with different types of immune mechanisms. The mutual cooperation between the immune cells shows the intelligence of this system, and modeling it has important significance in medical science and engineering. In order to build a model that is more comprehensible through visualization than a traditional mathematical model, this paper proposes a visual computing model of the immune system and also uses it to design a medical system. Visual simulations of the immune system were made to test the visual effect. The experimental results of the simulations show that the visual modeling approach can provide a more effective way of analyzing the immune system than traditional mathematical equations alone.

  1. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussion on research involving the use of data and digital images as an approach for the analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of enhancing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific visualization and imaging systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, and Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting - ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  2. Epilepsy analytic system with cloud computing.

    Science.gov (United States)

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades, and analyzing such big data to provide decision support for physicians is an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture for analyzing epilepsy. Several modern analytic functions, namely the wavelet transform, a genetic algorithm (GA), and a support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, training is accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.
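    As a reader's sketch only (not the authors' pipeline: the nearest-centroid classifier below stands in for the GA-tuned SVM, and the signals and labels are synthetic), the wavelet-feature stage of such an EEG classifier can be illustrated in pure Python using a Haar decomposition:

```python
import math

def haar_features(signal, levels=3):
    """Per-level detail-band log-energies from a 1-D Haar wavelet decomposition."""
    x = list(signal)
    feats = []
    for _ in range(levels):
        approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
        detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
        feats.append(math.log(sum(d * d for d in detail) + 1e-12))
        x = approx  # recurse on the low-pass (approximation) band
    return feats

def nearest_centroid(train, labels, sample):
    """Classify a feature vector by distance to per-class mean feature vectors."""
    groups = {}
    for f, y in zip(train, labels):
        groups.setdefault(y, []).append(f)
    centroids = {y: [sum(c) / len(c) for c in zip(*v)] for y, v in groups.items()}
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], sample))
```

    A real system would feed such wavelet features into the SVM stage; the point here is only how time-domain signals become compact frequency-band features.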

  3. Development of a gas-cylinder-free plasma desorption/ionization system for on-site detection of chemical warfare agents.

    Science.gov (United States)

    Iwai, Takahiro; Kakegawa, Ken; Aida, Mari; Nagashima, Hisayuki; Nagoya, Tomoki; Kanamori-Kataoka, Mieko; Miyahara, Hidekazu; Seto, Yasuo; Okino, Akitoshi

    2015-06-02

    A gas-cylinder-free plasma desorption/ionization system was developed to realize a mobile on-site analytical device for detection of chemical warfare agents (CWAs). In this system, the plasma source was directly connected to the inlet of a mass spectrometer. The plasma can be generated with ambient air, which is drawn into the discharge region by negative pressure in the mass spectrometer. High-power-density pulsed plasma of 100 kW could be generated by using a microhollow cathode and a laboratory-built high-intensity pulsed power supply (pulse width: 10-20 μs; repetition frequency: 50 Hz). CWAs were desorbed and protonated in the enclosed space adjacent to the plasma source. Protonated sample molecules were introduced into the mass spectrometer by airflow through the discharge region. To evaluate the analytical performance of this device, helium and air plasmas were directly irradiated onto CWAs in the gas-cylinder-free plasma desorption/ionization system and the protonated molecules were analyzed by using an ion-trap mass spectrometer. A blister agent (nitrogen mustard 3) and nerve gases [cyclohexylsarin (GF), tabun (GA), and O-ethyl S-2-N,N-diisopropylaminoethyl methylphosphonothiolate (VX)] in solution in n-hexane were applied to a Teflon rod and used as test samples after solvent evaporation. As a result, protonated molecules of the CWAs were successfully observed as characteristic ion peaks at m/z 204, 181, 163, and 268, respectively. In air plasma, the limits of detection were estimated to be 22, 20, 4.8, and 1.0 pmol, respectively, which were lower than those obtained with helium plasma. To achieve quantitative analysis, calibration curves were made by using the CWA simulant dipinacolyl methylphosphonate as an internal standard; straight correlation lines (R² = 0.9998) of the peak intensity ratios (target per internal standard) were obtained. Remarkably, GA and GF gave protonated dimer ions, and the ratios of the protonated dimer ions to the protonated
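    The internal-standard calibration step can be sketched generically (the data points below are hypothetical, not the paper's measurements): an ordinary least-squares fit of peak-intensity ratio against amount gives the slope, intercept, and R² of the calibration line.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, r_squared)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    # coefficient of determination R^2 = 1 - SS_res / SS_tot
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

    In an internal-standard calibration, `xs` would be known analyte amounts and `ys` the measured target-to-internal-standard peak-intensity ratios; an unknown is then quantified by inverting the fitted line.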

  4. 10 CFR 35.457 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by...

  5. Knowledge and intelligent computing system in medicine.

    Science.gov (United States)

    Pandey, Babita; Mishra, R B

    2009-03-01

    Knowledge-based systems (KBS) and intelligent computing systems have been used in medical planning, diagnosis, and treatment. KBS consists of rule-based reasoning (RBR), case-based reasoning (CBR), and model-based reasoning (MBR), whereas intelligent computing methods (ICM) encompass the genetic algorithm (GA), artificial neural network (ANN), fuzzy logic (FL), and others. The combinations of methods within KBS are CBR-RBR, CBR-MBR, and RBR-CBR-MBR; the combinations within ICM are ANN-GA, fuzzy-ANN, fuzzy-GA, and fuzzy-ANN-GA; and the combinations across KBS and ICM are RBR-ANN, CBR-ANN, RBR-CBR-ANN, fuzzy-RBR, fuzzy-CBR, and fuzzy-CBR-ANN. In this paper, we have made a study of the different singular and combined methods (185 in number) applied to the medical domain from the mid-1970s to 2008. The study is presented in tabular form, showing the methods and their salient features, processes, and application areas in the medical domain (diagnosis, treatment, and planning). It is observed that most of the methods are used in medical diagnosis, very few are used for planning, and a moderate number in treatment. The study and its presentation in this context should be helpful for novice researchers in the area of medical expert systems.

  6. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  7. Final Report on the Automated Computer Science Education System.

    Science.gov (United States)

    Danielson, R. L.; And Others

    At the University of Illinois at Urbana, a computer based curriculum called Automated Computer Science Education System (ACSES) has been developed to supplement instruction in introductory computer science courses or to assist individuals interested in acquiring a foundation in computer science through independent study. The system, which uses…

  8. Neural circuits as computational dynamical systems.

    Science.gov (United States)

    Sussillo, David

    2014-04-01

    Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to aid in this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.
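    A minimal sketch of an RNN viewed as a discrete-time dynamical system (random Gaussian weights, tanh rates; the gain `g`, network size, and step count are illustrative, not taken from any particular study):

```python
import math
import random

def rnn_step(x, W, u, g=1.0):
    """One step of the rate dynamics x_{t+1} = tanh(g * W x + u)."""
    n = len(x)
    return [math.tanh(g * sum(W[i][j] * x[j] for j in range(n)) + u[i])
            for i in range(n)]

def simulate(n=10, steps=200, g=0.8, seed=0):
    """Simulate a randomly connected rate network from a random initial state."""
    rng = random.Random(seed)
    W = [[rng.gauss(0, 1 / math.sqrt(n)) for _ in range(n)] for _ in range(n)]
    x = [rng.uniform(-1, 1) for _ in range(n)]
    u = [0.0] * n  # no external input
    traj = [x]
    for _ in range(steps):
        x = rnn_step(x, W, u, g)
        traj.append(x)
    return traj
```

    With a small gain the activity contracts to a fixed point at the origin; raising the gain past the edge of stability yields the richer transient dynamics that make RNNs useful as hypotheses for cortical computation.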

  9. Controlling Energy Demand in Mobile Computing Systems

    CERN Document Server

    Ellis, Carla

    2007-01-01

    This lecture provides an introduction to the problem of managing the energy demand of mobile devices. Reducing energy consumption, primarily with the goal of extending the lifetime of battery-powered devices, has emerged as a fundamental challenge in mobile computing and wireless communication. The focus of this lecture is on a systems approach where software techniques exploit state-of-the-art architectural features rather than relying only upon advances in lower-power circuitry or the slow improvements in battery technology to solve the problem. Fortunately, there are many opportunities to i

  10. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  11. The Spartan attitude control system - Ground support computer

    Science.gov (United States)

    Schnurr, R. G., Jr.

    1986-01-01

    The Spartan Attitude Control System (ACS) contains a command and control computer. This computer is optimized for the activities of the flight and contains very little human-interface hardware and software. The computer system provides technicians testing the Spartan ACS with a convenient command-oriented interface to the flight ACS computer. The system also decodes and time-tags data sent out automatically by the flight computer as key events occur. The duration and magnitude of all system maneuvers are also derived and displayed by this system. The Ground Support Computer is also the primary ground support equipment for the flight sequencer, which controls all payload maneuvers and long-term program timing.

  12. Computer system for monitoring power boiler operation

    Energy Technology Data Exchange (ETDEWEB)

    Taler, J.; Weglowski, B.; Zima, W.; Duda, P.; Gradziel, S.; Sobota, T.; Cebula, A.; Taler, D. [Cracow University of Technology, Krakow (Poland). Inst. for Process & Power Engineering

    2008-02-15

    The computer-based boiler performance monitoring system was developed to perform thermal-hydraulic computations of the boiler working parameters in an on-line mode. Measurements of temperatures, heat flux, pressures, mass flowrates, and gas analysis data were used to perform the heat transfer analysis in the evaporator, furnace, and convection pass. A new construction technique of heat flux tubes for determining the heat flux absorbed by membrane water-walls is also presented. The current paper presents the results of heat flux measurement in coal-fired steam boilers. During changes of the boiler load, the necessary natural water circulation must be maintained. A rapid increase of pressure may cause fading of the boiling process in water-wall tubes, whereas a rapid decrease of pressure leads to water boiling in all elements of the boiler's evaporator - water-wall tubes and downcomers. Both cases can cause flow stagnation in the water circulation, leading to pipe cracking. Two flowmeters were assembled on central downcomers, and an investigation of natural water circulation in an OP-210 boiler was carried out. On the basis of these measurements, the maximum rates of pressure change in the boiler evaporator were determined. The on-line computation of the conditions in the combustion chamber allows for real-time determination of the heat flowrate transferred to the power boiler evaporator. Furthermore, with a quantitative indication of surface cleanliness, selective sootblowing can be directed at specific problem areas. A boiler monitoring system is also incorporated to provide details of changes in boiler efficiency and operating conditions following sootblowing, so that the effects of a particular sootblowing sequence can be analysed and optimized at a later stage.
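    The maximum-rate-of-pressure-change check described above can be sketched as a simple on-line monitor; the limits, units, and sampling format here are hypothetical stand-ins, not the OP-210 values:

```python
def pressure_rate_alarms(samples, max_rise, max_drop):
    """Flag sampling intervals whose pressure rate of change exceeds the
    allowed rise or drop limit.

    samples:  time-ordered list of (time_min, pressure_MPa) pairs
    max_rise: largest allowed positive rate (MPa/min)
    max_drop: largest allowed magnitude of negative rate (MPa/min)
    Returns a list of (t_start, t_end, rate) for offending intervals.
    """
    alarms = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        rate = (p1 - p0) / (t1 - t0)  # finite-difference dp/dt
        if rate > max_rise or rate < -max_drop:
            alarms.append((t0, t1, rate))
    return alarms
```

    In a monitoring system such alarms would trigger before pressure transients could cause boiling fade-out or flow stagnation in the circulation loop.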

  13. Engineering Control Systems and Computing in the 1990s

    OpenAIRE

    Casti, J.L.

    1985-01-01

    The relationship between computing hardware/software and engineering control systems is projected into the next decade, and conjectures are made as to the areas of control and system theory that will most benefit from various types of computing advances.

  14. Computer Based Information Systems and the Middle Manager.

    Science.gov (United States)

    Why do some computer-based information systems succeed while others fail? It concludes with eleven recommended areas that middle management must...understand in order to effectively use computer-based information systems . (Modified author abstract)

  15. Potential of Cognitive Computing and Cognitive Systems

    Science.gov (United States)

    Noor, Ahmed K.

    2014-11-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work and for the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized/collaborative learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp

  16. COMPUTER-BASED REASONING SYSTEMS: AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    CIPRIAN CUCU

    2012-12-01

    Argumentation is nowadays seen both as a skill that people use in various aspects of their lives and as an educational technique that can support the transfer or creation of knowledge, thus aiding in the development of other skills (e.g. communication, critical thinking or attitudes). However, teaching argumentation and teaching with argumentation is still a rare practice, mostly due to the lack of available resources such as time or expert human tutors specialized in argumentation. Intelligent computer systems (i.e. systems that implement an inner representation of particular knowledge and try to emulate the behavior of humans) could allow more people to understand the purpose, techniques, and benefits of argumentation. The proposed paper investigates the state-of-the-art concepts of computer-based argumentation used in education and tries to develop a conceptual map showing the benefits, limitations, and relations between various concepts, focusing on the duality "learning to argue - arguing to learn".

  17. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  18. Multiaxis, Lightweight, Computer-Controlled Exercise System

    Science.gov (United States)

    Haynes, Leonard; Bachrach, Benjamin; Harvey, William

    2006-01-01

    The multipurpose, multiaxial, isokinetic dynamometer (MMID) is a computer-controlled system of exercise machinery that can serve as a means for quantitatively assessing a subject's muscle coordination, range of motion, strength, and overall physical condition with respect to a wide variety of forces, motions, and exercise regimens. The MMID is easily reconfigurable and compactly stowable and, in comparison with prior computer-controlled exercise systems, it weighs less, costs less, and offers more capabilities. Whereas a typical prior isokinetic exercise machine is limited to operation in only one plane, the MMID can operate along any path. In addition, the MMID is not limited to the isokinetic (constant-speed) mode of operation. The MMID provides for control and/or measurement of position, force, and/or speed of exertion in as many as six degrees of freedom simultaneously; hence, it can accommodate more complex, more nearly natural combinations of motions and, in so doing, offers greater capabilities for physical conditioning and evaluation. The MMID (see figure) includes as many as eight active modules, each of which can be anchored to a floor, wall, ceiling, or other fixed object. A cable is paid out from a reel in each module to a bar or other suitable object that is gripped and manipulated by the subject. The reel is driven by a DC brushless motor or other suitable electric motor via a gear-reduction unit. The motor can be made to function as either a driver or an electromagnetic brake, depending on the required nature of the interaction with the subject. The module includes a force and a displacement sensor for real-time monitoring of the tension in and displacement of the cable, respectively. In response to commands from a control computer, the motor can be operated to generate a required tension in the cable, to displace the cable a required distance, or to reel the cable in or out at a required speed. The computer can be programmed, either locally or via
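    A tension-regulation loop of the sort described (a motor commanded to hold a required cable tension against a force sensor) might be sketched as a PI controller; `measure`, `apply_torque`, and all gains are hypothetical stand-ins, not the MMID's actual control law:

```python
def pi_tension_controller(setpoint, measure, apply_torque,
                          kp=0.8, ki=0.2, dt=0.01, steps=500):
    """Illustrative PI loop driving a measured cable tension toward a setpoint.

    measure():        reads the force sensor (hypothetical hardware stand-in)
    apply_torque(u):  commands the reel motor (hypothetical hardware stand-in)
    """
    integral = 0.0
    for _ in range(steps):
        error = setpoint - measure()
        integral += error * dt               # accumulate for zero steady-state error
        apply_torque(kp * error + ki * integral)
    return measure()
```

    Against a simple first-order plant model the integral term removes the steady-state offset that a proportional-only controller would leave.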

  19. 14 CFR 415.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  20. Intelligent Computer Vision System for Automated Classification

    Science.gov (United States)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an intelligent computer vision system applied to the recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray-scale images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (feature number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: the combination of feature generation techniques; the application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and the use of a suitable NN design and learning method.
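    The PCA preprocessing step mentioned above can be sketched without any library: power iteration on the sample covariance matrix recovers the first principal component (a minimal illustration, not the authors' implementation):

```python
import math
import random

def first_principal_component(data, iters=200, seed=0):
    """Dominant eigenvector of the sample covariance matrix via power iteration."""
    n, d = len(data), len(data[0])
    # center the data
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    # power iteration from a random unit vector
    rng = random.Random(seed)
    v = [rng.uniform(-1, 1) for _ in range(d)]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

    Projecting feature vectors onto the leading components found this way reduces dimensionality while keeping most of the variance, which is the role PCA plays before the NN classifier.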

  1. Computational dynamics of acoustically driven microsphere systems.

    Science.gov (United States)

    Glosser, Connor; Piermarocchi, Carlo; Li, Jie; Dault, Dan; Shanker, B

    2016-01-01

    We propose a computational framework for the self-consistent dynamics of a microsphere system driven by a pulsed acoustic field in an ideal fluid. Our framework combines a molecular dynamics integrator describing the dynamics of the microsphere system with a time-dependent integral equation solver for the acoustic field that makes use of fields represented as surface expansions in spherical harmonic basis functions. The presented approach allows us to describe the interparticle interaction induced by the field as well as the dynamics of trapping in counter-propagating acoustic pulses. The integral equation formulation leads to equations of motion for the microspheres describing the effect of nondissipative drag forces. We show (1) that the field-induced interactions between the microspheres give rise to effective dipolar interactions, with effective dipoles defined by their velocities and (2) that the dominant effect of an ultrasound pulse through a cloud of microspheres gives rise mainly to a translation of the system, though we also observe both expansion and contraction of the cloud determined by the initial system geometry.
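    The molecular-dynamics integrator at the core of such a framework is typically of velocity-Verlet type; the sketch below uses a one-dimensional harmonic force as a placeholder for the acoustic forces (illustrative only, not the paper's solver):

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Standard velocity-Verlet integration for one degree of freedom."""
    a = force(x) / mass
    traj = [(x, v)]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt      # position update
        a_new = force(x) / mass                 # force at the new position
        v = v + 0.5 * (a + a_new) * dt          # velocity update (averaged accel.)
        a = a_new
        traj.append((x, v))
    return traj
```

    Velocity Verlet is a common choice for such particle dynamics because it is time-reversible and conserves energy well over long runs; in the full framework, the force would come from the time-dependent acoustic field solver rather than a closed-form expression.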

  2. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    Science.gov (United States)

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. This makes it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…
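    The paper's "Derive" functions are not reproduced here, but the same computations can be sketched in another widely available environment; NumPy stands in for the CAS purely for illustration:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])            # 3x2 matrix of full column rank

A_pinv = np.linalg.pinv(A)          # Moore-Penrose inverse (2x3), computed via SVD

# One of the four Penrose conditions: A @ A+ @ A == A
assert np.allclose(A @ A_pinv @ A, A)

# Solving an overdetermined linear system A x = b in the least-squares sense:
b = np.array([1., 2., 3.])
x = A_pinv @ b
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```

    For a consistent square nonsingular system, the Moore-Penrose inverse coincides with the ordinary inverse.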

  3. Assessment of the impact of on-site sanitation systems on groundwater pollution in two diverse geological settings--a case study from India.

    Science.gov (United States)

    Pujari, Paras R; Padmakar, C; Labhasetwar, Pawan K; Mahore, Piyush; Ganguly, A K

    2012-01-01

    On-site sanitation has emerged as a preferred mode of sanitation in cities experiencing rapid urbanization, owing to the high cost of off-site sanitation, which requires conventional sewerage. However, this practice has put severe stress on groundwater, especially its quality. Against this backdrop, a study was undertaken to investigate the impact of on-site sanitation on the quality of groundwater sources in two mega cities, Indore and Kolkata, which are situated in two different geological settings. The parameters for the study are the distance of the groundwater source from the place of sanitation, the effect of summer and monsoon seasons, local hydro-geological conditions, and physico-chemical parameters. NO(3) and fecal coliform concentrations are taken as the main indices of pollution in water. Among the conclusions that can be drawn from this study, a major one is that the influence of on-site sanitation on groundwater quality is minimal in Kolkata, whereas it is significant in Indore. This difference is due to the difference in the hydrogeological settings of the two cities, Kolkata lying on Quaternary alluvium and Indore on Deccan trap of Cretaceous to Paleogene age.

  4. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    . However, high setup and design costs make ASICs economically viable only for high volume production. Therefore, FPGAs are increasingly being used in low and medium volume markets. The evolution of FPGAs has reached a point where multiple processor cores, dedicated accelerators, and a large number...... of interfaces can be integrated on a single device. This thesis consists of five parts that address performance aspects of synthesizable computing systems on FPGAs. First, it is evaluated how synthesizable processor cores can exploit current state-of-the-art FPGA architectures. This evaluation results...... in a processor architecture optimized for a high throughput on modern FPGA architectures. The current hardware implementation, the Tinuso I core, can be clocked as high as 376MHz on a Xilinx Virtex 6 device and consumes fewer hardware resources than similar commercial processor configurations. The Tinuso...

  5. The fundamentals of computational intelligence system approach

    CERN Document Server

    Zgurovsky, Mikhail Z

    2017-01-01

    This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to two important CI technologies: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNN, including a new class of FNN, cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. The applications of FNN to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty and a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, as well as an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of forecasting corporate bankruptcy risk under incomplete and fuzzy information, as well as new methods based on fuzzy sets theory and fuzzy neural networks and results of their application for bankruptcy ris...

  6. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  7. A computing system for LBB considerations

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, K.; Miettinen, J.; Raiko, H.; Keskinen, R.

    1997-04-01

    A computing system has been developed at VTT Energy for making efficient leak-before-break (LBB) evaluations of piping components. The system consists of fracture mechanics and leak rate analysis modules which are linked via an interactive user interface, LBBCAL. The system enables quick tentative analysis of standard geometric and loading situations by means of fracture mechanics estimation schemes such as the R6, FAD, EPRI J, Battelle, plastic limit load and moments methods. Complex situations are handled with a separate in-house finite-element code, EPFM3D, which uses 20-noded isoparametric solid elements, automatic mesh generators and advanced color graphics. Analytical formulas and numerical procedures are available for leak area evaluation. A novel contribution to leak rate analysis is the CRAFLO code, which is based on a nonequilibrium two-phase flow model with phase slip. Its predictions are essentially comparable with those of the well known SQUIRT2 code; additionally, it provides outputs for temperature, pressure and velocity distributions in the crack depth direction. An illustrative application to a circumferentially cracked elbow indicates, as expected, that a small margin relative to the saturation temperature of the coolant reduces the leak rate and is likely to influence the LBB implementation for intermediate diameter (300 mm) primary circuit piping of BWR plants.

  8. Computer vision for driver assistance systems

    Science.gov (United States)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. In the field of driver assistance systems in particular, scientific progress has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach lies in the integrative coupling of different algorithms providing partly redundant information.

  9. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers around the world.

  10. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers around the world.

  11. Reachability computation for hybrid systems with Ariadne

    NARCIS (Netherlands)

    L. Benvenuti; D. Bresolin; A. Casagrande; P.J. Collins (Pieter); A. Ferrari; E. Mazzi; T. Villa; A. Sangiovanni-Vincentelli

    2008-01-01

    Ariadne is an in-progress open environment for designing algorithms that compute with hybrid automata. It relies on a rigorous computable analysis theory to represent geometric objects, in order to achieve provable approximation bounds along the computations. In this paper we discuss the

  12. Genost: A System for Introductory Computer Science Education with a Focus on Computational Thinking

    Science.gov (United States)

    Walliman, Garret

    Computational thinking, the creative thought process behind algorithmic design and programming, is a crucial introductory skill for both computer scientists and the population in general. In this thesis I perform an investigation into introductory computer science education in the United States and find that computational thinking is not effectively taught at either the high school or the college level. To remedy this, I present a new educational system intended to teach computational thinking called Genost. Genost consists of a software tool and a curriculum based on teaching computational thinking through fundamental programming structures and algorithm design. Genost's software design is informed by a review of eight major computer science educational software systems. Genost's curriculum is informed by a review of major literature on computational thinking. In two educational tests of Genost utilizing both college and high school students, Genost was shown to significantly increase computational thinking ability with a large effect size.

  13. A computer control system using a virtual keyboard

    Science.gov (United States)

    Ejbali, Ridha; Zaied, Mourad; Ben Amar, Chokri

    2015-02-01

    This work is in the field of human-computer communication, namely gestural communication. The objective was to develop a system for gesture recognition that can be used to control a computer without a keyboard. The idea is to use a visual panel printed on ordinary paper to communicate with the computer.

  14. 10 CFR 35.657 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.657 Section 35.657... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units § 35.657 Therapy-related computer... computer systems in accordance with published protocols accepted by nationally recognized bodies. At...

  15. Factory automation management computer system and its applications. FA kanri computer system no tekiyo jirei

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, M. (Meidensha Corp., Tokyo (Japan))

    1993-06-11

    A plurality of NC composite lathes used in a breaker manufacturing and processing line were integrated into a system mainly comprising the industrial computer [mu] PORT, a dedicated LAN, and material handling robots. This paper describes this flexible manufacturing system (FMS), which operates on an unmanned basis from process control to material distribution and processing. The system has achieved the following results: efficiency improvement in lines producing a great variety of products in small quantities and in mixed flow production lines; enhancement of facility operating rates by means of group management of NC machine tools; orientation toward development into integrated production systems; expansion of processing capacity; reduction in the number of processes; and reduction in management and indirect manpower. The system allocates the production control plans transmitted from the production control system operated by a host computer to the processes on a daily basis and by machine, using the [mu] PORT. The FMS exploits the multi-task processing function of the [mu] PORT and its ultra high-speed real-time BASIC. The system simultaneously processes process management (such as machining programs and processing results), processing data management, and the operation control of a plurality of machines, achieving systematized machining processes. 6 figs., 2 tabs.

  16. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balancing of loads. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of those functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
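    The split-token idea can be sketched in a few lines; the field names and the lookup scheme below are hypothetical, chosen only to illustrate the moving/resident split described in the abstract:

```python
from dataclasses import dataclass

@dataclass
class ResidentPortion:
    data: bytes                  # bulk data stays in one computer's memory

@dataclass
class MovingPortion:
    function: str                # function the receiving computer should execute
    home_node: int               # which computer holds the resident portion
    resident_key: str            # where in that computer's memory it lives

# Per-node memory holding resident portions (node id -> key -> portion)
memory = {0: {"job-42": ResidentPortion(b"payload")}}

token = MovingPortion(function="checksum", home_node=0, resident_key="job-42")

def execute(token):
    """Run the function named by the moving portion on its resident data."""
    resident = memory[token.home_node][token.resident_key]
    if token.function == "checksum":
        return sum(resident.data) % 256
    raise ValueError(token.function)

print(execute(token))
```

    Only the small moving record travels between computers; it carries the location of the resident data rather than the data itself.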

  17. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  18. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario

    2014-01-01

    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, such as Computer Science, Linguistics, Biology, Economy, Computer Graphics, Robotics, etc. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, being also an efficient means to solve and analyze problems in a novel way. Membrane Computing has been used to model biological systems, becoming with time a thorough modeling paradigm comparable, in its modeling and predicting capabilities, to more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...

  19. COMPUTER APPLICATION SYSTEM FOR OPERATIONAL EFFICIENCY OF DIESEL RAILBUSES

    Directory of Open Access Journals (Sweden)

    Łukasz WOJCIECHOWSKI

    2016-09-01

    Full Text Available The article presents a computer algorithm for estimating the operating costs of a rail bus. The computer application compares the cost of employing a locomotive and wagon, the cost of using locomotives, and the cost of using a rail bus. Intensive growth in passenger railway traffic has increased the demand for modern computer systems for managing means of transportation. The described computer application operates on the basis of selected operating parameters of rail buses.
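    The kind of comparison such an application performs can be sketched as follows; the cost formula and all figures are invented for the example, not taken from the article:

```python
# Hypothetical daily-cost model: fixed cost plus a per-kilometre cost.
def operating_cost(fixed_per_day, cost_per_km, km_per_day, days):
    return days * (fixed_per_day + cost_per_km * km_per_day)

days, km = 30, 400
loco_and_wagon = operating_cost(fixed_per_day=900.0, cost_per_km=6.5,
                                km_per_day=km, days=days)
rail_bus = operating_cost(fixed_per_day=350.0, cost_per_km=2.8,
                          km_per_day=km, days=days)

cheapest = min(("locomotive+wagon", loco_and_wagon),
               ("rail bus", rail_bus), key=lambda p: p[1])
print(cheapest[0])  # → rail bus
```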

  20. Computers as Components Principles of Embedded Computing System Design

    CERN Document Server

    Wolf, Wayne

    2008-01-01

    This book was the first to bring essential knowledge on embedded systems technology and techniques under a single cover. This second edition has been updated to the state-of-the-art by reworking and expanding performance analysis with more examples and exercises, and coverage of electronic systems now focuses on the latest applications. Researchers, students, and savvy professionals schooled in hardware or software design will value Wayne Wolf's integrated engineering design approach. The second edition gives a more comprehensive view of multiprocessors including VLIW and superscalar archite

  1. An operating system for future aerospace vehicle computer systems

    Science.gov (United States)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node unique objects with node common objects in order to implement both the autonomy and the cooperation between nodes is developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  2. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  3. Computational systems analysis of dopamine metabolism.

    Directory of Open Access Journals (Sweden)

    Zhen Qi

    Full Text Available A prominent feature of Parkinson's disease (PD) is the loss of dopamine in the striatum, and many therapeutic interventions for the disease are aimed at restoring dopamine signaling. Dopamine signaling includes the synthesis, storage, release, and recycling of dopamine in the presynaptic terminal and activation of pre- and post-synaptic receptors and various downstream signaling cascades. As an aid that might facilitate our understanding of dopamine dynamics in the pathogenesis and treatment of PD, we have begun to merge currently available information and expert knowledge regarding presynaptic dopamine homeostasis into a computational model, following the guidelines of biochemical systems theory. After subjecting our model to mathematical diagnosis and analysis, we made direct comparisons between model predictions and experimental observations and found that the model exhibited a high degree of predictive capacity with respect to genetic and pharmacological changes in gene expression or function. Our results suggest potential approaches to restoring the dopamine imbalance and the associated generation of oxidative stress. While the proposed model of dopamine metabolism is preliminary, future extensions and refinements may eventually serve as an in silico platform for prescreening potential therapeutics, identifying immediate side effects, screening for biomarkers, and assessing the impact of risk factors of the disease.
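    The flavour of such a model can be conveyed with a toy mass-action sketch: a cytosolic dopamine pool with constant synthesis, release into the extracellular space, reuptake, and metabolic degradation. The compartments and all rate constants are invented for illustration; the published model is far richer:

```python
# Forward-Euler integration of a two-compartment toy model.
def simulate(steps=10000, dt=0.01, synthesis=1.0,
             release=0.2, reuptake=0.15, degrade=0.5):
    cyt, ext = 0.0, 0.0
    for _ in range(steps):
        d_cyt = synthesis - release * cyt + reuptake * ext
        d_ext = release * cyt - (reuptake + degrade) * ext
        cyt += dt * d_cyt
        ext += dt * d_ext
    return cyt, ext

cyt, ext = simulate()
# Analytic steady state: ext = synthesis/degrade = 2.0,
# cyt = (reuptake + degrade) * ext / release = 6.5
print(round(cyt, 2), round(ext, 2))  # → 6.5 2.0
```

    In the spirit of biochemical systems theory, a perturbation such as reduced synthesis or blocked reuptake can be explored simply by changing the corresponding rate constant and re-running the simulation.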

  4. Lightness computation by the human visual system

    Science.gov (United States)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
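    The Land-McCann-style core of such a computation (summing log-luminance ratios at edges along a path, with different gains for incremental and decremental steps) can be sketched as follows; the weights are illustrative, not the model's fitted values, and the gain-control and windowing mechanisms are omitted:

```python
import math

def relative_lightness(luminances, w_increment=1.0, w_decrement=1.3):
    """Sum weighted log10 luminance ratios (edge steps) along a path."""
    total = 0.0
    for prev, cur in zip(luminances, luminances[1:]):
        step = math.log10(cur / prev)
        # Decremental steps get a different inherent gain than incremental ones.
        total += (w_increment if step > 0 else w_decrement) * step
    return total

# Path from a white anchor (100 cd/m^2) through a mid gray to the target:
path = [100.0, 30.0, 60.0]
print(round(relative_lightness(path), 3))  # → -0.379
```

    With equal weights the result would depend only on the endpoints; the asymmetric gains make the intervening edges matter, which is one way such a model departs from plain ratio integration.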

  5. Quantum Computing in Fock Space Systems

    Science.gov (United States)

    Berezin, Alexander A.

    1997-04-01

    Fock space system (FSS) has an unfixed number (N) of particles and/or degrees of freedom. In quantum computing (QC) the main requirement is the sustainability of coherent Q-superpositions. This is normally favoured by a low-noise environment. The high excitation/high temperature (T) limit is hence discarded as unfeasible for QC. Conversely, if N is itself a quantized variable, the dimensionality of the Hilbert basis for qubits may increase faster (say, N-exponentially) than thermal noise (likely, in powers of N and T). Hence coherency may win over T-randomization. For this type of QC the speed (S) of factorization of long integers (with D digits) may increase with D (for 'ordinary' QC speed polynomially decreases with D). This (apparent) paradox rests on non-monotonic bijectivity (cf. Georg Cantor's diagonal counting of rational numbers). This brings the entire aleph-null structurality ("Babylonian Library" of infinite informational content of the integer field) to the superposition determining the state of a quantum analogue of the Turing machine head. The structure of integer infinitude (e.g. distribution of primes) results in direct "Platonic pressure" resembling a semi-virtual Casimir effect (pressure of cut-off vibrational modes). This "effect", the embodiment of the Pythagorean "Number is everything", renders the Gödelian barrier arbitrarily thin, and hence FSS-based QC can in principle be unlimitedly efficient (e.g. D/S may tend to zero as D tends to infinity).

  6. Context-aware computing and self-managing systems

    CERN Document Server

    Dargie, Waltenegus

    2009-01-01

    Bringing together an extensively researched area with an emerging research issue, Context-Aware Computing and Self-Managing Systems presents the core contributions of context-aware computing in the development of self-managing systems, including devices, applications, middleware, and networks. The expert contributors reveal the usefulness of context-aware computing in developing autonomous systems that have practical application in the real world.The first chapter of the book identifies features that are common to both context-aware computing and autonomous computing. It offers a basic definit

  7. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, has occurred. Duration selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appears to contribute to selective responses.
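    The interval-counting behaviour described above can be caricatured in a few lines; the threshold, optimal interval, tolerance, and reset rule are invented for the example, not taken from the review:

```python
def interval_counting_neuron(pulse_times, optimal=50.0, tolerance=10.0, threshold=4):
    """Spike only after `threshold` consecutive near-optimal inter-pulse intervals."""
    spikes, count = [], 0
    for prev, cur in zip(pulse_times, pulse_times[1:]):
        interval = cur - prev
        if abs(interval - optimal) <= tolerance:
            count += 1
        else:
            count = 0                      # a deviant interval resets the count
        if count >= threshold:
            spikes.append(cur)
    return spikes

# Pulses every 50 ms: the model neuron fires from the 5th pulse onward.
times = [0, 50, 100, 150, 200, 250]
print(interval_counting_neuron(times))  # → [200, 250]
```

    A long-interval cell would be the opposite caricature: it responds when intervals exceed some minimum and falls silent at fast pulse rates.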

  8. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  10. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    Report Title: A Heterogeneous High-Performance System for Computational and Computer Science. The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the... This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase... Computing (HPC) course taught in the department of computer science so as to attract more graduate students from many disciplines where their research

  11. Efficient on-site construction

    DEFF Research Database (Denmark)

    Thuesen, Christian Langhoff; Hvam, Lars

    2011-01-01

    Purpose – This research aims to analyse the implementation of a German platform for housing projects through a successful case on modern methods of construction featuring efficient on-site construction. Through continuous development, the platform has been carefully designed to suit a carefully s...

  12. Computer controlled vent and pressurization system

    Science.gov (United States)

    Cieslewicz, E. J.

    1975-01-01

    The Centaur space launch vehicle airborne computer, which was primarily used to perform guidance, navigation, and sequencing tasks, was further used to monitor and control inflight pressurization and venting of the cryogenic propellant tanks. Computer software flexibility also provided a failure detection and correction capability necessary to adopt and operate redundant hardware techniques and enhance the overall vehicle reliability.

  13. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. Also we...

  14. The hack attack - Increasing computer system awareness of vulnerability threats

    Science.gov (United States)

    Quann, John; Belford, Peter

    1987-01-01

    The paper discusses the issue of electronic vulnerability of computer based systems supporting NASA Goddard Space Flight Center (GSFC) by unauthorized users. To test the security of the system and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The procedure increased the security consciousness of GSFC management to the electronic vulnerability of the system(s).

  15. PLAID- A COMPUTER AIDED DESIGN SYSTEM

    Science.gov (United States)

    Brown, J. W.

    1994-01-01

    PLAID is a three-dimensional Computer Aided Design (CAD) system which enables the user to interactively construct, manipulate, and display sets of highly complex geometric models. PLAID was initially developed by NASA to assist in the design of Space Shuttle crewstation panels, and the detection of payload object collisions. It has evolved into a more general program for convenient use in many engineering applications. Special effort was made to incorporate CAD techniques and features which minimize the user's workload in designing and managing PLAID models. PLAID consists of three major modules: the Primitive Object Generator (BUILD), the Composite Object Generator (COG), and the DISPLAY Processor. The BUILD module provides a means of constructing simple geometric objects called primitives. The primitives are created from polygons which are defined either explicitly by vertex coordinates, or graphically by use of terminal crosshairs or a digitizer. Solid objects are constructed by combining, rotating, or translating the polygons. Corner rounding, hole punching, milling, and contouring are special features available in BUILD. The COG module hierarchically organizes and manipulates primitives and other previously defined COG objects to form complex assemblies. The composite object is constructed by applying transformations to simpler objects. The transformations which can be applied are scalings, rotations, and translations. These transformations may be defined explicitly or defined graphically using the interactive COG commands. The DISPLAY module enables the user to view COG assemblies from arbitrary viewpoints (inside or outside the object) both in wireframe and hidden line renderings. The PLAID projection of a three-dimensional object can be either orthographic or with perspective. A conflict analysis option enables detection of spatial conflicts or collisions. DISPLAY provides camera functions to simulate a view of the model through different lenses. Other
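The hierarchical transformation scheme described for the COG module (composite objects built by applying scalings, rotations, and translations to simpler objects) can be illustrated with a minimal sketch. This is hypothetical code for the general technique, not actual PLAID source; all function names are invented.

```python
import numpy as np

# Composite-object transforms expressed as 4x4 homogeneous matrices,
# composed by matrix multiplication (rightmost transform applies first).

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def translation(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# Scale a primitive by 2, rotate it 90 degrees about z, then place it at x = 5.
compose = translation(5, 0, 0) @ rotation_z(np.pi / 2) @ scaling(2, 2, 2)
vertex = np.array([1.0, 0.0, 0.0, 1.0])  # one vertex of a primitive (homogeneous)
print(compose @ vertex)                   # -> approximately [5, 2, 0, 1]
```

Applying the same composed matrix to every vertex of a primitive is what lets a CAD system reposition whole sub-assemblies without touching their internal geometry.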

  16. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  17. High-Speed Computer-Controlled Switch-Matrix System

    Science.gov (United States)

    Spisz, E.; Cory, B.; Ho, P.; Hoffman, M.

    1985-01-01

    High-speed computer-controlled switch-matrix system developed for communication satellites. Satellite system controlled by onboard computer and all message-routing functions between uplink and downlink beams handled by newly developed switch-matrix system. Message requires only 2-microsecond interconnect period, repeated every millisecond.

  18. Granular computing analysis and design of intelligent systems

    CERN Document Server

    Pedrycz, Witold

    2013-01-01

    Information granules, as encountered in natural language, are implicit in nature. To make them fully operational so they can be effectively used to analyze and design intelligent systems, information granules need to be made explicit. An emerging discipline, granular computing focuses on formalizing information granules and unifying them to create a coherent methodological and developmental environment for intelligent system design and analysis. Granular Computing: Analysis and Design of Intelligent Systems presents the unified principles of granular computing along with its comprehensive algo

  19. Computational Modeling of Flow Control Systems for Aerospace Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop computational methods for designing active flow control systems on aerospace vehicles with the primary objective of...

  20. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network, and the widespread use of software for design and pre-production in mechanical engineering have led to the fact that, at the present time, large industrial enterprises and small engineering companies implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficiently distributing (balancing) the computational load and accommodating input, intermediate, and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which the user's request is routed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
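The balancing tasks named in this abstract (monitor node load, then select a node for each incoming request by a predetermined algorithm) can be sketched with one of the simplest such algorithms, greedy least-loaded selection. The classes and names below are invented for illustration and are not from the paper.

```python
# Minimal sketch of a load balancer: a dispatcher tracks per-node load
# and routes each new task to the node with the smallest current load.

class Node:
    def __init__(self, name):
        self.name = name
        self.load = 0          # number of tasks currently assigned to this node

def select_node(nodes):
    """Predetermined algorithm: pick the least-loaded node (greedy policy)."""
    return min(nodes, key=lambda n: n.load)

def dispatch(nodes, n_tasks):
    for _ in range(n_tasks):
        node = select_node(nodes)
        node.load += 1         # in a real system the node would execute the task

nodes = [Node("n1"), Node("n2"), Node("n3")]
dispatch(nodes, 9)
print([n.load for n in nodes])  # -> [3, 3, 3]: the load is spread evenly
```

Real schedulers replace the `load` counter with monitored CPU, memory, and queue-length metrics, but the selection step keeps this shape.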

  1. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems....

  2. Top 10 Threats to Computer Systems Include Professors and Students

    Science.gov (United States)

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  4. Bringing the CMS distributed computing system into scalable operations

    CERN Document Server

    Belforte, S; Fisk, I; Flix, J; Hernández, J M; Kress, T; Letts, J; Magini, N; Miccio, V; Sciabà, A

    2010-01-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure an...

  5. A Survey of Civilian Dental Computer Systems.

    Science.gov (United States)

    1988-01-01

    ...marketplace, the orthodontic community continued to pioneer clinical automation through diagnosis, treat... (1) patient registration, identification... profession." New York State Dental Journal 34:76, 1968. 17. Ehrlich, A., The Role of Computers in Dental Practice Management. Champaign, IL: Colwell... Council on Dental Practice. Report: Dental Computer Vendors. 1984... military dental clinic. Medical Bulletin of the US Army Europe 39:14-16, 1982. 19

  6. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  7. A computational design system for rapid CFD analysis

    Science.gov (United States)

    Ascoli, E. P.; Barson, S. L.; Decroix, M. E.; Sindir, Munir M.

    1992-01-01

    A computational design system (CDS) is described in which computational analysis tools are integrated in a modular fashion. This CDS ties together four key areas of computational analysis: description of geometry, grid generation, computational codes, and postprocessing. Integrating improved computational fluid dynamics (CFD) analysis tools with the CDS has had a significant positive impact on the use of CFD for engineering design problems. Complex geometries are now analyzed on a frequent basis and with greater ease.

  8. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  9. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    Science.gov (United States)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
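The core idea this abstract relies on, rejection-ABC with simulation-based inference, can be sketched generically. This is textbook rejection ABC, not the authors' framework; the model (a scalar decay system dx/dt = -theta*x), the prior range, and the tolerance are invented for illustration.

```python
import random

# Rejection ABC: draw candidate parameters from a prior, forward-simulate the
# model, and keep candidates whose simulated output is close to the observed
# data. Accepted draws are samples from an approximate posterior.

def simulate(theta, x0=1.0, dt=0.01, steps=200):
    """Forward-simulate dx/dt = -theta * x by explicit Euler integration."""
    x, traj = x0, []
    for _ in range(steps):
        x += dt * (-theta * x)
        traj.append(x)
    return traj

def distance(a, b):
    return max(abs(u - v) for u, v in zip(a, b))

random.seed(0)
observed = simulate(theta=2.0)               # synthetic "measured" trajectory

accepted = []
for _ in range(2000):
    theta = random.uniform(0.0, 5.0)         # draw from a uniform prior
    if distance(simulate(theta), observed) < 0.05:   # tolerance epsilon
        accepted.append(theta)

estimate = sum(accepted) / len(accepted)     # posterior mean, roughly 2.0
print(round(estimate, 2))
```

Note how the approach never estimates signal derivatives from data, it only simulates forward, which is the advantage the abstract highlights for continuous-time identification.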

  10. Design technologies for green and sustainable computing systems

    CERN Document Server

    Ganguly, Amlan; Chakrabarty, Krishnendu

    2013-01-01

    This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high-performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking. The book offers readers a single-source reference for addressing the challenges of power efficiency and sustainability in embedded computing systems; provides in-depth coverage of the key underlying design technologies for green and sustainable computing; and covers a wide range of topics, from chip-level design to architectures, computing systems, and networks.

  11. A comparison of queueing, cluster and distributed computing systems

    Science.gov (United States)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  12. Computer Generated Hologram System for Wavefront Measurement System Calibration

    Science.gov (United States)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  13. The Cc1 Project – System For Private Cloud Computing

    Directory of Open Access Journals (Sweden)

    J Chwastowski

    2012-01-01

    The main features of the Cloud Computing system developed at IFJ PAN are described. The project is financed from the structural resources provided by the European Commission and the Polish Ministry of Science and Higher Education (Innovative Economy, National Cohesion Strategy). The system delivers a solution for carrying out computer calculations on a Private Cloud computing infrastructure. It consists of an intuitive Web-based user interface, a module for user and resource administration, and a standard EC2 interface implementation. Thanks to the distributed character of the system, it allows for the integration of a geographically distant federation of computer clusters within a uniform user environment.

  14. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a new emerging technology that has been used in other industries with great success. Despite its attractive features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed system applies cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment.

  15. A Brief Talk on Teaching Reform Program of Computer Network Course System about Computer Related Professional

    Institute of Scientific and Technical Information of China (English)

    Wang Jian-Ping; Huang Yong

    2008-01-01

    The computer network course is a core required course for college computer-related majors. Analysis of current teaching conditions shows that the teaching of this course has not formed a complete system: new knowledge points cannot be added promptly, while outdated technology remains in the teaching. The article describes the current situation and problems that appear in university teaching for computer network related majors, and presents teaching systems and teaching reform schemes for the computer network course.

  16. Mechanisms of protection of information in computer networks and systems

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2011-10-01

    Protocols of information protection in computer networks and systems are investigated. The basic types of threats to protection arising from the use of computer networks are classified. The basic mechanisms, services, and variants of realization of cryptosystems for maintaining the authentication, integrity, and confidentiality of transmitted information are examined, and their advantages and drawbacks are described. Perspective directions for the development of cryptographic transformations for information protection in computer networks and systems are defined and analyzed.
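One standard mechanism behind the authentication and integrity services this record surveys is the keyed message authentication code (MAC). A minimal sketch using Python's standard `hmac` module follows; the key and message are made-up values, and real protocols wrap this in key management and transport framing.

```python
import hashlib
import hmac

# A MAC lets a receiver who shares the secret key verify that a message
# was produced by a key holder and was not altered in transit.

key = b"shared-secret-key"          # hypothetical pre-shared key
message = b"telemetry packet 42"    # hypothetical message

# Sender computes the authentication tag over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))             # -> True  (authentic, intact)
print(verify(key, b"tampered packet", tag))  # -> False (integrity violated)
```

Confidentiality, the third service named in the abstract, is provided separately by encryption; MACs only detect tampering and forgery.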

  17. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat and a research focus in network information security. New viruses keep emerging, the number of viruses is growing, and virus classification is increasingly complex. Virus naming cannot be unified because capture times differ between agencies. Although each agency has its own virus database, communication between them is lacking, virus information is incomplete, or only a small number of samples is available. This paper introduces the current construction status of virus databases at home and abroad, analyzes how to standardize and completely describe virus characteristics, and then gives a computer virus database design scheme covering information integrity, storage security, and manageability.

  18. Sensor fusion control system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM

    2007-08-01

    ...of products in unpredictable quantities. Computer Integrated Manufacturing (CIM) systems play an important role towards integrating such flexible systems. This paper presents a methodology for increasing the flexibility and reusability of a generic CIM cell...

  19. Computer-Based Integrated Learning Systems: Research and Theory.

    Science.gov (United States)

    Hativa, Nira, Ed.; Becker, Henry Jay, Ed.

    1994-01-01

    The eight chapters of this theme issue discuss recent research and theory concerning computer-based integrated learning systems. Following an introduction about their theoretical background and current use in schools, the effects of using computer-based integrated learning systems in the elementary school classroom are considered. (SLD)

  20. Entrepreneurial Health Informatics for Computer Science and Information Systems Students

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Narula, Stuti

    2014-01-01

    Corporate entrepreneurship is a critical area of curricula for computer science and information systems students. Few institutions of computer science and information systems have entrepreneurship in the curricula however. This paper presents entrepreneurial health informatics as a course in a concentration of Technology Entrepreneurship at a…

  1. On the Computation of Lyapunov Functions for Interconnected Systems

    DEFF Research Database (Denmark)

    Sloth, Christoffer

    2016-01-01

    This paper addresses the computation of additively separable Lyapunov functions for interconnected systems. The presented results can be applied to reduce the complexity of the computations associated with stability analysis of large scale systems. We provide a necessary and sufficient condition...
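A minimal numerical illustration of the concept, not the paper's method: for a small interconnected linear system, an additively separable candidate V(x) = V1(x1) + V2(x2) certifies stability if its derivative along the vector field is negative away from the origin. The two-subsystem example below is invented for illustration.

```python
import random

# Interconnected linear system (two coupled stable subsystems):
#   dx1/dt = -x1 + 0.5*x2
#   dx2/dt =  0.5*x1 - x2
# Additively separable Lyapunov candidate: V(x) = x1^2 + x2^2.

def V(x1, x2):
    return x1**2 + x2**2          # V1(x1) + V2(x2), one term per subsystem

def Vdot(x1, x2):
    dx1 = -x1 + 0.5 * x2
    dx2 = 0.5 * x1 - x2
    # Gradient of V dotted with the vector field:
    return 2 * x1 * dx1 + 2 * x2 * dx2

# Spot-check negativity of Vdot on random nonzero states.
random.seed(1)
ok = all(Vdot(random.uniform(-10, 10), random.uniform(-10, 10)) < 0
         for _ in range(10000))
print(ok)  # -> True: V decreases along the flow at every sampled state
```

Here Vdot simplifies to -2*(x1**2 - x1*x2 + x2**2), which is negative for all nonzero states; the point of separable constructions is that each V_i depends only on its own subsystem's state, which keeps the computation tractable for large-scale interconnections.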

  2. Software For Computer-Aided Design Of Control Systems

    Science.gov (United States)

    Wette, Matthew

    1994-01-01

    Computer Aided Engineering System (CAESY) software developed to provide means to evaluate methods for dealing with users' needs in computer-aided design of control systems. Interpreter program for performing engineering calculations. Incorporates features of both Ada and MATLAB. Designed to be flexible and powerful. Includes internally defined functions, procedures and provides for definition of functions and procedures by user. Written in C language.

  3. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3 dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  4. Experiments and simulation models of a basic computation element of an autonomous molecular computing system.

    Science.gov (United States)

    Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira

    2008-10-01

    Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.

  5. Mechatronic sensory system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM

    2007-05-01

    Computer Integrated Manufacturing (CIM) systems play an important role towards integrating such flexible systems. The requirement for fast and cheap design and redesign of manufacturing systems is therefore gaining in importance, considering not only the products and the physical...

  6. The Simud-Tiu Valles hydrologic system: A multidisciplinary study of a possible site for future Mars on-site exploration

    Science.gov (United States)

    Pajola, Maurizio; Rossato, Sandro; Baratti, Emanuele; Mangili, Clara; Mancarella, Francesca; McBride, Karen; Coradini, Marcello

    2016-04-01

    When looking for traces of past life on Mars, we have to look primarily for places where water was present, possibly for long time intervals. The Simud and Tiu Valles are two large outflow channels connected to the north with the Chryse Basin, Oxia Palus quadrangle. The area, carved by water during the Noachian/Early Hesperian, is characterized by a complex geological evolution. The geomorphological analysis shows the presence of fluvial and alluvial structures, interpreted as fluvial channels and terraces, debris flow fronts, and short-lasting small water flows coexisting with maar-diatremes and mud volcanoes. Several morphological features indicate a change in water flux direction after the main erosive phase. During this period water originated from the Masursky crater and flowed southwards into the Hydraotes Chaos. This phenomenon caused the studied area to become a depocenter where fine-grained material deposition took place, possibly in association with ponding water. This setting is potentially quite valuable, as traces of life may have been preserved. The presence of water at various times over a period of about 1 Ga in the area is corroborated by mineralogical analyses of different areas that indicate the possible presence of hydrated mineral mixtures, such as sulfate-bearing deposits. Given the uniqueness of the evolution of this region, the long-term interactions between fluvial, volcanic, and tectonic processes, and its extremely favorable landing parameters (elevation, slope, roughness, rock distribution, thermal inertia, albedo, etc.), we decided to propose this location as a possible landing site for the ESA ExoMars 2018, the NASA Mars 2020, and future on-site missions.

  7. The Use of Explosion Aftershock Probabilities for Planning and Deployment of Seismic Aftershock Monitoring System for an On-site Inspection

    Science.gov (United States)

    Labak, P.; Ford, S. R.; Sweeney, J. J.; Smith, A. T.; Spivak, A.

    2011-12-01

    One of the four elements of the CTBT verification regime is On-site Inspection (OSI). Since the sole purpose of an OSI shall be to clarify whether a nuclear weapon test explosion or any other nuclear explosion has been carried out, inspection activities can be conducted and techniques used in order to collect facts to support the findings provided in inspection reports. Passive seismological monitoring, realized by seismic aftershock monitoring (SAMS), is one of the treaty-allowed techniques during an OSI. Effective planning and deployment of SAMS during the early stages of an OSI is required due to the nature of the possible events recorded and due to the treaty-related constraints on the size of the inspection area, the size of the inspection team, and the length of an inspection. A method which may help in planning the SAMS deployment is presented. An estimate of aftershock activity due to a theoretical underground nuclear explosion is produced using a simple aftershock rate model (Ford and Walter, 2010). The model is developed with data from the Nevada Test Site and the Semipalatinsk Test Site, which we take to represent soft-rock and hard-rock testing environments, respectively. Estimates of the expected magnitude and number of aftershocks are calculated using the models for different testing and inspection scenarios. These estimates can help to plan the SAMS deployment for an OSI by giving a probabilistic assessment of potential aftershocks in the Inspection Area (IA). The aftershock assessment, combined with an estimate of the background seismicity in the IA and an empirically derived map of threshold magnitude for the SAMS network, could aid the OSI team in reporting. We applied the hard-rock model to a scenario similar to the 2008 Integrated Field Exercise deployment in Kazakhstan and produced an estimate of possible recorded aftershock activity.

  8. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  9. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  10. Central Computer IMS Processing System (CIMS).

    Science.gov (United States)

    Wolfe, Howard

    As part of the IMS Version 3 tryout in 1971-72, software was developed to enable data submitted by IMS users to be transmitted to the central computer, which acted on the data to create IMS reports and to update the Pupil Data Base with criterion exercise and class roster information. The program logic is described, and the subroutines and…

  11. Cloud Computing Based E-Learning System

    Science.gov (United States)

    Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.

    2010-01-01

    Cloud computing technologies although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft office applications, such as word processing, excel spreadsheet, access database…

  13. Evaluation of computer-based ultrasonic inservice inspection systems

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into nine sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  14. Cloud Computing for Network Security Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Jin Yang

    2013-01-01

    Full Text Available In recent years, cloud computing, a new distributed computing model, has developed rapidly and become a focus of academia and industry. Security, however, remains a critical problem for most enterprise customers of cloud computing. In the current network environment, relying on a single terminal to check for Trojans and viruses is increasingly unreliable. This paper analyzes the characteristics of current cloud computing and then proposes a comprehensive real-time network risk evaluation model for cloud computing based on the correspondence between artificial immune system antibodies and pathogen invasion intensity. The paper also combines an asset evaluation system with a network integration evaluation system, considering factors at the application layer, the host layer, and the network layer that may affect network risk. The experimental results show that this model improves the ability of intrusion detection and can support the security of current cloud computing.

  15. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using helical CT scans from lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a Web-based medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used on site in telemedicine makes file encryption and login verification effective, so patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our film-free radiological information system, using the computer-aided diagnosis workstation and our telemedicine network, can increase diagnostic speed and accuracy and improve the security of medical information.

  16. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary aerospace vehicle designs: offline graphics systems using vellum-inking or photographic processes; online graphics systems characterized by directly coupled, low-cost storage-tube terminals with limited interactive capabilities; and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  17. A concept for planning and management of on-site and centralised municipal wastewater treatment systems, a case study in Bangkok, Thailand. II: scenario-based pollutant load analysis.

    Science.gov (United States)

    Tsuzuki, Yoshiaki; Koottatep, Thammarat; Sinsupan, Thitiphon; Jiawkok, Supattra; Wongburana, Chira; Wattanachira, Suraphong; Sarathai, Yuttachai

    2013-01-01

    Scenario-based pollutant load analysis was conducted to develop part of a concept for planning and management of wastewater treatment systems (WWTSs) under the mixture conditions of centralised and on-site WWTSs. Pollutant discharge indicators and pollutant removal efficiency functions were applied from another paper in the series, developed from the existing conditions in urban and peri-urban areas of Bangkok, Thailand. Two scenarios were developed to describe development directions of the mixture conditions: Scenario 1 keeps the on-site wastewater treatment plants (WWTPs) within the areas served by centralised WWTSs, while Scenario 2 divides the centralised and on-site WWTS areas. Comparison of the smallest values of total pollutant discharge per capita (PDCtotal) between Scenarios 1 and 2 showed that the smallest PDCtotal in Scenario 1 was smaller than that in Scenario 2 for biological oxygen demand, chemical oxygen demand and total phosphorus, whereas the smallest PDCtotal in Scenario 2 was smaller than that in Scenario 1 for total nitrogen, total coliforms and faecal coliforms. The results suggest that the mixture conditions could be a possible reason for smaller pollutant concentrations at centralised WWTPs. Quantitative scenario-based estimation of PDCtotal is useful and a prerequisite in planning and management of WWTSs.

  18. Security for small computer systems a practical guide for users

    CERN Document Server

    Saddington, Tricia

    1988-01-01

    Security for Small Computer Systems: A Practical Guide for Users is a guidebook for security concerns for small computers. The book provides security advice for the end-users of small computers in different aspects of computing security. Chapter 1 discusses the security and threats, and Chapter 2 covers the physical aspect of computer security. The text also talks about the protection of data, and then deals with the defenses against fraud. Survival planning and risk assessment are also encompassed. The last chapter tackles security management from an organizational perspective. The bo

  19. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
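    The computational burden such GPU-cluster systems attack is the per-pixel superposition of spherical waves from every object point. Below is a minimal, unoptimized NumPy sketch of the standard point-source CGH calculation; it illustrates the operation being parallelized, not the authors' optimized implementation, and the geometry parameters are illustrative.

    ```python
    import numpy as np

    def cgh_phase(points, width, height, pitch=1e-5, wavelength=532e-9):
        """Naive point-source CGH: superpose a spherical wave from each 3-D
        object point onto every hologram pixel, then keep the phase of the
        total field. points: iterable of (x, y, z, amplitude) in metres."""
        ys, xs = np.mgrid[0:height, 0:width]
        px = (xs - width / 2) * pitch       # pixel coordinates on the hologram
        py = (ys - height / 2) * pitch
        k = 2 * np.pi / wavelength          # wavenumber
        field = np.zeros((height, width), dtype=np.complex128)
        for x, y, z, a in points:
            r = np.sqrt((px - x) ** 2 + (py - y) ** 2 + z ** 2)
            field += a * np.exp(1j * k * r) / r
        return np.angle(field)              # phase-only hologram in [-pi, pi]

    # Two object points 0.1 m behind a small 64x64 hologram.
    phase = cgh_phase([(0, 0, 0.1, 1.0), (1e-4, 0, 0.1, 1.0)], 64, 64)
    print(phase.shape)
    ```

    At full resolution (6,400×3,072 pixels, thousands of points) this loop is exactly the work that must be distributed across the GPU cluster.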

  20. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance of three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors, which causes the backup execution (the processor state and the sequence of events occurring) to differ from the original execution, is a major reason for the measured software fault tolerance. The IBM/MVS system's fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  1. TRL Computer System User’s Guide

    Energy Technology Data Exchange (ETDEWEB)

    Engel, David W.; Dalton, Angela C.

    2014-01-31

    We have developed a wiki-based graphical user-interface system that implements our technology readiness level (TRL) uncertainty models. This document contains the instructions for using this wiki-based system.

  2. Computer Sciences and Data Systems, volume 1

    Science.gov (United States)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  3. EVALUATION & TRENDS OF SURVEILLANCE SYSTEM NETWORK IN UBIQUITOUS COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-03-01

    Full Text Available With the emergence of ubiquitous computing, the whole scenario of computing has changed, affecting many interdisciplinary fields. This paper considers the impact of ubiquitous computing on video surveillance systems. With increasing population and highly security-sensitive areas, intelligent monitoring is a major requirement of the modern world. The paper describes the evolution of surveillance systems from analog to multi-sensor ubiquitous systems and notes the demand for context-based architectures. It outlines the benefit of merging in cloud computing to boost surveillance systems while reducing cost and maintenance, and analyzes some surveillance system architectures designed for ubiquitous deployment. It identifies major challenges and opportunities for researchers to make surveillance systems highly efficient and seamlessly embedded in our environments.

  4. Information Hiding based Trusted Computing System Design

    Science.gov (United States)

    2014-07-18

    and the environment where the system operates (electrical network frequency signals), and how to improve the trust in a wireless sensor network with...the system (silicon PUF) and the environment where the system operates (ENF signals). We also study how to improve the trust in a wireless sensor...Harbin Institute of Technology, Shenzhen, China, May 26, 2013. (Host: Prof. Aijiao Cui) 13) "Designing Trusted Energy-Efficient Circuits and Systems

  5. Training Artisans On-Site

    Directory of Open Access Journals (Sweden)

    Edoghogho Ogbeifun

    2011-09-01

    Full Text Available The decline in apprenticeship in both the public and private sectors, the increasing use of sub-contractors, and the uncoordinated approach in the informal sector are contributing factors to the shortage of skilled artisans in the construction industry. Artisan training can be introduced and implemented through progressive implementation of construction processes, commencing with work requiring low skill and moving to areas of high skill demand. The success of this principle hinges on the collaborative effort of the key project stakeholders: the client should be willing to absorb extra cost and delays in the project; the design and contract documentation should facilitate on-site training; and the consultant should actively guide the contractor and the construction processes to achieve the training objectives. An exploratory research method was adopted in this study, and the research revealed that this principle was used in a project in the UK and in the development of infrastructure in the tourism industry of South Africa. It is recommended that the principle be adopted by the public sector for the development of small infrastructure projects that can be repeated in many places. This will boost the quality and quantity of artisans, enhance employability, reduce rural-urban migration, and alleviate poverty. Keywords: skilled artisans, on-site training, progressive construction processes, project stakeholders, contract documentation.

  6. On-site observations of physical work demands of train conductors and service electricians in the Netherlands.

    NARCIS (Netherlands)

    Botje, D.; Zoer, I.; Ruitenburg, M.M.; Frings-Dresen, H.W.; Sluiter, J.K.

    2010-01-01

    The objective of the present study was to assess the exposure to physical work demands of train conductors and service electricians at a railway company in the Netherlands. On-site observations were performed using the Task Recording and Analysis on Computer observation system to identify the mean

  7. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  8. Automated fermentation equipment. 2. Computer-fermentor system

    Energy Technology Data Exchange (ETDEWEB)

    Nyeste, L.; Szigeti, L.; Veres, A.; Pungor, E. Jr.; Kurucz, I.; Hollo, J.

    1981-02-01

    An inexpensive computer-operated system suitable for data collection and steady-state optimum control of fermentation processes is presented. With this system, minimum generation time has been determined as a function of temperature and pH in the turbidostat cultivation of a yeast strain. The applicability of the computer-fermentor system is also presented by the determination of the dynamic Kla value.

  9. Managing trust in information systems by using computer simulations

    OpenAIRE

    Zupančič, Eva

    2009-01-01

    The human factor is increasingly important in new information systems and should be taken into consideration when developing new systems. Trust issues, which are tightly tied to the human factor, are becoming an important topic in computer science. In this work we study trust in IT systems and present computer-based trust management solutions. After a review of qualitative and quantitative methods for trust management, a precise description of a simulation tool for trust management ana...

  10. Personal Computer System for Automatic Coronary Venous Flow Measurement

    OpenAIRE

    Dew, Robert B.

    1985-01-01

    We developed an automated system based on an IBM PC/XT Personal computer to measure coronary venous blood flow during cardiac catheterization. Flow is determined by a thermodilution technique in which a cold saline solution is infused through a catheter into the coronary venous system. Regional temperature fluctuations sensed by the catheter are used to determine great cardiac vein and coronary sinus blood flow. The computer system replaces manual methods of acquiring and analyzing temperatur...
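    The flow calculation such a system automates is the constant-infusion thermodilution equation. A minimal sketch is shown below; the functional form is the commonly cited Ganz-style relation, but the correction factor and the example numbers are illustrative assumptions, not values taken from this paper.

    ```python
    def thermodilution_flow(infusion_rate, t_blood, t_injectate, t_mixed, k=1.08):
        """Estimate coronary venous flow (ml/min) by constant-infusion
        thermodilution: flow = Fi * ((Tb - Ti)/(Tb - Tm) - 1) * k, where
        Fi is the saline infusion rate, Tb/Ti/Tm are blood, injectate and
        mixed temperatures, and k (~1.08) corrects for the density and
        specific-heat difference between saline and blood. Generic sketch,
        not the paper's exact software."""
        return infusion_rate * ((t_blood - t_injectate) / (t_blood - t_mixed) - 1.0) * k

    # 40 ml/min of 22 C saline cooling coronary sinus blood from 37 C to 35 C.
    flow = thermodilution_flow(40.0, 37.0, 22.0, 35.0)
    print(round(flow, 1))
    ```

    The automated system's job is then to detect the stable mixed-temperature plateau in the catheter signal before applying this formula.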

  11. Improving the safety features of general practice computer systems

    OpenAIRE

    Anthony Avery; Boki Savelyich; Sheila Teasdale

    2003-01-01

    General practice computer systems already have a number of important safety features. However, general practitioners (GPs) have come to rely on hazard alerts even though these are not foolproof, and GPs do not know how to make best use of the safety features on their systems. A number of solutions could help to improve the safety features of general practice computer systems and also help to improve the ability of healthcare professionals to use these ...

  12. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in a research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (megabytes to gigabytes) memories. The underlying concept of the system is also applicable to monitoring and control of industrial processes.

  13. Performance Models for Split-execution Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; McCaskey, Alex [ORNL; Schrock, Jonathan [ORNL; Seddiqi, Hadayat [ORNL; Britt, Keith A [ORNL; Imam, Neena [ORNL

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.

  14. Intelligent decision support systems for sustainable computing paradigms and applications

    CERN Document Server

    Abraham, Ajith; Siarry, Patrick; Sheng, Michael

    2017-01-01

    This unique book discusses the latest research, innovative ideas, challenges and computational intelligence (CI) solutions in sustainable computing. It presents novel, in-depth fundamental research on achieving a sustainable lifestyle for society, either from a methodological or from an application perspective. Sustainable computing has expanded to become a significant research area covering the fields of computer science and engineering, electrical engineering and other engineering disciplines, and there has been an increase in the amount of literature on aspects of sustainable computing, such as energy efficiency and natural resource conservation, that emphasizes the role of ICT (information and communications technology) in achieving system design and operation objectives. The energy impact/design of more efficient IT infrastructures is a key challenge in realizing new computing paradigms. The book explores the uses of computational intelligence (CI) techniques for intelligent decision support that can be explo...

  15. Resource requirements for digital computations on electrooptical systems.

    Science.gov (United States)

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free-space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Ω(nw) on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  16. Resource requirements for digital computations on electrooptical systems

    Science.gov (United States)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free-space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution, is examined: irrespective of the input/output scheme and the order of computation, a lower bound of Omega(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.
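    The operation whose electro-optical resource requirements these two records analyze is ordinary 2-D digital convolution. For reference, a direct (unoptimized) digital implementation is sketched below; it performs O(n^2 w^2) multiply-adds for an n x n image and w x w kernel, which is the workload the Omega(nw) optical-volume bound concerns.

    ```python
    import numpy as np

    def convolve2d(image, kernel):
        """Direct 2-D convolution with zero padding ('same'-sized output)
        for a square image and square, odd-sized kernel."""
        n, w = image.shape[0], kernel.shape[0]
        pad = w // 2
        padded = np.pad(image, pad)
        out = np.zeros_like(image, dtype=float)
        kf = np.flipud(np.fliplr(kernel))   # true convolution flips the kernel
        for i in range(n):
            for j in range(n):
                out[i, j] = np.sum(padded[i:i + w, j:j + w] * kf)
        return out

    img = np.zeros((5, 5)); img[2, 2] = 1.0   # unit impulse
    k = np.arange(9.0).reshape(3, 3)
    out = convolve2d(img, k)                  # impulse response reproduces the
    print(out)                                # kernel, centered at the impulse
    ```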

  17. 14 CFR 417.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  18. Design of Computer Fault Diagnosis and Troubleshooting System ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-12-01

    Dec 1, 2013 ... We model our system using Object-Oriented Analysis and Design (OOAD) and UML ... high-level concept of a system. ... on the design of an expert system for computer .... opened distributed application, has rich type system ...

  19. Establishing performance requirements of computer based systems subject to uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer-based systems. Computer-based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach, with emphasis on the design of computer-based decision processing systems subject to parameter and environmental variation. The approach is demonstrated with an application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer-based systems.
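    A probabilistic performance requirement of the kind described can be checked by Monte-Carlo simulation over the parameter and noise variation. The sketch below is a hypothetical OBD-flavoured illustration (the threshold rule, noise model and numbers are invented for the example, not taken from the paper): estimate the detection probability of a noisy decision rule and sweep a design parameter against a target requirement.

    ```python
    import random

    def misfire_detector(signal, threshold):
        """Toy decision rule: flag a fault when the noisy roughness signal
        exceeds a threshold. Stand-in for the real decision processing."""
        return signal > threshold

    def detection_probability(threshold, true_level=1.0, noise_sd=0.3,
                              trials=20000, seed=1):
        """Monte-Carlo estimate of detection probability under Gaussian sensor
        noise -- the probabilistic figure a requirement would be set against."""
        rng = random.Random(seed)
        hits = sum(misfire_detector(rng.gauss(true_level, noise_sd), threshold)
                   for _ in range(trials))
        return hits / trials

    # Sweep thresholds to find one meeting, say, a 95% detection requirement.
    for th in (0.4, 0.5, 0.6):
        print(th, round(detection_probability(th), 3))
    ```

    The same sweep, repeated over environmental variation (temperature, aging, fuel quality), yields the requirement surface the paper's approach formalizes.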

  20. Computer Aided Facial Prosthetics Manufacturing System

    Directory of Open Access Journals (Sweden)

    Peng H.K.

    2016-01-01

    Full Text Available Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics; however, current fabrication methods for facial prosthetics are high-cost and time-consuming. This study aimed to identify a new method of constructing a customized facial prosthesis. A 3D scanner, computer software, and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce customized facial prosthetics. The advantages of the developed method over the conventional process are low cost and reduced waste of material and pollution, in line with green manufacturing.

  1. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  3. THE USE OF COMPUTER ALGEBRA SYSTEMS IN THE TEACHING PROCESS

    Directory of Open Access Journals (Sweden)

    Mychaylo Paszeczko

    2014-11-01

    Full Text Available This work discusses the computational capabilities of programs belonging to the CAS (Computer Algebra Systems) family. A review of commercial and non-commercial software has been done here as well. In addition, one of the programs belonging to this group (Mathcad) has been selected and its application to a chosen example has been presented. Computational capabilities and ease of handling were the decisive factors for the selection.

  4. SOME PARADIGMS OF ARTIFICIAL INTELLIGENCE IN FINANCIAL COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2015-12-01

    Full Text Available The article discusses some paradigms of artificial intelligence in the context of their applications in computer financial systems. The proposed approach has a significant potential to increase the competitiveness of enterprises, including financial institutions. However, it requires the effective use of supercomputers, grids and cloud computing. A reference is made to the computing environment for Bitcoin. In addition, we characterized genetic programming and artificial neural networks to prepare investment strategies on the stock exchange market.

  5. Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering

    CERN Document Server

    Elleithy, Khaled

    2013-01-01

    Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning. This book includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2010). The proceedings are a set of rigorously reviewed world-class manuscripts presenting the state of international practice in Innovative Algorithms and Techniques in Automation, Industrial Electronics and Telecommunications.

  6. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  7. Computer system organization the B5700/B6700 series

    CERN Document Server

    Organick, Elliott I

    1973-01-01

    Computer System Organization: The B5700/B6700 Series focuses on the organization of the B5700/B6700 Series developed by Burroughs Corp. More specifically, it examines how computer systems can (or should) be organized to support, and hence make more efficient, the running of computer programs that evolve with characteristically similar information structures.Comprised of nine chapters, this book begins with a background on the development of the B5700/B6700 operating systems, paying particular attention to their hardware/software architecture. The discussion then turns to the block-structured p

  8. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek

    2013-01-01

    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  9. Computational simulation of concurrent engineering for aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  10. Computational simulation for concurrent engineering of aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  11. Data entry system for INIS input using a personal computer

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Masashi (Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment)

    1990-01-01

    Input preparation for the INIS (International Nuclear Information System) has been performed by the Japan Atomic Energy Research Institute since 1970. In place of input data preparation on worksheets filled out with typewriters, a new method is introduced with which data can be entered directly onto a diskette using personal computers. Given the popularization of personal computers and word processors, this system is easily applied to other systems, so its outline and future development are described. A shortcoming of this system is that spell-checking and data entry using authority files can hardly be performed because of the limitations of the hardware resources, and that data code conversion is needed because the character code systems of the personal computer and the mainframe computer are quite different from each other. On the other hand, improved timeliness of data entry is expected, without duplication of keying. (author).
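    The code-conversion problem mentioned in the abstract can be illustrated with modern tooling rather than the 1990 system's own method: Python's built-in "cp500" codec maps between an IBM EBCDIC code page and Unicode, making the mainframe/personal-computer mismatch concrete.

    ```python
    # Round-trip a record between an IBM EBCDIC code page and Unicode using
    # Python's built-in "cp500" codec (an illustration, not the original system).
    record = "INIS INPUT RECORD 001"
    ebcdic_bytes = record.encode("cp500")          # mainframe-side byte encoding
    assert ebcdic_bytes != record.encode("ascii")  # the two byte encodings differ
    decoded = ebcdic_bytes.decode("cp500")         # back on the personal computer
    print(decoded == record)  # True
    ```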

  12. Computational intelligence for decision support in cyber-physical systems

    CERN Document Server

    Ali, A; Riaz, Zahid

    2014-01-01

    This book is dedicated to applied computational intelligence and soft computing techniques with special reference to decision support in Cyber Physical Systems (CPS), where the physical as well as the communication segment of the networked entities interact with each other. The joint dynamics of such systems result in a complex combination of computers, software, networks and physical processes all combined to establish a process flow at system level. This volume provides the audience with an in-depth vision about how to ensure dependability, safety, security and efficiency in real time by making use of computational intelligence in various CPS applications ranging from the nano-world to large scale wide area systems of systems. Key application areas include healthcare, transportation, energy, process control and robotics where intelligent decision support has key significance in establishing dynamic, ever-changing and high confidence future technologies. A recommended text for graduate students and researche...

  13. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    Science.gov (United States)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
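    The abstract describes evolving parameter sets into an optimum with operators such as differential evolution. As a rough illustration (a minimal sketch, not the authors' implementation), a differential evolution loop in Python might look like:

    ```python
    import random

    def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, generations=100):
        """Minimal differential evolution: mutation, crossover, greedy selection."""
        dim = len(bounds)
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(generations):
            for i in range(pop_size):
                # Donor vector built from three distinct members other than the target
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                donor = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
                # Binomial crossover mixes donor and target genes
                trial = [donor[d] if random.random() < CR else pop[i][d] for d in range(dim)]
                # Clamp to bounds, then keep the better of trial and target
                trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
                if cost(trial) <= cost(pop[i]):
                    pop[i] = trial
        return min(pop, key=cost)

    # Example: minimize the sphere function over three parameters
    best = differential_evolution(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
    print(best)  # near the optimum [0, 0, 0]
    ```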

  14. On-site Real-Time Inspection System for Pump-impeller using X-band Linac X-ray Source

    Science.gov (United States)

    Yamamoto, Tomohiko; Natsui, Takuya; Taguchi, Hiroki; Taniguchi, Yoshihiro; Lee, Ki woo; Hashimoto, Eiko; Sakamoto, Fumito; Sakumi, Akira; Yusa, Noritaka; Uesaka, Mitsuru; Nakamura, Naoki; Yamamoto, Masashi; Tanabe, Eiji

    2009-03-01

    The methods of nondestructive testing (NDT) generally use ultrasound, neutrons, eddy currents or X-rays; NDT using X-rays, in particular, is the most useful inspection technique, having high resolution. With a high-energy X-ray NDT system we can evaluate corroded pipes of petrochemical complexes and of nuclear and thermal power plants. We develop a portable X-ray NDT system with an X-band linac and magnetron. This system can generate a 950 keV electron beam, and we are able to obtain X-ray images of samples with 1 mm spatial resolution. This system has application to real-time impeller inspection because linac-based X-ray sources generate pulsed X-rays, so we can inspect a rotating impeller if the X-ray pulse rate is synchronized with the impeller rotation rate. This system has application in condition based maintenance (CBM) of nuclear plants, for example. However, the 950 keV X-ray source can only be used for thin tubes of 20 mm thickness. We have started the design of a 3.95 MeV X-band linac for broader X-ray NDT application. We think that this X-ray NDT system will be useful for corrosion wastage and cracking in thicker tubes at nuclear plants and for impellers of larger pumps. The system consists of the X-band linac, a thermionic-cathode electron gun, a magnetron and waveguide components. To achieve higher electric fields, the 3.95 MeV X-band linac has a side-coupled acceleration structure, which accelerates more efficiently than the 950 keV linac with an alternating periodic structure (APS). We adopt a 1.3 MW magnetron for the RF source. The accelerator system is about 30 cm long, the beam current is about 150 mA, and the X-ray dose rate is 10 Gy@1 m at 500 pps. In this paper, the details of the whole system concept and the electromagnetic field of the designed linac structure will be reported.

  15. Software design for resilient computer systems

    CERN Document Server

    Schagaev, Igor

    2016-01-01

    This book addresses the question of how system software should be designed to account for faults, and which fault tolerance features it should provide for highest reliability. The authors first show how the system software interacts with the hardware to tolerate faults. They analyze and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system, with special attention on the role of system software in this process. They further develop the general algorithm of fault tolerance (GAFT) with its three main processes: hardware checking, preparation for recovery, and the recovery procedure. For each of the three processes, they analyze the requirements and properties theoretically and give possible implementation scenarios and system software support required. Based on the theoretical results, the authors derive an Oberon-based programming language with direct support of the three processes of GAFT. In the last part of this book, they introduce a simulator...

  16. PROGTEST: A Computer System for the Analysis of Computational Computer Programs.

    Science.gov (United States)

    1980-04-01

  17. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Aiming at concrete tasks of information fusion in the computer pan-vision (CPV) system, information fusion methods are studied thoroughly and some research progress is presented. Recognition of vision-tested objects is realized by fusing vision information with non-vision auxiliary information, covering the recognition of material defects, an intelligent robot's autonomous recognition of parts, and automatic computer understanding and recognition of defect images.

  18. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
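    The model-centred approach described above can be illustrated with a minimal single-server (M/M/1) queue simulation in Python; the function name and parameters below are illustrative, not the authors' code:

    ```python
    import random

    def simulate_mm1(lam, mu, n_customers=10000, seed=1):
        """Event-driven M/M/1 queue; returns the mean waiting time in queue."""
        rng = random.Random(seed)
        arrival = 0.0
        depart_prev = 0.0
        total_wait = 0.0
        for _ in range(n_customers):
            arrival += rng.expovariate(lam)            # Poisson arrival stream
            start = max(arrival, depart_prev)          # wait until the server frees up
            total_wait += start - arrival
            depart_prev = start + rng.expovariate(mu)  # exponential service time
        return total_wait / n_customers

    # Theory gives Wq = lam / (mu * (mu - lam)); for lam=1, mu=2 that is 0.5
    print(simulate_mm1(1.0, 2.0))
    ```

    The simulated mean wait can be checked against the closed-form value, which is the kind of model-versus-theory comparison the paper proposes for teaching.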

  19. Portable capillary electrophoresis-system for on-site food analysis with lab-on-a-chip based contactless conductivity detection

    Science.gov (United States)

    Gärtner, Claudia; Sewart, René; Klemm, Richard; Becker, Holger

    2014-06-01

    A portable analytical system for the characterization of liquid environmental samples and beverages in food control was realized. The key element is the implementation of contactless conductivity detection on a lab-on-a-chip basis, enabling the system to be operated in a label-free mode. Typical target molecules were detected, such as small ionic species like Li+, Na+, K+, SO42- or NO3-, organic acids in wine, whose concentrations and ratios to each other document the wine quality, and caffeine or phosphate in cola. Results from sample matrices such as various beverages (water, cola, tea, wine and milk), water from heaters, environmental samples and blood will be presented.

  20. Robust Security System for Critical Computers

    Directory of Open Access Journals (Sweden)

    Preet Inder Singh

    2012-06-01

    Full Text Available Among the various available means of resource protection, including biometrics, password-based systems are the simplest, most user friendly, most cost effective and most commonly used, but they are highly sensitive to attacks. Most advanced password-based authentication methods encrypt the contents of the password before storing or transmitting it in the physical domain. But all conventional cryptographic encryption methods have their own limitations, generally in terms of complexity, efficiency or security. In this paper a simple method is developed that provides a more secure and efficient means of authentication while remaining simple in design for critical systems. Apart from protection, a step toward perfect security is taken by adding intruder detection alongside the protection system. This is made possible by merging various security systems with each other, i.e. password-based security with keystroke dynamics, and thumb impression with retina scan, associated with the users. This new method is centrally based on user behaviour and user-related security, which provides robust security for critical systems with intruder-detection facilities.
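    The keystroke-dynamics idea can be sketched as a simple timing-profile check; the data, thresholds and function names below are hypothetical illustrations, not the paper's actual scheme:

    ```python
    from statistics import mean, stdev

    def enroll(samples):
        """Per-interval mean and standard deviation from enrollment typings."""
        return [(mean(col), stdev(col)) for col in zip(*samples)]

    def matches(profile, attempt, z_max=2.5):
        """Accept only if every inter-key interval sits within z_max std devs."""
        return all(abs(t - m) <= z_max * s for (m, s), t in zip(profile, attempt))

    # Hypothetical inter-key intervals (ms) for one password typed four times
    training = [[120, 95, 140], [115, 100, 138], [125, 92, 145], [118, 97, 141]]
    profile = enroll(training)
    print(matches(profile, [119, 96, 139]))  # True: rhythm fits the genuine user
    print(matches(profile, [220, 40, 300]))  # False: rhythm flags a likely intruder
    ```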

  1. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a current and a former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  2. Computer Resources Handbook for Flight Critical Systems.

    Science.gov (United States)

    1985-01-01

  3. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there is no physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  4. Computing handbook information systems and information technology

    CERN Document Server

    Topi, Heikki

    2014-01-01

    Disciplinary Foundations and Global Impact: Evolving Discipline of Information Systems, by Heikki Topi; Discipline of Information Technology, by Barry M. Lunt and Han Reichgelt; Information Systems as a Practical Discipline, by Juhani Iivari; Information Technology, by Han Reichgelt, Joseph J. Ekstrom, Art Gowan, and Barry M. Lunt; Sociotechnical Approaches to the Study of Information Systems, by Steve Sawyer and Mohammad Hossein Jarrahi; IT and Global Development, by Erkki Sutinen; Using ICT for Development, Societal Transformation, and Beyond, by Sherif Kamel. Technical Foundations of Data and Database Management: Data Models, by Avi Silber

  5. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  6. On-site analysis of volatile nitrosamines in food model systems by solid-phase microextraction coupled to a direct extraction device.

    Science.gov (United States)

    Ventanas, S; Ruiz, J

    2006-12-15

    Analysis of nitrosamine (NA) standards in a model system was carried out by extraction using SPME coupled to a direct extraction device (DED) and subsequent GC/MS in selected ion monitoring mode. Gelatine (20%, w/v) systems of an NA standard (10 µg L(-1)) were prepared in order to mimic food protein matrix systems such as meat and meat products, fish and so on. Different SPME fibre coatings were tested. Both divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS) and carboxen/polydimethylsiloxane (CAR/PDMS) fibres coupled to the DED satisfactorily extracted all nine NAs included in the studied standard (EPA 8270 nitrosamines mix, Sigma-Aldrich) from the gelatine system at 25 °C without any sample manipulation. Values of reproducibility, linearity and limit of detection for each type of fibre are reported. SPME-DED appears to be a rapid, non-destructive technique for preliminary screening for the presence of toxic substances such as NAs in solid foods.

  7. Comparison study of membrane filtration direct count and an automated coliform and Escherichia coli detection system for on-site water quality testing.

    Science.gov (United States)

    Habash, Marc; Johns, Robert

    2009-10-01

    This study compared an automated Escherichia coli and coliform detection system with the membrane filtration direct count technique for water testing. The automated instrument performed as well as or better than the membrane filtration test in analyzing E. coli-spiked samples and blind samples with interference from Proteus vulgaris or Aeromonas hydrophila.

  8. Criteria of Human-computer Interface Design for Computer Assisted Surgery Systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian-guo; LIN Yan-ping; WANG Cheng-tao; LIU Zhi-hong; YANG Qing-ming

    2008-01-01

    In recent years, computer assisted surgery (CAS) systems have become more and more common in clinical practice, but few specific design criteria have been proposed for the human-computer interface (HCI) in CAS systems. This paper tries to give universal criteria for HCI design in CAS systems through the introduction of a demonstration application, which is total knee replacement (TKR) with a nonimage-based navigation system. A typical computer assisted process can be divided into four phases: the preoperative planning phase, the intraoperative registration phase, the intraoperative navigation phase and finally the postoperative assessment phase. The interface design for the four phases is described in turn for the demonstration application. The criteria this paper summarizes can be useful to software developers in achieving reliable and effective interfaces for new CAS systems more easily.

  9. Computational Fluid and Particle Dynamics in the Human Respiratory System

    CERN Document Server

    Tu, Jiyuan; Ahmadi, Goodarz

    2013-01-01

    Traditional research methodologies in the human respiratory system have always been challenging due to their invasive nature. Recent advances in medical imaging and computational fluid dynamics (CFD) have accelerated this research. This book compiles and details recent advances in the modelling of the respiratory system for researchers, engineers, scientists, and health practitioners. It breaks down the complexities of this field and provides both students and scientists with an introduction and starting point to the physiology of the respiratory system, fluid dynamics and advanced CFD modeling tools. In addition to a brief introduction to the physics of the respiratory system and an overview of computational methods, the book contains best-practice guidelines for establishing high-quality computational models and simulations. Inspiration for new simulations can be gained through innovative case studies as well as hands-on practice using pre-made computational code. Last but not least, students and researcher...

  10. Proceedings: Computer Science and Data Systems Technical Symposium, volume 1

    Science.gov (United States)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form are included for topics in three categories: computer science, data systems and space station applications.

  11. Proceedings: Computer Science and Data Systems Technical Symposium, volume 2

    Science.gov (United States)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form, along with abstracts, are included for topics in three categories: computer science, data systems, and space station applications.

  12. The evolution of the PVM concurrent computing system

    Energy Technology Data Exchange (ETDEWEB)

    Giest, G.A. [Oak Ridge National Lab., TN (United States); Sunderam, V.S. [Emory Univ., Atlanta, GA (United States). Dept. of Mathematics and Computer Science

    1993-07-01

    Concurrent and distributed computing, using portable software systems or environments on general purpose networked computing platforms, has recently gained widespread attention. Many such systems have been developed, and several are in production use. This paper describes the evolution of the PVM system, a software infrastructure for concurrent computing in networked environments. PVM has evolved over the past years; it is currently in use at several hundred institutions worldwide for applications ranging from scientific supercomputing to high performance computations in medicine, discrete mathematics, and databases, and for learning parallel programming. We describe the historical evolution of the PVM system, outline the programming model and supported features, present results gained from its use, list representative applications from a variety of disciplines that PVM has been used for, and comment on future trends and ongoing research projects.

  13. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Modeling Workflow Management in a Distributed Computing System Using Petri Nets. ... who use it to share information more rapidly and increases their productivity. ... Petri nets are an established tool for modelling and analyzing processes.
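    The Petri-net modelling that the abstract refers to can be sketched as a small token game; the place and transition names below are hypothetical, not taken from the paper:

    ```python
    def enabled(marking, pre):
        """Transitions whose input places all carry enough tokens."""
        return [t for t, needs in pre.items()
                if all(marking[p] >= n for p, n in needs.items())]

    def fire(marking, pre, post, t):
        """Fire transition t: consume input tokens, then produce output tokens."""
        m = dict(marking)
        for p, n in pre[t].items():
            m[p] -= n
        for p, n in post[t].items():
            m[p] = m.get(p, 0) + n
        return m

    # Hypothetical two-step document workflow: submitted -> reviewed -> done
    pre = {"review": {"submitted": 1}, "archive": {"reviewed": 1}}
    post = {"review": {"reviewed": 1}, "archive": {"done": 1}}
    m = {"submitted": 1, "reviewed": 0, "done": 0}
    assert enabled(m, pre) == ["review"]   # only "review" may fire initially
    m = fire(m, pre, post, "review")
    m = fire(m, pre, post, "archive")
    print(m)  # {'submitted': 0, 'reviewed': 0, 'done': 1}
    ```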

  14. Service Level Agreement (SLA) in Utility Computing Systems

    CERN Document Server

    Wu, Linlin

    2010-01-01

    In recent years, extensive research has been conducted in the area of Service Level Agreements (SLAs) for utility computing systems. An SLA is a formal contract used to guarantee that consumers' service quality expectations can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental issue is the management of SLAs, including SLA autonomy management and trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification of this extensive work. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.

  15. Computational unit for non-contact photonic system

    Science.gov (United States)

    Kochetov, Alexander V.; Skrylev, Pavel A.

    2005-06-01

    Requirements for the unified computational unit of a non-contact photonic system have been formulated. Estimates of the required central processing unit performance and memory size are calculated. A specialized microcontroller optimal for use as the central processing unit has been selected, and memory chip types are determined for the system. The computational unit consists of a central processing unit based on the selected microcontroller, NVRAM memory, a receiving circuit, SDRAM memory, and control and power circuits. It functions as a processing unit that calculates the required parameters of the rail track.

  16. Towards accurate quantum simulations of large systems with small computers.

    Science.gov (United States)

    Yang, Yonggang

    2017-01-24

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems.

  17. Cluster-based localization and tracking in ubiquitous computing systems

    CERN Document Server

    Martínez-de Dios, José Ramiro; Torres-González, Arturo; Ollero, Anibal

    2017-01-01

    Localization and tracking are key functionalities in ubiquitous computing systems and techniques. In recent years a wide variety of approaches, sensors and techniques for indoor and GPS-denied environments has been developed. This book briefly summarizes the current state of the art in localization and tracking in ubiquitous computing systems, focusing on cluster-based schemes. Additionally, existing techniques for measurement integration, node inclusion/exclusion and cluster head selection are also described.

  18. Towards accurate quantum simulations of large systems with small computers

    Science.gov (United States)

    Yang, Yonggang

    2017-01-01

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems. PMID:28117366

  19. CANONICAL COMPUTATIONAL FORMS FOR AR 2-D SYSTEMS

    NARCIS (Netherlands)

    ROCHA, P; WILLEMS, JC

    1990-01-01

    A canonical form for AR 2-D systems representations is introduced. This yields a method for computing the system trajectories by means of a line-by-line recursion, and displays some relevant information about the system structure such as the choice of inputs and initial conditions.
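The canonical form itself is not reproduced in the abstract. The following toy sketch only illustrates what a line-by-line recursion for a 2-D system trajectory looks like, for an assumed first-order AR relation w[i][j] = a*w[i-1][j] + b*w[i][j-1]; the coefficients, the relation, and the function name are our own illustrative choices.

```python
def line_by_line(a, b, first_line, first_col):
    """Compute w[i][j] = a*w[i-1][j] + b*w[i][j-1] line by line.

    first_line = w[0][*] (initial conditions along line 0),
    first_col  = w[*][0] (data along column 0); their first entries
    must agree, since both specify w[0][0].
    """
    rows, cols = len(first_col), len(first_line)
    w = [[0.0] * cols for _ in range(rows)]
    w[0] = list(first_line)
    for i in range(rows):
        w[i][0] = first_col[i]
    for i in range(1, rows):          # advance one line at a time
        for j in range(1, cols):      # sweep left to right along the line
            w[i][j] = a * w[i - 1][j] + b * w[i][j - 1]
    return w

traj = line_by_line(0.5, 0.5, [1.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

Once line i-1 and the boundary column are known, line i is determined entirely by the recursion — which is the computational point of such a representation.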

  20. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency- and throughput-sensitive applications.

  1. Computer Directed Training System (CDTS), User’s Manual

    Science.gov (United States)

    1983-07-01

    94111447030 OPTIONAL FORM 2.2 BACK (4-77) DEPARTMENT OF THE AIR FORCE AUL5-5 DISTRIBUTION LIMITED TO DOD; REFER OTHER REQUESTS TO THE ADPS MANAGER... SYSTEM SUMMARY 2.1 System Application. The Computer Directed Training System is used to prepare and present lessons that supplement local on-the-job

  2. A New Approach: Computer-Assisted Problem-Solving Systems

    Science.gov (United States)

    Gok, Tolga

    2010-01-01

    Computer-assisted problem solving systems are rapidly growing in educational use and with the advent of the Internet. These systems allow students to do their homework and solve problems online with the help of programs like Blackboard, WebAssign and LON-CAPA program etc. There are benefits and drawbacks of these systems. In this study, the…

  3. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency- and throughput-sensitive applications.

  4. A modular system for computational fluid dynamics

    Science.gov (United States)

    McCarthy, D. R.; Foutch, D. W.; Shurtleff, G. E.

    This paper describes the Modular System for Computational Fluid Dynamics (MOSYS), a software facility for the construction and execution of arbitrary solution procedures on multizone, structured body-fitted grids. It focuses on the structure and capabilities of MOSYS and the philosophy underlying its design. The system offers different levels of capability depending on the objectives of the user. It enables the applications engineer to quickly apply a variety of methods to geometrically complex problems. The methods developer can implement new algorithms in a simple form and immediately apply them to problems of both theoretical and practical interest. And for the code builder it constitutes a toolkit for the fast construction of CFD codes tailored to various purposes. These capabilities are illustrated through application to a particularly complex problem encountered in aircraft propulsion systems, namely, the analysis of a landing aircraft in reverse thrust.

  5. Integrated computer-aided retinal photocoagulation system

    Science.gov (United States)

    Barrett, Steven F.; Wright, Cameron H. G.; Oberg, Erik D.; Rockwell, Benjamin A.; Cain, Clarence P.; Jerath, Maya R.; Rylander, Henry G., III; Welch, Ashley J.

    1996-05-01

    Successful retinal tracking subsystem testing results in vivo on rhesus monkeys using an argon continuous wave laser and an ultra-short pulse laser are presented. Progress on developing an integrated robotic retinal laser surgery system is also presented. Several interesting areas of study have developed: (1) 'doughnut' shaped lesions that occur under certain combinations of laser power, spot size, and irradiation time complicating measurements of central lesion reflectance, (2) the optimal retinal field of view to achieve simultaneous tracking and lesion parameter control, and (3) a fully digital versus a hybrid analog/digital tracker using confocal reflectometry integrated system implementation. These areas are investigated in detail in this paper. The hybrid system warrants a separate presentation and appears in another paper at this conference.

  6. Secure system design and trustable computing

    CERN Document Server

    Potkonjak, Miodrag

    2016-01-01

    This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade.  Coverage includes issues related to security and trust in a variety of electronic devices and systems related to the security of hardware, firmware and software, spanning system applications, online transactions, and networking services.  This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society’s microelectronic-supported infrastructures.

  7. The Science of Computing: Expert Systems

    Science.gov (United States)

    Denning, Peter J.

    1986-01-01

    The creative urge of human beings is coupled with tremendous reverence for logic. The idea that the ability to reason logically--to be rational--is closely tied to intelligence was clear in the writings of Plato. The search for greater understanding of human intelligence led to the development of mathematical logic, the study of methods of proving the truth of statements by manipulating the symbols in which they are written without regard to the meanings of those symbols. By the nineteenth century a search was under way for a universal system of logic, one capable of proving anything provable in any other system.

  8. Architecture Research of Non-Stop Computer System

    Institute of Scientific and Technical Information of China (English)

    LIUXinsong; QIUYuanjie; YANGFeng; YANGongjun; GUPan; GAOKe

    2004-01-01

    The distributed and parallel server system with a distributed and parallel I/O interface has solved the bottleneck between server system and client system, and has also solved the rebuilding problem after a system fault. However, the system still has some shortcomings: the switch is the system bottleneck and the system is not adapted to WANs (Wide area networks). Therefore, we put forward a new system architecture to overcome these shortcomings and develop the non-stop computer system. The basis of a non-stop system is rebuilding after a system fault. The inner architecture of a non-stop system must be redundant, and the redundancy is fault-tolerance redundancy based on a distributed mechanism, not backup redundancy. Analysis and test results show that the system rebuild time after a fault is on the second scale, and that its rebuild capability is strong enough for the system to be non-stop over its lifetime.
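The paper's rebuild protocol is not given in the abstract. This toy sketch (names and data layout entirely our own) only models the stated idea of distributed, non-backup redundancy: when a node fails, the surviving nodes take over its shards, i.e. the system "rebuilds" rather than failing over to a dedicated backup.

```python
def rebuild(assignment, failed):
    """assignment: shard -> node. Spread the failed node's shards over survivors."""
    survivors = sorted({n for n in assignment.values() if n != failed})
    moved = sorted(s for s, n in assignment.items() if n == failed)
    for i, shard in enumerate(moved):
        assignment[shard] = survivors[i % len(survivors)]  # round-robin takeover
    return assignment

state = {"s0": "A", "s1": "B", "s2": "C", "s3": "A", "s4": "B"}
state = rebuild(state, failed="B")
# Shards s1 and s4 move to surviving nodes; nothing remains on "B".
```

Because every node already holds live state (distributed redundancy), the rebuild is a cheap reassignment rather than a cold restore — consistent with the second-scale rebuild times the paper reports.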

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  10. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  11. Evolution and development of complex computational systems using the paradigm of metabolic computing in Epigenetic Tracking

    Directory of Open Access Journals (Sweden)

    Alessandro Fontana

    2013-09-01

    Full Text Available Epigenetic Tracking (ET) is an Artificial Embryology system which allows for the evolution and development of large complex structures built from artificial cells. In terms of the number of cells, the complexity of the bodies generated with ET is comparable with the complexity of biological organisms. We have previously used ET to simulate the growth of multicellular bodies with arbitrary 3-dimensional shapes which perform computation using the paradigm of "metabolic computing". In this paper we investigate the memory capacity of such computational structures and analyse the trade-off between shape and computation. We now plan to build on these foundations to create a biologically-inspired model in which the encoding of the phenotype is efficient (in terms of the compactness of the genome) and evolvable in tasks involving non-trivial computation, robust to damage and capable of self-maintenance and self-repair.

  12. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems, detailing the practical challenges encountered and the solutions adopted.

  13. Efficient Data-parallel Computations on Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Task scheduling determines the performance of NOW computing to a large extent. However, the computer system architecture, computing capability and system load are rarely considered together. In this paper, a biggest-heterogeneous scheduling algorithm is presented. It fully considers the system characteristics (from the application view), structure and state, so it can always utilize all processing resources under a reasonable premise. The results of experiments show that the algorithm can significantly shorten the response time of jobs.
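The "biggest-heterogeneous" algorithm itself is not detailed in the abstract. This toy heuristic (our own sketch, not the paper's code) only captures the stated idea of weighing per-node computing capability and current load together: the biggest jobs are placed first, each on the node that would finish it earliest.

```python
def schedule(jobs, speeds):
    """jobs: work amounts; speeds: per-node capability.
    Returns (plan, makespan); plan lists (job, node) pairs."""
    finish = [0.0] * len(speeds)             # current finish time per node
    plan = []
    for size in sorted(jobs, reverse=True):  # biggest job first
        # Earliest-finish-time node, accounting for heterogeneous speed.
        node = min(range(len(speeds)),
                   key=lambda n: finish[n] + size / speeds[n])
        finish[node] += size / speeds[node]
        plan.append((size, node))
    return plan, max(finish)

plan, makespan = schedule([8, 7, 6, 5, 4], speeds=[2.0, 1.0, 1.0])
# The node that is twice as fast absorbs most of the work.
```

Ranking by finish time rather than raw load is what lets a faster node take more than its "fair" share, shortening overall response time on heterogeneous clusters.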

  14. Intrusion Detection System Inside Grid Computing Environment (IDS-IGCE)

    Directory of Open Access Journals (Sweden)

    Basappa B. Kodada

    2012-01-01

    Full Text Available Grid Computing is an important information technology which enables resource sharing globally to solve large-scale problems. It is based on networks and enables large-scale aggregation and sharing of computational, data, sensor and other resources across institutional boundaries. The Globus Toolkit integrated with Web services presents OGSA (Open Grid Services Architecture) as the standard service grid architecture. In OGSA, everything is abstracted as a service, including computers, applications, data as well as instruments. The services and resources in a Grid are heterogeneous and dynamic, and they also belong to different domains. Grid services are still new to business systems, and as more systems are attached to the Grid, any threat to it could bring collapse and huge harm; intruders may come with new forms of attack. Grid Computing as a global infrastructure on the internet has attracted security attacks on the computing infrastructure. A wide variety of IDS (Intrusion Detection Systems) are available which are designed to handle specific types of attacks. The technique of [27] protects against future attacks in a Service Grid Computing Environment at the Grid infrastructure level, but no existing technique can protect against these types of attacks inside the grid at the node level. Therefore this paper proposes the architecture of IDS-IGCE (Intrusion Detection System - Inside Grid Computing Environment), which can provide protection against the complete range of threats inside the Grid Environment.

  15. Integrated computer control system architectural overview

    Energy Technology Data Exchange (ETDEWEB)

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  16. Cloud computing principles, systems and applications

    CERN Document Server

    Antonopoulos, Nick

    2017-01-01

    This essential reference is a thorough and timely examination of the services, interfaces and types of applications that can be executed on cloud-based systems. Among other things, it identifies and highlights state-of-the-art techniques and methodologies.

  17. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering

    2011-07-01

    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research is proceeding at jet speed on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally-intensive, non-linear and complex as well as involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal in handling the noise, imprecision, and uncertainty in the data, and yet achieve robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing applications in the study of renewable energy systems. Researchers, practitioners, undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  18. Computer support system for residential environment evaluation for citizen participation

    Institute of Scientific and Technical Information of China (English)

    GE Jian; TEKNOMO Kardi; LU Jiang; HOKAO Kazunori

    2005-01-01

    Though methods of citizen participation in urban planning are quite well established, existing participation systems have not coped adequately with the specific segment of the residential environment. The specific residential environment has detailed aspects that need positive, high-level involvement of citizens participating in all stages and every field of the plan. One of the best and most systematic methods to obtain more involved citizens is a citizen workshop. To get more "educated" citizens participating in the workshop, a special session informing them of what was previously gathered through a survey proved to be a prerequisite before the workshop. A computer support system is one of the best tools for this purpose. This paper describes the development of the computer support system for residential environment evaluation, an essential tool for giving more information to citizens before their participation in a public workshop. The significant contribution of this paper is the educational system framework involved in the workshop on the public participation system through computer support, especially for the residential environment. The framework, development and application of the computer support system are described. The application of the computer support system in a workshop was commented on as very valuable and helpful by the audience, as it resulted in the greater benefit of a wider range of participation and a deeper level of citizen understanding.

  19. A Massive Data Parallel Computational Framework for Petascale/Exascale Hybrid Computer Systems

    CERN Document Server

    Blazewicz, Marek; Diener, Peter; Koppelman, David M; Kurowski, Krzysztof; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2012-01-01

    Heterogeneous systems are becoming more common on High Performance Computing (HPC) systems. Even using tools like CUDA and OpenCL it is a non-trivial task to obtain optimal performance on the GPU. Approaches to simplifying this task include Merge (a library based framework for heterogeneous multi-core systems), Zippy (a framework for parallel execution of codes on multiple GPUs), BSGP (a new programming language for general purpose computation on the GPU) and CUDA-lite (an enhancement to CUDA that transforms code based on annotations). In addition, efforts are underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs and for automatic translation of OpenMP parallelized codes to CUDA. In this paper we present an alternative approach: a new computational framework for the development of massively data parallel scientific codes applications suitable for use on such petascale/exascale hybrid systems built upon the highly scalable Cactus framework. As the first...

  20. Dynamic self-assembly in living systems as computation.

    Energy Technology Data Exchange (ETDEWEB)

    Bouchard, Ann Marie; Osbourn, Gordon Cecil

    2004-06-01

    Biochemical reactions taking place in living systems that map different inputs to specific outputs are intuitively recognized as performing information processing. Conventional wisdom distinguishes such proteins, whose primary function is to transfer and process information, from proteins that perform the vast majority of the construction, maintenance, and actuation tasks of the cell (assembling and disassembling macromolecular structures, producing movement, and synthesizing and degrading molecules). In this paper, we examine the computing capabilities of biological processes in the context of the formal model of computing known as the random access machine (RAM) [Dewdney AK (1993) The New Turing Omnibus. Computer Science Press, New York], which is equivalent to a Turing machine [Minsky ML (1967) Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, NJ]. When viewed from the RAM perspective, we observe that many of these dynamic self-assembly processes - synthesis, degradation, assembly, movement - do carry out computational operations. We also show that the same computing model is applicable at other hierarchical levels of biological systems (e.g., cellular or organism networks as well as molecular networks). We present stochastic simulations of idealized protein networks designed explicitly to carry out a numeric calculation. We explore the reliability of such computations and discuss error-correction strategies (algorithms) employed by living systems. Finally, we discuss some real examples of dynamic self-assembly processes that occur in living systems, and describe the RAM computer programs they implement. Thus, by viewing the processes of living systems from the RAM perspective, a far greater fraction of these processes can be understood as computing than has been previously recognized.
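The RAM model the paper uses as its formal reference can be made concrete in a few lines. This minimal register-machine simulator is our own toy, not from the paper: registers play the role of memory cells, and a tiny instruction set (INC, DEC, jump-if-zero, HALT) already suffices for arithmetic — mirroring the claim that assembly/degradation steps can carry out computational operations.

```python
def run_ram(program, registers):
    """program: list of instruction tuples; registers: dict reg -> count >= 0."""
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "HALT":
            return registers
        if op[0] == "INC":            # ("INC", r): registers[r] += 1
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "DEC":          # ("DEC", r): registers[r] -= 1 (floor 0)
            registers[op[1]] = max(0, registers[op[1]] - 1)
            pc += 1
        elif op[0] == "JZ":           # ("JZ", r, addr): jump if registers[r] == 0
            pc = op[2] if registers[op[1]] == 0 else pc + 1

# Addition a + b -> a, by draining register "b" into "a" one unit at a time
# (much like degrading one molecule while synthesizing another).
add = [
    ("JZ", "b", 4),
    ("DEC", "b"),
    ("INC", "a"),
    ("JZ", "z", 0),   # unconditional jump via an always-zero register "z"
    ("HALT",),
]
out = run_ram(add, {"a": 2, "b": 3, "z": 0})
# → out["a"] == 5, out["b"] == 0
```

Reading a register as a molecule count makes the analogy direct: INC is synthesis, DEC is degradation, and the jump tests are concentration-dependent control.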

  1. The Rabi Oscillation in Subdynamic System for Quantum Computing

    Directory of Open Access Journals (Sweden)

    Bi Qiao

    2015-01-01

    Full Text Available A quantum computation for the Rabi oscillation based on quantum dots in the subdynamic system is presented. The working states of the original Rabi oscillation are transformed into the eigenvectors of the subdynamic system. The dissipation and decoherence of the system then show up only as changes of the eigenvalues, i.e. as phase errors, since the eigenvectors are fixed. This makes controlling both dissipation and decoherence easier, as only the relevant phase errors need be corrected. This method can be extended to general quantum computation systems.

  2. One approach for evaluating the Distributed Computing Design System (DCDS)

    Science.gov (United States)

    Ellis, J. T.

    1985-01-01

    The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  3. Computer modeling of properties of complex molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Kulkova, E.Yu. [Moscow State University of Technology “STANKIN”, Vadkovsky per., 1, Moscow 101472 (Russian Federation); Khrenova, M.G.; Polyakov, I.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); Nemukhin, A.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); N.M. Emanuel Institute of Biochemical Physics, Russian Academy of Sciences, Kosygina 4, Moscow 119334 (Russian Federation)

    2015-03-10

    Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that assume treatment of a part of the system by quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting the structures and spectra of such complex molecular aggregates.

  4. Human computer interaction issues in Clinical Trials Management Systems.

    Science.gov (United States)

    Starren, Justin B; Payne, Philip R O; Kaufman, David R

    2006-01-01

    Clinical trials increasingly rely upon web-based Clinical Trials Management Systems (CTMS). As with clinical care systems, Human Computer Interaction (HCI) issues can greatly affect the usefulness of such systems. Evaluation of the user interface of one web-based CTMS revealed a number of potential human-computer interaction problems, in particular, increased workflow complexity associated with a web application delivery model and potential usability problems resulting from the use of ambiguous icons. Because these design features are shared by a large fraction of current CTMS, the implications extend beyond this individual system.

  5. Traditional foods and food systems: a revision of concepts emerging from qualitative surveys on-site in the Black Sea area and Italy.

    Science.gov (United States)

    D'Antuono, L Filippo

    2013-11-01

    The European FP7 BaSeFood project included a traditional food study contextually analysing their function in local food systems, to stimulate consumers' awareness and indicate co-existence options for different-scale exploitation. Background concepts were (1) the available traditional food definitions; (2) the theoretical background of food quality perceptions; and (3) the different levels of food functions. Field investigations were carried out by face-to-face in-depth qualitative interviews with local stakeholders, in the Black Sea region and Italy, on all aspects of traditional food production chains: raw materials, products, processes and perceptions. Critical and intercultural comparisons represented the basis of data analysis. Eight hundred and thirty-nine foods were documented. The direct, experience-based perception of traditional food value observed in local contexts contrasts somewhat with the present European tendency to communicate the traditional nature of food through registration or proprietary standards. Traditional foods are generally a combination of energetic staples with other available ingredients; their intrinsic variability makes the definition of 'standard' recipes little more than an artefact of convenience; cross-country variations are determined by available ingredients, social conditions and nutritional needs. Commercial production requires some degree of raw material and process standardisation. New technologies and rules may stimulate traditional food evolution, but may also represent a barrier for local stakeholders. A trend among local stakeholders to work within supply chains was detected. Specific health-promoting values were rarely perceived as a fundamental character. The stable inclusion of traditional food systems in present food supply chains requires a recovery of consumers' awareness of traditional food quality appreciation. © 2013 Society of Chemical Industry.

  6. Method to Compute CT System MTF

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-03

    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
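The pipeline described above — differentiate an edge response to get the PSF, then take the normalized magnitude of its spectrum to get the MTF — can be sketched in a few lines. This is a hedged illustration on a synthetic 1-D edge, not LLNL's code; the logistic edge profile and the naive DFT are our own simplifications.

```python
import math

# Synthetic blurred edge response sampled across the boundary.
edge = [1.0 / (1.0 + math.exp(-(i - 16) / 2.0)) for i in range(32)]

# PSF: numerical derivative of the edge spread function.
psf = [edge[i + 1] - edge[i] for i in range(len(edge) - 1)]

# MTF: magnitude of the PSF's discrete Fourier transform, normalized so
# that the zero-frequency value is 1.
n = len(psf)

def dft_mag(k):
    re = sum(psf[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
    im = sum(psf[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
    return math.hypot(re, im)

mtf = [dft_mag(k) / dft_mag(0) for k in range(n // 2)]
# mtf[0] == 1.0 and the curve rolls off with spatial frequency.
```

With real CT data the edge comes from the boundary of the cylindrical test object (located via the active contour and threshold steps described above) rather than from a formula, but the derivative-then-normalized-spectrum structure is the same.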

  7. COMPUTER SIMULATION SYSTEM OF STRETCH REDUCING MILL

    Institute of Scientific and Technical Information of China (English)

    B.Y. Sun; S.J. Yuan

    2007-01-01

    The principle of the stretch reducing process is analyzed and three models of pass design are established. Simulations are done of variables such as stress, strain, the stretches between the stands, the size parameters of the steel tube, and the roll force parameters. According to its product catalogs the system can automatically divide the pass series, formulate the rolling table, and simulate the basic technological parameters in the stretch reducing process. All modules are integrated based on the developing environment of VB6. The system can draw simulation curves and pass pictures. Three kinds of database, including the material database, pass design database, and product database, are devised using Microsoft Access and can be directly edited, corrected, and searched.

  8. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    Science.gov (United States)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  9. STUDY ON HUMAN-COMPUTER SYSTEM FOR STABLE VIRTUAL DISASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    Guan Qiang; Zhang Shensheng; Liu Jihong; Cao Pengbing; Zhong Yifang

    2003-01-01

    The cooperative work between human beings and computers based on virtual reality (VR) is investigated to plan disassembly sequences more efficiently. A three-layer model of human-computer cooperative virtual disassembly is built, and the corresponding human-computer system for stable virtual disassembly is developed. In this system, an immersive and interactive virtual disassembly environment has been created to provide planners with a more visual working scene. For cooperative disassembly, an intelligent module for stability analysis of disassembly operations is embedded in the human-computer system to assist planners in implementing disassembly tasks better. The supporting matrix for stability analysis of disassembly operations is defined and the method of stability analysis is detailed. Based on this approach, the stability of any disassembly operation can be analyzed to guide manual virtual disassembly. Finally, a disassembly case in the virtual environment is given to prove the validity of the above ideas.

  10. Computational Modeling, Formal Analysis, and Tools for Systems Biology.

    Science.gov (United States)

    Bartocci, Ezio; Lió, Pietro

    2016-01-01

    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of developments in theoretical computer science have enabled modeling methodology to keep pace. The growing interest within systems biology in executable models and their analysis has necessitated borrowing terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science.

  11. A Computer-Mediated Instruction System, Applied to Its Own Operating System and Peripheral Equipment.

    Science.gov (United States)

    Winiecki, Roger D.

    Each semester students in the School of Health Sciences of Hunter College learn how to use a computer, how a computer system operates, and how peripheral equipment can be used. To overcome inadequate computer center services and equipment, programed subject matter and accompanying reference material were developed. The instructional system has a…

  12. Research of the grid computing system applied in optical simulation

    Science.gov (United States)

    Jin, Wei-wei; Wang, Yu-dong; Liu, Qiangsheng; Cen, Zhao-feng; Li, Xiao-tong; Lin, Yi-qun

    2008-03-01

    A grid computing system for the field of optics is presented in this paper. Firstly, the basic principles and research background of grid computing are outlined, along with an overview of its applications and current state of development. The paper also discusses several typical task scheduling algorithms. Secondly, it focuses on describing the task scheduling of grid computing applied to optical computation. The paper gives details about the task scheduling system, including task partitioning, granularity selection and task allocation, and especially the structure of the system. In addition, some details of communication in grid computing are also illustrated. In this system, the "makespan" and "load balancing" are comprehensively considered. Finally, we build a grid model to test the task scheduling strategy, and the results are analyzed in detail. Compared to one isolated computer, a grid comprising one server and four processors can shorten the "makespan" to 1/4. At the same time, the experimental results of the simulation also illustrate that the proposed scheduling system is able to balance the loads of all processors. In short, the system performs scheduling well in the grid environment.
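
    The makespan/load-balancing trade-off described above can be illustrated with a classic greedy heuristic: sort tasks longest-first and always give the next task to the least-loaded processor. This is a generic sketch, not the paper's actual scheduler; the task durations and two-processor setup below are illustrative assumptions.

    ```python
    # Greedy "longest processing time first" (LPT) scheduling sketch.
    # Illustrates how a scheduler can shrink makespan while balancing loads;
    # durations and processor count are made up for the example.
    import heapq

    def schedule(durations, n_procs):
        """Assign each task to the currently least-loaded processor.

        Returns (makespan, loads): the finishing time of the busiest
        processor and the total load per processor.
        """
        heap = [(0.0, p) for p in range(n_procs)]  # (load, processor id)
        heapq.heapify(heap)
        loads = [0.0] * n_procs
        for d in sorted(durations, reverse=True):  # longest tasks first
            load, p = heapq.heappop(heap)
            loads[p] = load + d
            heapq.heappush(heap, (load + d, p))
        return max(loads), loads

    makespan, loads = schedule([4, 3, 3, 2, 2, 2], n_procs=2)  # → (8, [8, 8])
    ```

    With two processors the 16 units of work split evenly, so the makespan halves relative to a single machine; the same mechanism underlies the 1/4 makespan reported for the one-server, four-processor grid.
    
    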

  13. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  15. Diabetes Monitoring System Using Mobile Computing Technologies

    Directory of Open Access Journals (Sweden)

    Mashael Saud Bin-Sabbar

    2013-03-01

    Full Text Available Diabetes is a chronic disease that needs to be monitored regularly to keep blood sugar levels within normal ranges. This monitoring depends on the diabetic treatment plan that is periodically reviewed by the endocrinologist. Frequent visits to the main hospital are tiring and time consuming for both the endocrinologist and diabetes patients: the patient may have to travel to the main city, pay for a ticket and reserve a place to stay. Those expenses can be reduced by remotely monitoring diabetes patients with the help of mobile devices. In this paper, we introduce our implementation of an integrated monitoring tool for diabetes patients. The designed system provides daily monitoring and monthly services. The daily monitoring includes recording the results of daily analyses and activities, which are transmitted from a patient's mobile device to a central database. The monthly services require the patient to visit a nearby care center in the patient's home town for medical examinations and checkups. The results of this visit are entered into the system and then synchronized with the central database. Finally, the endocrinologist can remotely monitor the patient record and adjust the treatment plan and the insulin doses if needed.

  16. Computational Control of Flexible Aerospace Systems

    Science.gov (United States)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years of the project. The main accomplishments can be summarized as follows. A new version of the PDEMOD Code has been completed. A theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modelling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future modified versions of the PDEMOD Code.

  17. Information and computer-aided system for structural materials

    Energy Technology Data Exchange (ETDEWEB)

    Nekrashevitch, Yu.G.; Nizametdinov, Sh.U.; Polkovnikov, A.V.; Rumjantzev, V.P.; Surina, O.N. (Engineering Physics Inst., Moscow (Russia)); Kalinin, G.M.; Sidorenkov, A.V.; Strebkov, Yu.S. (Research and Development Inst. of Power Engineering, Moscow (Russia))

    1992-09-01

    An information and computer-aided system for structural materials data has been developed to provide data for fusion and fission reactor system design. It is intended for designers, industrial engineers, and materials science specialists and provides a friendly interface in an interactive mode. The database for structural materials contains the following master files: chemical composition; physical, mechanical, corrosion and technological properties; and regulatory and technical documentation. The system is implemented on a PC/AT running the PS/2 operating system. (orig.).

  18. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
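
    The decision step this patent abstract describes — measure an environmental condition, weigh it against the processing system's sensitivity, and reconfigure fault tolerance accordingly — can be sketched as a simple threshold policy. The mode names and threshold ratios below are illustrative assumptions, not taken from the patent.

    ```python
    # Sketch of environmentally adaptive fault-tolerance selection.
    # The redundancy modes and cutoff ratios are hypothetical examples.

    def choose_mode(measured_level, sensitivity_threshold):
        """Pick a redundancy configuration for the on-board processing system.

        measured_level: current environmental reading (e.g., radiation flux).
        sensitivity_threshold: level at which this hardware starts faulting.
        """
        ratio = measured_level / sensitivity_threshold
        if ratio < 0.5:
            return "simplex"   # benign environment: no redundancy needed
        if ratio < 1.0:
            return "duplex"    # elevated: run a lock-step pair
        return "tmr"           # at or above tolerance: triple modular redundancy

    mode = choose_mode(measured_level=7.0, sensitivity_threshold=10.0)  # → "duplex"
    ```
    
    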

  19. Patterns of Programmers' Use of Computer-Mediated Communications Systems

    Directory of Open Access Journals (Sweden)

    Chatpong Tangmanee

    2003-11-01

    Full Text Available Communication behavior of programmers plays an essential role in the success of software development. Computer-mediated communication (CMC) systems, such as e-mail or the World Wide Web (WWW), have substantial implications for coordinating the work of programmers. Yet no studies have dealt systematically with the CMC behaviors of programmers. Drawing upon theories in organizational studies, information science, computer-mediated communication and software engineering, this research examines what programmers accomplish through CMC systems. Data were gathered from survey questionnaires mailed to 730 programmers, who are members of the Association for Computing Machinery (ACM) and are involved in a variety of programming work. Based on factor analysis, the study found that programmers use CMC systems (1) to achieve progress in work-related tasks (task-related purposes), (2) to satisfy their social and emotional needs (socio-emotional purposes), and (3) to explore for information (exploring purposes). The findings of this research extend insight into the patterns for which programmers use CMC systems. This insight advances theories of computer-mediated communication in the context of computer programmers. Practitioners, especially in software development, may also use the results as guidelines in fostering a firm's network policy that fits what their programming staff accomplish through computer-mediated communication.

  20. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad

    2015-01-01

    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  1. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore's law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like ("neuromorphic") computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: the development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully "neuromorphic" computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  2. Semantic Computation in a Chinese Question-Answering System

    Institute of Scientific and Technical Information of China (English)

    李素建; 张健; 黄雄; 白硕; 刘群

    2002-01-01

    This paper introduces a kind of semantic computation and presents how to combine it into our Chinese Question-Answering (QA) system. Based on two kinds of language resources, Hownet and Cilin, we present an approach to computing the similarity and relevancy between words. Using these results, we can calculate the relevancy between two sentences and then get the optimal answer for the query in the system. The calculation adopts quantitative methods and can be incorporated into QA systems easily, avoiding some difficulties in conventional NLP (Natural Language Processing) problems. The experiments show that the results are satisfactory.
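
    The step from word-level similarity to sentence-level relevancy can be sketched as a best-match aggregation. The toy similarity table below stands in for the HowNet/Cilin-derived word similarity the paper actually computes; the scores and the averaging scheme are illustrative assumptions.

    ```python
    # Sentence relevancy from word similarity: for each word in one sentence,
    # take its best match in the other, average, and symmetrize.
    # SIM is a hypothetical stand-in for a HowNet/Cilin-based similarity measure.

    SIM = {
        ("car", "automobile"): 1.0,
        ("car", "vehicle"): 0.8,
        ("fast", "quick"): 0.9,
    }

    def word_sim(a, b):
        if a == b:
            return 1.0
        return SIM.get((a, b), SIM.get((b, a), 0.0))

    def sentence_relevancy(s1, s2):
        """Symmetrized average of best word-to-word matches."""
        def one_way(ws1, ws2):
            return sum(max(word_sim(w, v) for v in ws2) for w in ws1) / len(ws1)
        return (one_way(s1, s2) + one_way(s2, s1)) / 2

    score = sentence_relevancy(["car", "fast"], ["automobile", "quick"])  # → 0.95
    ```

    In a QA setting, the candidate answer maximizing this relevancy against the query would be returned.
    
    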

  3. Modern Embedded Computing Designing Connected, Pervasive, Media-Rich Systems

    CERN Document Server

    Barry, Peter

    2012-01-01

    Modern embedded systems are used for connected, media-rich, and highly integrated handheld devices such as mobile phones, digital cameras, and MP3 players. All of these embedded systems require networking, graphic user interfaces, and integration with PCs, as opposed to traditional embedded processors that can perform only limited functions for industrial applications. While most books focus on these controllers, Modern Embedded Computing provides a thorough understanding of the platform architecture of modern embedded computing systems that drive mobile devices. The book offers a comprehen

  4. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un

  5. SLA for E-Learning System Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Doaa Elmatary

    2015-10-01

    Full Text Available The Service Level Agreement (SLA) becomes an important issue, especially for Cloud Computing and online services based on the 'pay-as-you-use' fashion. Establishing SLAs, which can be defined as a negotiation between the service provider and the user, is needed for many types of current applications such as E-Learning systems. The work in this paper presents an approach to optimizing the SLA parameters to serve any E-Learning system over a Cloud Computing platform, defining the negotiation process, a suitable framework, and the sequence diagram to accommodate E-Learning systems.

  6. An E-learning System based on Affective Computing

    Science.gov (United States)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning as a learning system has become very popular. But current e-learning systems cannot instruct students effectively, since they do not consider the emotional state in the context of instruction. The emergence of the theory of "affective computing" can address this problem: it makes the computer's intelligence more than purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with the teaching style chosen according to the student's personality traits. A "man-to-man" learning environment is built to simulate the traditional classroom's pedagogy in the system.

  7. The engineering design integration (EDIN) system. [digital computer program complex

    Science.gov (United States)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  8. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    The system runs on a low-cost embedded computer with very limited computational resources compared to an ordinary PC. It succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images; after the segmentation stage, the methods are primarily based on statistical analysis and inference. The regression statistics (i.e. R2) comparing system predictions with manual counts are 0.987 for counting honeybees, and 0.953 and 0.888 for measuring in-activity and out-activity, respectively. The experimental results demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. Besides, the computation-time results show that the Raspberry Pi is a viable solution for such a real-time video processing system.
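
    The background-subtraction segmentation step such a system relies on can be sketched with a running-average background model and a difference threshold. This is a generic NumPy illustration under assumed parameters (learning rate, threshold, toy frame), not the abstract's actual Raspberry Pi pipeline.

    ```python
    # Background subtraction sketch: maintain an exponential running average
    # of past frames as the background, then flag pixels that differ from it.
    # The 8x8 "scene" and the alpha/threshold values are made-up examples.
    import numpy as np

    def update_background(bg, frame, alpha=0.05):
        """Blend the new frame into the background model."""
        return (1 - alpha) * bg + alpha * frame

    def segment(bg, frame, thresh=30):
        """Boolean foreground mask: pixels far from the background model."""
        return np.abs(frame.astype(float) - bg) > thresh

    bg = np.full((8, 8), 10.0)       # static background
    frame = bg.copy()
    frame[2:4, 3:5] = 200.0          # a bright 2x2 moving object (a "bee")
    mask = segment(bg, frame)        # 4 foreground pixels
    bg = update_background(bg, frame)
    ```

    Downstream, connected components of the mask would be counted and tracked across frames to derive in/out activity.
    
    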

  9. Computer Aided Design System for Developing Musical Fountain Programs

    Institute of Scientific and Technical Information of China (English)

    刘丹; 张乃尧; 朱汉城

    2003-01-01

    A computer aided design system for developing musical fountain programs was developed with multiple functions such as intelligent design, 3-D animation, manual modification and synchronized motion to make the development process more efficient. The system first analyzed the music form and sentiment using many basic features of the music to select a basic fountain program. Then, this program is simulated with 3-D animation and modified manually to achieve the desired results. Finally, the program is transformed to a computer control program to control the musical fountain in time with the music. A prototype system for the musical fountain was also developed. It was tested with many styles of music and users were quite satisfied with its performance. By integrating various functions, the proposed computer aided design system for developing musical fountain programs greatly simplified the design of the musical fountain programs.

  10. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
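
    The core template-comparison idea can be sketched in a few lines: checksum the node's state in fixed-size blocks and ship only the blocks whose checksums differ from the stored template. The tiny block size and the use of CRC-32 are illustrative assumptions; the patent describes an rsync-like protocol with broadcast, not this exact code.

    ```python
    # Template-based checkpoint delta sketch: only blocks differing from the
    # template checkpoint need to be transmitted and stored.
    import zlib

    BLOCK = 4  # bytes per block; tiny for illustration

    def checksums(data):
        return [zlib.crc32(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

    def delta(template, current):
        """(index, bytes) pairs for blocks of `current` not matching the template."""
        tsums = checksums(template)
        out = []
        for i, c in enumerate(checksums(current)):
            if i >= len(tsums) or c != tsums[i]:
                out.append((i, current[i * BLOCK:(i + 1) * BLOCK]))
        return out

    template = b"AAAABBBBCCCC"          # previously stored checkpoint
    current  = b"AAAAXXXXCCCC"          # node state now
    changed = delta(template, current)  # only the middle block differs
    ```

    Compressing the surviving blocks, as the abstract notes, would shrink the checkpoint further.
    
    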

  11. Dynamic detection for computer virus based on immune system

    Institute of Scientific and Technical Information of China (English)

    LI Tao

    2008-01-01

    Inspired by the biological immune system, a new dynamic detection model for computer viruses based on the immune system is proposed, and a quantitative description of the model is given. The problem of dynamically describing self and nonself in a computer virus immune system is solved, which reduces the size of the self set. The new concept of dynamic tolerance, as well as new mechanisms of gene evolution and gene coding for immature detectors, is presented, improving the generating efficiency of mature detectors and reducing the false-negative and false-positive rates. The difficult problem of the detector training cost being exponentially related to the size of the self set in a traditional computer immune system is thus overcome. Theoretical analysis and experimental results show that the proposed model has better time efficiency and detecting ability than the classic model ARTIS.
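
    The negative-selection principle underlying such immune-based detectors can be sketched as follows: candidate detectors that match any "self" string are discarded during training, and the survivors flag "nonself" input. The r-contiguous matching rule and the 4-bit strings are textbook illustrations, not the paper's model, which adds dynamic tolerance and gene evolution on top of this basic scheme.

    ```python
    # Negative selection sketch: train detectors that tolerate self and
    # therefore react only to nonself (e.g., virus-like) patterns.

    def match(detector, sample, r=3):
        """r-contiguous rule: at least r contiguous positions agree."""
        run = best = 0
        for d, s in zip(detector, sample):
            run = run + 1 if d == s else 0
            best = max(best, run)
        return best >= r

    def train(candidates, self_set, r=3):
        """Keep only candidates that do not match any self string."""
        return [c for c in candidates if not any(match(c, s, r) for s in self_set)]

    self_set = ["0000", "0001"]                 # known-benign patterns
    candidates = ["0001", "1110", "1111"]       # randomly generated detectors
    detectors = train(candidates, self_set)     # "0001" matches self, dropped
    ```

    A sample is then flagged as nonself if any surviving detector matches it.
    
    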

  12. A Computer System for a Faculty of Education.

    Science.gov (United States)

    Hallworth, Herbert J.

    A computer system, introduced for use in statistics courses within a college of education, features the performance of a variety of functions, a relatively economic operation, and the facilitation of placing remote terminals in schools. The system provides an interactive statistics laboratory in which the student learns to write programs for the…

  13. Computer System Reliability Allocation Method and Supporting Tool

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a computer system reliability allocation method based on the theory of statistics and Markov chains, which can be used to allocate reliability to subsystems, hybrid systems and software modules. A relevant supporting tool built by us is introduced.

  14. A review of residential computer oriented energy control systems

    Energy Technology Data Exchange (ETDEWEB)

    North, Greg

    2000-07-01

    The purpose of this report is to bring together as much information on Residential Computer Oriented Energy Control Systems as possible within a single document. This report identifies the main elements of the system and is intended to provide many technical options for the design and implementation of various energy related services.

  15. Load flow computations in hybrid transmission - distributed power systems

    NARCIS (Netherlands)

    Wobbes, E.D.; Lahaye, D.J.P.

    2013-01-01

    We interconnect transmission and distribution power systems and perform load flow computations in the hybrid network. In the largest example we managed to build, fifty copies of a distribution network consisting of fifteen nodes are connected to the UCTE study model, resulting in a system consisting

  16. Computer-Aided Communication Satellite System Analysis and Optimization.

    Science.gov (United States)

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  17. Motivating Constraints of a Pedagogy-Embedded Computer Algebra System

    Science.gov (United States)

    Dana-Picard, Thierry

    2007-01-01

    The constraints of a computer algebra system (CAS) generally induce limitations on its usage. Via the pedagogical features implemented in such a system, "motivating constraints" can appear, encouraging advanced theoretical learning, providing a broader mathematical knowledge and more profound mathematical understanding. We discuss this issue,…

  18. Improving Computer Based Speech Therapy Using a Fuzzy Expert System

    OpenAIRE

    Ovidiu Andrei Schipor; Stefan Gheorghe Pentiuc; Maria Doina Schipor

    2012-01-01

    In this paper we present our work on optimizing Computer Based Speech Therapy systems. We focus especially on using a fuzzy expert system to determine specific parameters of personalized therapy, i.e. the number, length and content of training sessions. The efficiency of this new approach was tested in an experiment performed with our CBST, named LOGOMON.

  19. Demonstrating Operating System Principles via Computer Forensics Exercises

    Science.gov (United States)

    Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram

    2010-01-01

    We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…

  1. Optical character recognition systems for different languages with soft computing

    CERN Document Server

    Chaudhuri, Arindam; Badelia, Pratixa; K Ghosh, Soumya

    2017-01-01

    The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested on real texts written in different languages, such as English, French, German, Latin, Hindi and Gujrati, extracted from publicly available datasets. The simulation studies, which are reported in detail here, show that soft-computing-based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will also be useful for professionals in computer vision and image processing dealing with different issues related to optical character recognition.

  2. 8th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzynski, Marek; Wozniak, Michał; Zolnierek, Andrzej

    2013-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: biometrics; data stream classification and big data analytics; features, learning, and classifiers; image processing and computer vision; medical applications; miscellaneous applications; pattern recognition and image processing in robotics; speech and word recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers are researchers as well as students of computer science, artificial intelligence or robotics.

  3. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
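
    As a point of reference for the speedup claim, the classical baseline for the abstract's smallest case, a 2×2 linear system, can be sketched in a few lines. The matrix and right-hand side below are illustrative values, not the experiment's inputs:

    ```python
    import numpy as np

    # Classical baseline for the task in the abstract: solve A·x = b for
    # an illustrative 2×2 system. The quantum algorithm targets an
    # exponential speedup in N for suitable (sparse, well-conditioned)
    # systems; classically, dense solves cost O(N^3) in general.
    A = np.array([[1.5, 0.5],
                  [0.5, 1.5]])
    b = np.array([1.0, 0.0])

    x = np.linalg.solve(A, b)
    assert np.allclose(A @ x, b)   # verify the solution
    print(x)                       # → [ 0.75 -0.25]
    ```

    For N beyond a few thousand variables this direct approach becomes the bottleneck the quantum algorithm is meant to address.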

  4. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been shown in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, processing large volumes of SRS data is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to explore applications in both academia and industry, and it holds considerable promise for biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then ran comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  5. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej

    2016-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence, and this book is the most comprehensive study of the field. It contains a collection of 79 carefully selected articles contributed by experts in pattern recognition and reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Features, learning, and classifiers; Biometrics; Data stream classification and big data analytics; Image processing and computer vision; Medical applications; Applications; RGB-D perception: recent developments and applications. This book is a valuable reference for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers include researchers as well as students of computer science, artificial intelligence and robotics.

  6. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting and introspective, with self-healing capability under improper operation, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems; (2) secure information-flow microarchitecture; (3) memory-centric security architecture; (4) authentication control and its implications for security; (5) digital rights management; (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  7. Fault-tolerant clock synchronization validation methodology. [in computer systems

    Science.gov (United States)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
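
    The experimental step described above, turning measured clock-read errors into a probability that the assumed upper bound is exceeded, can be sketched as follows. The bound and the error distribution here are invented for illustration; in the actual validation they would come from instrumented measurements of the running system:

    ```python
    import random

    # Hypothetical sketch: estimate P(clock read error > assumed bound)
    # from measured samples, the stochastic quantity that feeds the
    # reliability analysis. Values below are illustrative only.
    random.seed(42)
    read_error_bound_us = 50.0                       # assumed upper bound (microseconds)
    samples = [abs(random.gauss(0, 15)) for _ in range(100_000)]

    exceed = sum(1 for e in samples if e > read_error_bound_us)
    p_exceed = exceed / len(samples)                 # empirical tail probability
    print(f"P(error > bound) ≈ {p_exceed:.2e}")
    ```

    In the paper's method this tail probability is then folded into the detailed reliability analysis of the synchronization subsystem.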

  8. SD-CAS: Spin Dynamics by Computer Algebra System.

    Science.gov (United States)

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that SD-CAS uses no matrix representation for spin operators, giving the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists, whose size increases linearly with the number of spins in the system, and are easily mapped into analytical expressions in terms of spin operator products. For these SD-CAS spin correlations, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for symbolic spin dynamics computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality is demonstrated on a few illustrative examples.
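
    SD-CAS itself deliberately avoids matrix representations, but the spin-1/2 commutation relations its symbolic routines encode, such as [Sx, Sy] = i·Sz, are easy to check numerically with the standard representation Sk = σk/2. A quick sanity check (not SD-CAS code):

    ```python
    import numpy as np

    # Spin-1/2 operators in the conventional matrix representation
    # Sk = σk/2 (Pauli matrices halved). SD-CAS manipulates these
    # relations symbolically; here we only verify one numerically.
    sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
    sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

    commutator = sx @ sy - sy @ sx
    assert np.allclose(commutator, 1j * sz)   # [Sx, Sy] = i·Sz
    ```

    A fully symbolic treatment, as in SD-CAS, keeps such products as abstract operator expressions instead of 2^n-dimensional matrices, which is what makes the linear scaling in spin count possible.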

  9. Radiation Tolerant, FPGA-Based SmallSat Computer System

    Science.gov (United States)

    LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew

    2015-01-01

    The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.

  10. COMPUTER VISION APPLIED IN THE PRECISION CONTROL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Computer vision and its application in precision control systems are discussed. In the fabrication process, the accuracy of the products should be controlled reasonably and completely: the precision should be maintained and adjusted according to feedback obtained from on-line or off-line measurement in different procedures, and computer vision is one useful method for doing this. Computer vision and image manipulation are presented, and on this basis an n-dimensional vector for appraising machining precision is given.

  11. Research on the Teaching System of the University Computer Foundation

    OpenAIRE

    2016-01-01

    Inonal students, the teaching contents are classified and hierarchical teaching methods combined with professional-level training are adopted, as well as comprehensive after-class training methods for top-notch students; an online Q&A and test platform is established to strengthen the integration of professional education and computer education in the training system of the college computer basic course of study and exploration, and the popularization and application of the basic ...

  12. Object-oriented models of functionally integrated computer systems

    OpenAIRE

    Kaasbøll, Jens

    1994-01-01

    Functional integration is the compatibility between the structure, culture and competence of an organization and its computer systems, specifically the availability of data and functionality and the consistency of user interfaces. Many people use more than one computer program in their work, and they experience problems relating to functional integration. Various solutions can be considered for different tasks and technologies, e.g. designing a common user interface shell for several application...

  13. Software Requirements for a System to Compute Mean Failure Cost

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia; Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL; Mili, Ali [New Jersey Insitute of Technology

    2010-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss sustained by each stakeholder. We also demonstrated this infrastructure through the results of security breakdowns for an e-commerce case. In this paper, we illustrate this infrastructure with an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.
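
    The MFC computation has the shape of a chain of matrix products: to our reading of this line of work, a stakes matrix ST (stakeholders × requirements) is combined with a dependency matrix DP (requirements × components), an impact matrix IM (components × threats) and a threat-probability vector PT. The structure and all numbers below are assumptions for illustration, not values from the e-commerce case study:

    ```python
    import numpy as np

    # Hedged sketch of an MFC-style computation (illustrative data).
    ST = np.array([[100., 40.],      # $ lost per failed requirement, per stakeholder
                   [ 10., 80.]])
    DP = np.array([[0.9, 0.1],       # P(requirement fails | component fails)
                   [0.2, 0.8]])
    IM = np.array([[0.7, 0.1],       # P(component fails | threat materializes)
                   [0.1, 0.6]])
    PT = np.array([0.01, 0.05])      # P(threat materializes) per unit time

    MFC = ST @ DP @ IM @ PT          # one mean failure cost per stakeholder
    print(MFC)
    ```

    The appeal of this factored form is that each matrix can be estimated by a different expert (requirements analyst, architect, security analyst) and updated independently.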

  14. A distributed deadlock detection algorithm for mobile computing system

    Institute of Scientific and Technical Information of China (English)

    CHENG Xin; LIU Hong-wei; ZUO De-cheng; JIN Feng; YANG Xiao-zong

    2005-01-01

    The mode of mobile computing originated from distributed computing, but it has the un-idempotent operation property, so deadlock detection algorithms designed for mobile computing systems face challenges with regard to correctness and efficiency. This paper attempts a fundamental study of deadlock detection for the AND model of mobile computing systems. First, the existing deadlock detection algorithms for distributed systems are classified into resource-node-dependent (RD) and resource-node-independent (RI) categories, and their corresponding weaknesses are discussed. A new RI algorithm based on the AND model of mobile computing systems is then presented. The novelties of our algorithm are that: 1) blocked nodes inform their predecessors and successors simultaneously; 2) the detection messages (agents) hold the predecessor information of their originator; 3) no agent is stored midway. Additionally, a quit-inform scheme is introduced to treat the excessive victim-quitting problem raised by overlapped cycles. By these methods the proposed algorithm can detect a cycle of size n within n - 2 steps and with (n^2 - n - 2)/2 agents. The performance of our algorithm is compared with the most competitive RD and RI algorithms for distributed systems on a mobile agent simulation platform. Experimental results show that our algorithm outperforms the two algorithms under the vast majority of resource configurations and concurrent workloads. The correctness of the proposed algorithm is formally proven by the invariant verification technique.
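
    The paper's algorithm is distributed and agent-based, but the underlying question it answers is cycle detection in a wait-for graph. As a point of reference only (not the paper's method), a minimal centralized sketch for the AND model:

    ```python
    # Centralized wait-for-graph cycle detection via depth-first search.
    # In the AND model a node is deadlocked if it lies on any cycle.
    def has_deadlock(wait_for):
        """wait_for: dict mapping a node to the nodes it waits on."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {n: WHITE for n in wait_for}

        def dfs(node):
            color[node] = GRAY
            for succ in wait_for.get(node, ()):
                if color.get(succ, WHITE) == GRAY:      # back edge => cycle
                    return True
                if color.get(succ, WHITE) == WHITE and dfs(succ):
                    return True
            color[node] = BLACK
            return False

        return any(dfs(n) for n in list(wait_for) if color[n] == WHITE)

    print(has_deadlock({"A": ["B"], "B": ["C"], "C": ["A"]}))   # cycle A→B→C→A → True
    print(has_deadlock({"A": ["B"], "B": ["C"], "C": []}))      # no cycle → False
    ```

    The contribution of the RI algorithm above is doing this without a central wait-for graph: agents carrying predecessor information traverse the dependency edges, which is what yields the n - 2 step bound.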

  15. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. the Intel Itanium and Xeon, used in SGI Altix systems and clusters respectively; 4. the IBM System-on-a-Chip used in IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor used in the NEC SX-6/7; 8. the Power 4+ processor used in the Hitachi SR11000; 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming-model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  16. Architectural requirements for the Red Storm computing system.

    Energy Technology Data Exchange (ETDEWEB)

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight-kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and the system is expected to be operated for a minimum of five years following installation.

  17. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors, as well as systems that combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  18. Cluster based parallel database management system for data intensive computing

    Institute of Scientific and Technical Information of China (English)

    Jianzhong LI; Wei ZHANG

    2009-01-01

    This paper describes a computer-cluster based parallel database management system (DBMS), InfiniteDB, developed by the authors. InfiniteDB aims at efficiently supporting data intensive computing in response to the rapid growth in database size and the need for high performance analysis of massive databases. It can be efficiently executed in computing systems composed of thousands of computers, such as cloud computing systems. It supports intra-query, inter-query, intra-operation, inter-operation and pipelined parallelism. It provides effective strategies for managing massive databases, including multiple data declustering methods, declustering-aware algorithms for relational and other database operations, and an adaptive query optimization method. It also provides functions for parallel data warehousing and data mining, a coordinator-wrapper mechanism to support the integration of heterogeneous information resources on the Internet, and fault tolerant and resilient infrastructures. It has been used in many applications and has proved quite effective for data intensive computing.

  19. Computational requirements for on-orbit identification of space systems

    Science.gov (United States)

    Hadaegh, Fred Y.

    1988-01-01

    Future space systems will require an on-orbit identification (ID) capability to complement on-orbit control, because the dynamics of large space structures, spacecraft, and antennas will not be known sufficiently from ground modeling and testing. The computational requirements for ID of flexible structures such as the space station (SS) or large deployable reflectors (LDR) are, however, extensive due to the large number of modes, sensors, and actuators. For these systems the ID algorithm operations need not be computed in real time, only in near real time or an appropriate mission time. Consequently, space systems will need advanced processors and efficient parallel processing algorithm designs and architectures to implement the identification algorithms in near real time. The MAX computer currently being developed may handle such computational requirements. The purpose here is to specify the on-board computational requirements for dynamic and static identification of large space structures. The computational requirements for six ID algorithms are presented in the context of three examples: the JPL/AFAL ground antenna facility, the space station (SS), and the large deployable reflector (LDR).

  20. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Science.gov (United States)

    2013-03-26

    ... HUMAN SERVICES Food and Drug Administration Guidance for Industry: Blood Establishment Computer System... ``Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April... establishment computer system validation program, consistent with recognized principles of software...