WorldWideScience

Sample records for reliability analysis workstation

  1. From LESSEPS to the workstation for reliability engineers

    International Nuclear Information System (INIS)

    Ancelin, C.; Bouissou, M.; Collet, J.; Gallois, M.; Magne, L.; Villatte, N.; Yedid, C.; Mulet-Marquis, D.

    1994-01-01

    Three Mile Island and Chernobyl in the nuclear industry, Challenger in the space industry, Seveso and Bhopal in the chemical industry - all these accidents show how difficult it is to forecast all likely accident scenarios that may occur in complex systems. This was, however, the objective of the probabilistic safety assessment (PSA) performed by EDF at the Paluel nuclear power plant. The full computerization of this study led to the LESSEPS project, aimed at automating three different steps: generation of reliability models based on the use of expert systems, qualitative and quantitative processing of these models using computer codes, and overall management of PSA studies. This paper presents the results obtained and the gradual transformation of this first generation of tools into a workstation aimed at integrating reliability studies at all stages of an industrial process. (author)

  2. Zoning and workstation analysis in interventional cardiology

    International Nuclear Information System (INIS)

    Degrange, J.P.

    2009-01-01

    As interventional cardiology can induce high doses not only to patients but also to personnel, the delimitation of regulated areas (zoning) and workstation analysis (dosimetry) are very important for radiation protection. This paper briefly recalls the methods and tools for the different steps of zoning and workstation analysis. It outlines the peculiarities of interventional cardiology, presents methods and tools adapted to interventional cardiology for zoning, and then discusses the same issues for workstation analysis. It also outlines specific problems that can be met and their possible adapted solutions.

  3. Advanced Satellite Workstation - An integrated workstation environment for operational support of satellite system planning and analysis

    Science.gov (United States)

    Hamilton, Marvin J.; Sutton, Stewart A.

    A prototype integrated environment, the Advanced Satellite Workstation (ASW), which was developed and delivered for evaluation and operator feedback in an operational satellite control center, is described. The current ASW hardware consists of a Sun workstation and a Macintosh II workstation connected via an Ethernet network, together with network hardware and software, a laser disk system, an optical storage system, and a telemetry data file interface. The central objective of ASW is to provide an intelligent decision support and training environment for operators/analysts of complex systems such as satellites. Compared to the many recent workstation implementations that incorporate graphical telemetry displays and expert systems, ASW provides a considerably broader look at intelligent, integrated environments for decision support, based on the premise that the central features of such an environment are intelligent data access and integrated toolsets.

  4. Physics analysis workstation

    International Nuclear Information System (INIS)

    Johnstad, H.

    1989-06-01

    The Physics Analysis Workstation (PAW) is a high-level program providing data presentation and statistical or mathematical analysis. PAW has been developed at CERN as an instrument to assist physicists in the analysis and presentation of their data. The program is interfaced to a high-level graphics package built on a basic underlying graphics layer; 3-D graphics capabilities are being implemented. The major objects in PAW are 1- or 2-dimensional binned event data with a fixed number of entries per event, vectors, functions, graphics pictures, and macros. Command input is handled by an integrated user-interface package, which allows a variety of choices for input, either as typed commands or in a tree-structured, menu-driven mode. 6 refs., 1 fig.
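    The binned event data that PAW manipulates can be illustrated with a minimal fixed-width 1-D histogram fill. This is a Python sketch for illustration only, not PAW code; the data values are invented:

```python
def fill_histogram(data, nbins, lo, hi):
    """1-D fixed-width binned histogram; entries outside [lo, hi) are dropped."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for x in data:
        if lo <= x < hi:
            counts[int((x - lo) / width)] += 1
    return counts

# Hypothetical event values binned into 5 bins over [0, 10):
h = fill_histogram([0.5, 1.5, 1.7, 2.5, 9.9, -1.0, 10.0], nbins=5, lo=0.0, hi=10.0)
print(h)  # → [3, 1, 0, 0, 1]
```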

  5. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade
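    The ejection-fraction module listed above reduces, at its core, to a ratio of end-diastolic and end-systolic volumes, EF = (EDV - ESV)/EDV. A minimal Python sketch, not CALIPSO's actual implementation; the volumes are hypothetical:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction from end-diastolic (EDV) and end-systolic (ESV) volumes."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 < EDV and 0 <= ESV <= EDV")
    return (edv_ml - esv_ml) / edv_ml

# A hypothetical ventricle with EDV 120 ml and ESV 50 ml:
print(round(ejection_fraction(120, 50), 3))  # → 0.583
```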

  6. A real-time data-acquisition and analysis system with distributed UNIX workstations

    International Nuclear Information System (INIS)

    Yamashita, H.; Miyamoto, K.; Maruyama, K.; Hirosawa, H.; Nakayoshi, K.; Emura, T.; Sumi, Y.

    1996-01-01

    A compact data-acquisition system using three RISC/UNIX workstations (SUN/SPARCstation) with real-time monitoring and analysis capabilities has been developed for the study of photonuclear reactions with the large-acceptance spectrometer TAGX. One workstation acquires data from memory modules in the front-end electronics (CAMAC and TKO) at a maximum speed of 300 Kbytes/s, where data size times instantaneous rate is 1 Kbyte x 300 Hz. Another workstation, which has real-time capability for run monitoring, receives the data through a buffer manager called NOVA. The third workstation analyzes the data and reconstructs the events. In addition to a general hardware and software description, priority settings and run control by shell scripts are described. This system has recently been used successfully in a two-month-long experiment. (orig.)

  7. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
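    The object-centroid tracking described above can be sketched as an intensity-weighted centroid over pixels that pass a grey-level threshold. This Python fragment is a simplified illustration, not the workstation's code; the frame and threshold values are invented:

```python
def centroid(image, threshold):
    """Intensity-weighted centroid of pixels at or above threshold.

    image: 2-D list of grey levels; returns (row, col) or None if no
    pixel passes the threshold."""
    total = sum_r = sum_c = 0.0
    for r, row in enumerate(image):
        for c, grey in enumerate(row):
            if grey >= threshold:
                total += grey
                sum_r += grey * r
                sum_c += grey * c
    if total == 0:
        return None
    return (sum_r / total, sum_c / total)

# A hypothetical 4x4 frame with a bright 2x2 blob:
frame = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
print(centroid(frame, 5))  # → (1.5, 1.5)
```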

  8. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    Science.gov (United States)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Digital image analysis of remotely sensed imagery is difficult because of the extensive calculations required. In the past, an expensive large- to medium-scale mainframe computer system was needed to perform these calculations. Smaller minicomputer-based systems are now used by many organizations for image-processing applications, but the costs for such systems still range from $100K to $300K. Recently, new developments have made the use of low-cost microcomputers for image processing and display feasible, namely the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed at Stanford University, and to the design and implementation of a workstation network.

  9. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the thorough standardization of its user interface, file system, and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using the standard TCP/IP protocol and stored locally on magnetic disk. The use of high-resolution screens (1024 x 768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.

  10. EPRI engineering workstation software - Discussion and demonstration

    International Nuclear Information System (INIS)

    Stewart, R.P.; Peterson, C.E.; Agee, L.J.

    1992-01-01

    Computing technology is undergoing significant changes with respect to engineering applications in the electric utility industry. These changes result mainly from the introduction of several UNIX workstations that provide mainframe computational capability at much lower cost. The workstations are being coupled with microcomputers through local area networks to give engineering groups a powerful and versatile analysis capability. PEGASYS, the Professional Engineering Graphic Analysis System, is a software package for use with engineering analysis codes executing in a workstation environment. PEGASYS has a menu-driven, user-friendly interface and provides pre-execution support for preparing input, graphical packages for post-execution analysis, and an on-line monitoring capability for engineering codes. The initial application of this software is with RETRAN-02 operating on an IBM RS/6000 workstation under X-Windows/UNIX and on a personal computer under DOS.

  11. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze image sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing, and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores, and other flying objects in various flight regimes, including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study the three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie-loop playback, slow-motion and freeze-frame display combined with digital image sharpening, noise reduction, contrast enhancement, and interactive image magnification; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence database generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  12. Workstations studies and radiation protection

    International Nuclear Information System (INIS)

    Lahaye, T.; Donadille, L.; Rehel, J.L.; Paquet, F.; Beneli, C.; Cordoliani, Y.S.; Vrigneaud, J.M.; Gauron, C.; Petrequin, A.; Frison, D.; Jeannin, B.; Charles, D.; Carballeda, G.; Crouail, P.; Valot, C.

    2006-01-01

    This day on workstation studies for worker follow-up was organised by the research and health section. Intended for company doctors, persons competent in radiation protection, and safety engineers, it presented examples of methodologies and applications in the medical and industrial domains and in research, thereby contributing to a better understanding and application of regulatory measures. Workstation analysis should allow a reduction of exposures and risks and lead to the optimization of the medical follow-up. The agenda covered the following subjects: evolution of the regulation on the delimitation of regulated zones where worker protection measures are strengthened; presentation of the I.R.S.N. guide to carrying out a workstation study; implementation of a workstation study: the case of radiology; workstation studies in the research area; should operational dosimetry be imposed in radiodiagnostic departments? the experience feedback of a person competent in radiation protection (P.C.R.) in a hospital environment; radiation protection: elaboration of a good-practice guide in the medical field; the activities file in nuclear power plants: a risk evaluation tool for prevention, with a methodological presentation and examples; isolated workstation study; the experience feedback of a service provider; contribution of ergonomics to the characterization of determinants in ionizing radiation exposure situations; workstation studies for internal contamination in fuel cycle facilities and the use of the results in medical follow-up; R.E.L.I.R.: the necessity of workstation studies; and consideration of the human factor. (N.C.)

  13. An approach to develop a PSA workstation in KAERI

    International Nuclear Information System (INIS)

    Kim, T. W.; Han, S. H.; Park, C. K.

    1995-01-01

    This paper describes three kinds of efforts toward the development of a PSA workstation at KAERI: development of a PSA tool (KIRAP), reliability database development, and living-PSA tool development. Korea has 9 nuclear power plants (NPPs) in operation and 9 NPPs under design or construction. For the NPPs recently constructed or designed, probabilistic safety assessments (PSAs) have been performed under government requirements. For these PSAs, the MS-DOS version of KIRAP has been used. For consistent data management and ease of handling the information needed in PSA, a PSA workstation, KIRAP-Win, is under development for the Windows environment. For the reliability database on component failure rates, human error rates, and common-cause failure rates, data used in international PSAs and reliability data handbooks are collected and processed for use in the PSAs of new Korean plants. Finally, an effort to develop a living-PSA tool at KAERI based on the dynamic PSA concept is described.

  14. Field analysis: approach to the design of teleoperator workstation

    International Nuclear Information System (INIS)

    Saint-Jean, T.; Lescoat, D.A.

    1986-04-01

    Following a brief review of the theoretical background, this paper characterizes a methodology for the design of teleoperation workstations. The methodology is illustrated by an example: a field analysis of a telemanipulation task in a hot cell. Practical findings are reported: an operating strategy that differed from the written procedure, team work organization, and differing skills. Recommendations are made regarding the writing of procedures, the training of personnel, and work organisation.

  15. Effect of Active Workstation on Energy Expenditure and Job Performance: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Cao, Chunmei; Liu, Yu; Zhu, Weimo; Ma, Jiangjun

    2016-05-01

    The recently developed active workstation could become a potential means for worksite physical activity and wellness promotion. The aim of this review was to quantitatively examine the effectiveness of active workstations on energy expenditure (EE) and job performance. The literature search was conducted in 6 databases (PubMed, SPORTDiscus, Web of Science, ProQuest, ScienceDirect, and Scopus) for articles published up to February 2014, from which a systematic review and meta-analysis was conducted. The cumulative analysis for EE showed a significant increase in EE when using an active workstation [mean effect size (MES): 1.47; 95% confidence interval (CI): 1.22 to 1.72]. The cumulative analysis for job performance indicated 2 findings: (1) the active workstation did not affect selective attention, processing speed, speech quality, reading comprehension, interpretation, or accuracy of transcription; and (2) it could decrease typing speed (MES: -0.55; CI: -0.88 to -0.21). Although some measures of job performance were significantly lower, others were not; as a result, there would be little effect on real-life work productivity given a good arrangement of job tasks.
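    The pooled mean effect size (MES) and 95% CI reported above are standard meta-analysis quantities. A minimal fixed-effect inverse-variance sketch in Python; the abstract does not state which pooling model the authors used, and the per-study values below are invented:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled effect size with a 95% CI."""
    weights = [1.0 / v for v in variances]
    mes = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return mes, (mes - 1.96 * se, mes + 1.96 * se)

# Three hypothetical studies (standardized mean differences and their variances):
mes, (lo, hi) = pooled_effect([1.2, 1.6, 1.5], [0.04, 0.09, 0.06])
print(round(mes, 2), round(lo, 2), round(hi, 2))  # → 1.38 1.11 1.65
```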

  16. SunFast: A sun workstation based, fuel analysis scoping tool for pressurized water reactors

    International Nuclear Information System (INIS)

    Bohnhoff, W.J.

    1991-05-01

    The objective of this research was to develop a fuel cycle scoping program for light water reactors and implement the program on a workstation class computer. Nuclear fuel management problems are quite formidable due to the many fuel arrangement options available. Therefore, an engineer must perform multigroup diffusion calculations for a variety of different strategies in order to determine an optimum core reload. Standard fine mesh finite difference codes result in a considerable computational cost. A better approach is to build upon the proven reliability of currently available mainframe computer programs, and improve the engineering efficiency by taking advantage of the most useful characteristic of workstations: enhanced man/machine interaction. This dissertation contains a description of the methods and a user's guide for the interactive fuel cycle scoping program, SunFast. SunFast provides computational speed and accuracy of solution along with a synergetic coupling between the user and the machine. It should prove to be a valuable tool when extensive sets of similar calculations must be done at a low cost as is the case for assessing fuel management strategies. 40 refs
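    The fine-mesh finite-difference calculations the abstract contrasts with SunFast's scoping approach can be illustrated by a one-group, one-dimensional fixed-source diffusion solve. This is a toy sketch, not SunFast code; the slab width, cross sections, and source are invented:

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def slab_flux(width, n, diff, sigma_a, source):
    """Flux in a homogeneous slab with zero-flux boundaries:
    -D phi'' + Sigma_a * phi = S, discretized by central differences."""
    h = width / (n + 1)
    sub = [-diff / h**2] * n
    main = [2 * diff / h**2 + sigma_a] * n
    sup = [-diff / h**2] * n
    rhs = [source] * n
    return solve_tridiagonal(sub, main, sup, rhs)

# Hypothetical 100-cm slab, D = 1 cm, Sigma_a = 0.02 /cm, S = 1:
phi = slab_flux(width=100.0, n=49, diff=1.0, sigma_a=0.02, source=1.0)
assert phi[24] == max(phi)  # flux peaks at the slab centre
```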

  17. VMware workstation

    CERN Document Server

    van Vugt, Sander

    2013-01-01

    This book is a practical, step-by-step guide to creating and managing virtual machines using VMware Workstation. VMware Workstation: No Experience Necessary is for developers as well as system administrators who want to efficiently set up a test environment. You should have basic networking knowledge; prior experience with virtual machines and VMware Player would be beneficial.

  18. Real-time monitoring/emergency response modeling workstation for a tritium facility

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sims, J.M.; Baskett, R.L.

    1993-01-01

    At Lawrence Livermore National Laboratory (LLNL) we have developed a real-time system to monitor two stacks on our tritium handling facility. The monitors transmit the stack data to a workstation, which computes a three-dimensional numerical model of atmospheric dispersion. The workstation also collects surface and upper-air data from meteorological towers and a sodar. The complex meteorological and terrain setting in the Livermore Valley demands a more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion than is afforded by Gaussian models: we experience both mountain-valley and sea-breeze flows. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on the workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the U.S. Department of Energy's Atmospheric Release Advisory Capability (ARAC) project. Faster workstations and real-time instruments allow the use of more complex three-dimensional models, which provides a foundation for building a real-time monitoring and emergency response workstation for a tritium facility. Each stack is monitored by two ion chambers.
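    For contrast with the 3-D MATHEW/ADPIC approach, the Gaussian plume model that the abstract finds inadequate for the Livermore Valley has a simple closed form. A hedged Python sketch; the parameter values are invented, and real applications take sigma_y/sigma_z from stability-class curves rather than fixed numbers:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration with ground reflection.

    q: source strength (g/s), u: wind speed (m/s), h: release height (m),
    y, z: crosswind and vertical receptor coordinates (m),
    sigma_y, sigma_z: dispersion parameters (m) at the downwind distance."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image source term
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical centreline ground-level concentration for a 30 m stack:
c = gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0, h=30.0, sigma_y=40.0, sigma_z=20.0)
print(f"{c:.2e} g/m^3")
```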

  19. Design and analysis of wudu’ (ablution) workstation for elderly in Malaysia

    Science.gov (United States)

    Aman, A.; Dawal, S. Z. M.; Rahman, N. I. A.

    2017-06-01

    The wudu’ (ablution) workstation is one of the facilities used by Muslims of all categories. At present, there are a number of design guidelines for praying facilities, but specifications for the wudu’ (ablution) area, especially for the elderly, are still lacking. It is therefore timely to develop an ergonomic wudu’ workstation that lets the elderly perform ablution independently and confidently. This study was conducted to design an ergonomic ablution unit for elderly Muslims in Malaysia. An ablution workstation was designed based on elderly anthropometric dimensions and was then analysed using CATIA V5R21 for posture investigation using RULA. The study identified the anthropometric dimensions significant to designing a wudu’ (ablution) workstation for elderly people, and can be considered a preliminary study for the development of an ergonomic ablution design for the elderly. This effort will become a significant social contribution to our elderly population in developing our nation holistically.

  20. NET remote workstation

    International Nuclear Information System (INIS)

    Leinemann, K.

    1990-10-01

    The goal of this NET study was to define the functionality of a remote handling workstation and its hardware and software architecture. The remote handling workstation has to fulfill two basic functions: (1) to provide the man-machine interface (MMI), that is, the interface to the control system of the maintenance equipment and to the working environment (telepresence), and (2) to provide high-level (task-level) supporting functions (software tools) during the maintenance work and in the preparation phase. Concerning the man-machine interface, an important module of the remote handling workstation, besides the standard components of man-machine interfacing, is a module for graphical scene presentation supplementing viewing by TV. The technique of integrated viewing is well known from JET BOOM and TARM control using the GBsim and KISMET software. For the integration of equipment-dependent MMI functions, the remote handling workstation provides a special software module interface. Task-level support of the operator is based on (1) spatial (geometric/kinematic) models, (2) remote handling procedure models, and (3) functional models of the equipment. These models and the related simulation modules are used for planning, programming, execution monitoring, and training. The workstation provides an intelligent handbook guiding the operator through planned procedures, illustrated by animated graphical sequences. For unplanned situations, decision aids are available. A central point of the architectural design was to guarantee high flexibility with respect to hardware and software. The remote handling workstation is therefore designed as an open system based on widely accepted standards, allowing the stepwise integration of the various modules, starting with the basic MMI and the spatial simulation as standard components. (orig./HP)

  1. Engineering workstation: Sensor modeling

    Science.gov (United States)

    Pavel, M.; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  2. The Impact of Ergonomically Designed Workstations on Shoulder EMG Activity during Carpet Weaving

    Directory of Open Access Journals (Sweden)

    Majid Motamedzade

    2014-12-01

    Background: The present study aimed to evaluate biomechanical exposure, as trapezius muscle activity, in female weavers working for a prolonged period at workstation A (suggested by previous studies) and workstation B (proposed by the present study). Methods: Electromyography data were collected from nine females during four hours at each ergonomically designed workstation at the Ergonomics Laboratory, Hamadan, Iran. The design criteria for the ergonomically designed workstations were: (1) weaving height (20 and 3 cm above elbow height for workstations A and B, respectively) and (2) seat type (10° and 0° forward-sloping seat for workstations A and B, respectively). Results: The amplitude probability distribution function (APDF) analysis showed that left and right upper trapezius muscle activity was almost similar at each workstation. Trapezius muscle activity at workstation A was significantly greater than at workstation B (P<0.001). Conclusion: In general, use of workstation B leads to significantly reduced muscle activity levels in the upper trapezius as compared to workstation A in weavers. Despite the positive impact of workstation B in reducing trapezius muscle activity, it seems that constrained postures of the upper arm during weaving may be associated with musculoskeletal symptoms.
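    The APDF analysis mentioned in the Results reports the amplitude levels not exceeded for given fractions of the recording time (commonly p = 0.1 for static, 0.5 for median, and 0.9 for peak load). A simplified empirical-quantile sketch in Python, not the study's actual processing; the %MVC samples are invented:

```python
def apdf(emg_amplitudes, probability):
    """Amplitude level the signal stays at or below for the given fraction
    of the recording (amplitude probability distribution function,
    evaluated from the sorted samples)."""
    s = sorted(emg_amplitudes)
    k = min(len(s) - 1, int(probability * len(s)))
    return s[k]

# Hypothetical %MVC samples; static (p=0.1), median (p=0.5), peak (p=0.9):
samples = [2, 3, 3, 4, 5, 5, 6, 7, 8, 12]
print(apdf(samples, 0.1), apdf(samples, 0.5), apdf(samples, 0.9))  # → 3 5 12
```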

  3. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences [in lieu of American National Standards Institute (ANSI) standards], and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  4. A real-time monitoring/emergency response modeling workstation for a tritium facility

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sims, J.M.; Baskett, R.L.

    1993-07-01

    At Lawrence Livermore National Laboratory (LLNL) we developed a real-time system to monitor two stacks on our tritium handling facility. The monitors transmit the stack data to a workstation, which computes a 3D numerical model of atmospheric dispersion. The workstation also collects surface and upper-air data from meteorological towers and a sodar. The complex meteorological and terrain setting in the Livermore Valley demands a more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion than is afforded by Gaussian models: we experience both mountain-valley and sea-breeze flows. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on the workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability (ARAC) [1,2] project.

  5. Portfolio: a prototype workstation for development and evaluation of tools for analysis and management of digital portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Friedman, Charles P.; Rosenman, Julian G.

    1998-01-01

    Purpose: The purpose of this investigation was to design and implement a prototype physician workstation, called PortFolio, as a platform for developing and evaluating, by means of controlled observer studies, user interfaces and interactive tools for analyzing and managing digital portal images. The first observer study was designed to measure physician acceptance of workstation technology, as an alternative to a view box, for inspection and analysis of portal images for detection of treatment setup errors. Methods and Materials: The observer study was conducted in a controlled experimental setting to evaluate physician acceptance of the prototype workstation technology exemplified by PortFolio. PortFolio incorporates a windows user interface, a compact kit of carefully selected image analysis tools, and an object-oriented data base infrastructure. The kit evaluated in the observer study included tools for contrast enhancement, registration, and multimodal image visualization. Acceptance was measured in the context of performing portal image analysis in a structured protocol designed to simulate clinical practice. The acceptability and usage patterns were measured from semistructured questionnaires and logs of user interactions. Results: Radiation oncologists, the subjects for this study, perceived the tools in PortFolio to be acceptable clinical aids. Concerns were expressed regarding user efficiency, particularly with respect to the image registration tools. Conclusions: The results of our observer study indicate that workstation technology is acceptable to radiation oncologists as an alternative to a view box for clinical detection of setup errors from digital portal images. Improvements in implementation, including more tools and a greater degree of automation in the image analysis tasks, are needed to make PortFolio more clinically practical

  6. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business workstation local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system, together with the relative cost differential between a mainframe upgrade and workstation technology, justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT-compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  7. Workstations take over conceptual design

    Science.gov (United States)

    Kidwell, George H.

    1987-01-01

    Workstations provide sufficient computing memory and speed for early evaluations of aircraft design alternatives to identify those worthy of further study. It is recommended that the programming of such machines permit integrated calculations of the configuration and performance analysis of new concepts, along with the capability of changing up to 100 variables at a time and swiftly viewing the results. Computations can be augmented through links to mainframes and supercomputers. Programming, particularly debugging, is enhanced by the capability of working with one program line at a time and having on-screen error indices available. Workstation networks permit on-line communication among users and with persons and computers outside the facility. Application of these capabilities is illustrated through a description of NASA-Ames design efforts for an oblique wing for a jet, performed on a MicroVAX network.

  8. Workstation studies and radiation protection

    Energy Technology Data Exchange (ETDEWEB)

    Lahaye, T. [Direction des relations du travail, 75 - Paris (France); Donadille, L.; Rehel, J.L.; Paquet, F. [Institut de Radioprotection et de Surete Nucleaire, 92 - Fontenay-aux-Roses (France); Beneli, C. [Paris-5 Univ., 75 (France); Cordoliani, Y.S. [Societe Francaise de Radioprotection, 92 - Fontenay-aux-Roses (France); Vrigneaud, J.M. [Assistance Publique - Hopitaux de Paris, 75 (France); Gauron, C. [Institut National de Recherche et de Securite, 75 - Paris (France); Petrequin, A.; Frison, D. [Association des Medecins du Travail des Salaries du Nucleaire (France); Jeannin, B. [Electricite de France (EDF), 75 - Paris (France); Charles, D. [Polinorsud (France); Carballeda, G. [cabinet Indigo Ergonomie, 33 - Merignac (France); Crouail, P. [Centre d' Etude sur l' Evaluation de la Protection dans le Domaine Nucleaire, 92 - Fontenay-aux-Roses (France); Valot, C. [IMASSA, 91 - Bretigny-sur-Orge (France)

    2006-07-01

    This day on workstation studies for worker follow-up was organised by the research and health section. Intended for company doctors, persons competent in radiation protection, and safety engineers, it presented examples of methodologies and applications in the medical, industrial, and research domains, thus contributing to a better understanding and application of regulatory measures. Workstation analysis should make it possible to reduce exposures and risks and lead to optimization of the medical follow-up. The agenda of the day covered the following subjects: evolution of the regulations on the demarcation of regulated zones where worker protection measures are strengthened; presentation of the I.R.S.N. guide to carrying out a workstation study; implementation of a workstation study: the case of radiology; workstation studies in the research area; should operational dosimetry be imposed in radiodiagnostic services? the experience feedback of a person competent in radiation protection (P.C.R.) in a hospital environment; radiation protection: elaboration of a good-practice guide in the medical field; the activities file in nuclear power plants: a risk evaluation tool for prevention, with a methodological presentation and examples; study of an isolated workstation; the experience feedback of a service provider; the contribution of ergonomics to characterizing the determinants of ionizing radiation exposure situations; workstation studies for internal contamination in fuel cycle facilities and the use of the results in the medical follow-up; R.E.L.I.R.: the necessity of workstation studies; and consideration of the human factor. (N.C.)

  9. ANL statement of site strategy for computing workstations

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R. (ed.); Boxberger, L.M.; Amiot, L.W.; Bretscher, M.E.; Engert, D.E.; Moszur, F.M.; Mueller, C.J.; O' Brien, D.E.; Schlesselman, C.G.; Troyer, L.J.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85) and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstation acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the Laboratory. The major system components of this hierarchical strategy are: supercomputers, parallel computers, centralized general-purpose computers, distributed multipurpose minicomputers, and computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  10. VAX Professional Workstation goes graphic

    International Nuclear Information System (INIS)

    Downward, J.G.

    1984-01-01

    The VAX Professional Workstation (VPW) is a collection of programs and procedures designed to provide an integrated workstation environment for the staff at KMS Fusion's research laboratories. During the past year numerous capabilities have been added to VPW, including support for VT125/VT240/4014 graphic workstations, editing windows, and additional desk utilities. Graphics workstation support allows users to create, edit, and modify graph data files, enter the data via a graphic tablet, create simple plots with DATATRIEVE or DECgraph on ReGIS terminals, or elaborate plots with TEKGRAPH on ReGIS or Tektronix terminals. Users may add error bars to the data and plot it interactively in a variety of ways. Users also can create and display viewgraphs. Hard copy output for a large network of office terminals is obtained by multiplexing each terminal's video output into a recently developed video multiplexer front-ending a single-channel video hard-copy unit

  11. An open architecture for medical image workstation

    Science.gov (United States)

    Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun

    2005-04-01

    To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems, while at the same time overcoming the performance constraints in transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we designed and implemented a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieving, viewing, and post-processing. This architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also developed a weakly coupled collaboration service, through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line, and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasound, Surgery, Clinics, and the Consultation Center. Given that each department has its particular requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.
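The caching role of a Memory PACS-style infrastructure can be illustrated with a minimal least-recently-used cache: recently viewed studies stay in memory so that display does not wait on DICOM network retrieval. The class and method names are hypothetical, and the abstract does not state the paper's actual eviction policy; this is only a sketch of the general idea.

```python
from collections import OrderedDict

class ImageCache:
    """Minimal LRU cache sketch: keep recently viewed studies in memory,
    falling back to a loader (e.g. a DICOM retrieve) on a miss."""
    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch              # fallback loader for cache misses
        self._store = OrderedDict()     # insertion order tracks recency
    def get(self, study_uid):
        if study_uid in self._store:
            self._store.move_to_end(study_uid)   # mark as recently used
            return self._store[study_uid]
        pixels = self.fetch(study_uid)           # slow path: network I/O
        self._store[study_uid] = pixels
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)      # evict least recently used
        return pixels
```

A pre-fetching layer would simply call `get` ahead of time for studies the user is likely to open next.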

  12. Files for workstations with ionizing radiation risks: variation in the use of gamma densitometers

    International Nuclear Information System (INIS)

    Tournadre, A.

    2008-01-01

    After a brief presentation of the different gamma densitometers proposed by MLPC to measure roadway density, and having outlined the support role of the provider, the author describes the form and content of workstation files for workstations exhibiting a risk related to ionizing radiation. He gives an analytical overview of dose calculation: analysis of instrument use phases, exposure duration, dose rates, and the way these dose rates are introduced in the workstation file. He sets out the procedures to be followed by the radiation protection expert within the company. He notes that workstation files are very useful as an information feedback tool

  13. [PACS-based endoscope image acquisition workstation].

    Science.gov (United States)

    Liu, J B; Zhuang, T G

    2001-01-01

    A practical PACS-based endoscope image acquisition workstation is introduced here. Using a multimedia video card, the endoscope video is digitized and captured, dynamically or statically, into the computer. This workstation provides a variety of functions, such as acquisition and display of the endoscope video, as well as editing, processing, management, storage, printing, and communication of related information. Together with other medical image workstations, it can make up the image sources of a hospital PACS. In addition, it can also act as an independent endoscopy diagnostic system.

  14. Non-contact methods for NDT of aeronautical structures : An image processing workstation for thermography

    OpenAIRE

    Azzarelli, Luciano; Chimenti, Massimo; Salvetti, Ovidio

    1992-01-01

    The main goals of the Istituto di Elaborazione della Informazione in Task 4, Subtasks 4.3.1 (Image Processing) and 4.3.2 (Workstation Architecture), were the study of thermogram features, the design of the architecture of a customized workstation, and the design of specialized algorithms for thermal image analysis. Thermogram features pertain to data acquisition, data archiving and data processing; following this general study, some basic requirements for the workstation were defined. "Data acqui...

  15. Office ergonomics: deficiencies in computer workstation design.

    Science.gov (United States)

    Shikdar, Ashraf A; Al-Kindi, Mahmoud A

    2007-01-01

    The objective of this research was to study and identify ergonomic deficiencies in computer workstation design in typical offices. Physical measurements and a questionnaire were used to study 40 workstations. Major ergonomic deficiencies were found in physical design and layout of the workstations, employee postures, work practices, and training. The consequences in terms of user health and other problems were significant. Forty-five percent of the employees used nonadjustable chairs, 48% of computers faced windows, 90% of the employees used computers more than 4 hrs/day, 45% of the employees adopted bent and unsupported back postures, and 20% used office tables for computers. Major problems reported were eyestrain (58%), shoulder pain (45%), back pain (43%), arm pain (35%), wrist pain (30%), and neck pain (30%). These results indicated serious ergonomic deficiencies in office computer workstation design, layout, and usage. Strategies to reduce or eliminate ergonomic deficiencies in computer workstation design were suggested.

  16. Imaging workstations for computer-aided primatology: promises and pitfalls.

    Science.gov (United States)

    Vannier, M W; Conroy, G C

    1989-01-01

    In this paper, the application of biomedical imaging workstations to primatology will be explained and evaluated. The technological basis, computer hardware and software aspects, and the various uses of several types of workstations will all be discussed. The types of workstations include: (1) Simple - these display-only workstations, which function as electronic light boxes, have applications as terminals to picture archiving and communication (PAC) systems. (2) Diagnostic reporting - image-processing workstations that include the ability to perform straightforward manipulations of gray scale and raw data values will be considered for operations such as histogram equalization (whether adaptive or global), gradient edge finders, contour generation, and region of interest, as well as other related functions. (3) Manipulation systems - three-dimensional modeling and computer graphics with application to radiation therapy treatment planning, and surgical planning and evaluation will be considered. A technology of prime importance in the function of these workstations lies in communications and networking. The hierarchical organization of an electronic computer network and workstation environment with the interrelationship of simple, diagnostic reporting, and manipulation workstations to a coaxial or fiber optic network will be analyzed.
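As an illustration of the "straightforward manipulations of gray scale" named for the diagnostic reporting workstations above, global histogram equalization can be sketched in a few lines. This is a minimal version of the standard algorithm, with an API invented for the example, not code from the paper.

```python
def equalize(image, levels=256):
    """Global histogram equalization for a grayscale image given as a
    list of rows of integer pixel values in [0, levels)."""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of the gray levels
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # smallest nonzero CDF value
    if n == cdf_min:                          # flat image: nothing to stretch
        return [row[:] for row in image]
    # Remap so the occupied gray range is stretched over [0, levels - 1]
    lut = [max(0, round((c - cdf_min) / (n - cdf_min) * (levels - 1))) for c in cdf]
    return [[lut[p] for p in row] for row in image]
```

An adaptive variant, as mentioned in the record, would apply the same remapping per local tile rather than globally.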

  17. UWGSP6: a diagnostic radiology workstation of the future

    Science.gov (United States)

    Milton, Stuart W.; Han, Sang; Choi, Hyung-Sik; Kim, Yongmin

    1993-06-01

    The Univ. of Washington's Image Computing Systems Lab. (ICSL) has been involved in research into the development of a series of PACS workstations since the middle 1980's. The most recent research, a joint UW-IBM project, attempted to create a diagnostic radiology workstation using an IBM RISC System 6000 (RS6000) computer workstation and the X-Window system. While the results are encouraging, there are inherent limitations in the workstation hardware which prevent it from providing an acceptable level of functionality for diagnostic radiology. Realizing the RS6000 workstation's limitations, a parallel effort was initiated to design a workstation, UWGSP6 (Univ. of Washington Graphics System Processor #6), that provides the required functionality. This paper documents the design of UWGSP6, which not only addresses the requirements for a diagnostic radiology workstation in terms of display resolution, response time, etc., but also includes the processing performance necessary to support key functions needed in the implementation of algorithms for computer-aided diagnosis. The paper includes a description of the workstation architecture, and specifically its image processing subsystem. Verification of the design through hardware simulation is then discussed, and finally, performance of selected algorithms based on detailed simulation is provided.

  18. The concepts and functions of a FEM workstation

    International Nuclear Information System (INIS)

    Brown, R.R.; Gloudeman, J.F.

    1982-01-01

    Recent advances in microprocessor-based computer hardware and associated software provide a basis for the development of a FEM workstation. The key requirements for such a workstation are reviewed and the recent hardware and software developments are discussed that make such a workstation both technically and economically feasible at this time. (orig.)

  19. Nuclear plant analyzer desktop workstation

    International Nuclear Information System (INIS)

    Beelman, R.J.

    1990-01-01

    In 1983 the U.S. Nuclear Regulatory Commission (USNRC) commissioned the Idaho National Engineering Laboratory (INEL) to develop a Nuclear Plant Analyzer (NPA). The NPA was envisioned as a graphical aid to assist reactor safety analysts in comprehending the results of thermal-hydraulic code calculations. The development was to proceed in three distinct phases culminating in a desktop reactor safety workstation. The desktop NPA is now complete. The desktop NPA is a microcomputer based reactor transient simulation, visualization and analysis tool developed at INEL to assist an analyst in evaluating the transient behavior of nuclear power plants by means of graphic displays. The NPA desktop workstation integrates advanced reactor simulation codes with online computer graphics allowing reactor plant transient simulation and graphical presentation of results. The graphics software, written exclusively in ANSI standard C and FORTRAN 77 and implemented over the UNIX/X-windows operating environment, is modular and is designed to interface to the NRC's suite of advanced thermal-hydraulic codes to the extent allowed by that code. Currently, full, interactive, desktop NPA capabilities are realized only with RELAP5

  20. A RISC/UNIX workstation second stage trigger

    International Nuclear Information System (INIS)

    Foreman, W.M.; Amann, J.F.; Fu, S.; Kozlowski, T.; Naivar, F.J.; Oothoudt, M.A.; Shelley, F.

    1992-01-01

    Recent advances in Reduced Instruction Set Computer (RISC) workstations have greatly altered the economics of processing power available for experiments. In addition VME interfaces available for many of these workstations make it possible to use them in experiment frontends for filtering and compressing data. Such a second stage trigger has been implemented at LAMPF using a commercially available workstation and VME interface. The implementation is described and measurements of data transfer speeds are presented in this paper

  1. Integrated telemedicine workstation for intercontinental grand rounds

    Science.gov (United States)

    Willis, Charles E.; Leckie, Robert G.; Brink, Linda; Goeringer, Fred

    1995-04-01

    The Telemedicine Spacebridge to Moscow was a series of intercontinental sessions sponsored jointly by NASA and the Moscow Academy of Medicine. To improve the quality of medical images presented, the MDIS Project developed a workstation for acquisition, storage, and interactive display of radiology and pathology images. The workstation was based on a Macintosh IIfx platform with a laser digitizer for radiographs and video capture capability for microscope images. Images were transmitted via the Russian Lyoutch satellite, which had only a single video channel available and no high-speed data channels. Two workstations were configured -- one for use at the Uniformed Services University of Health Sciences in Bethesda, MD, and the other for use at the Hospital of the Interior in Moscow, Russia. The two workstations were used many times during 16 sessions. As clinicians used the systems, we modified the original configuration to improve interactive use. This project demonstrated that numerous acquisition and output devices could be brought together in a single interactive workstation. The video images were satisfactory for remote consultation in a grand rounds format.

  2. Helical computed tomography and the workstation: introduction to a symbiosis

    International Nuclear Information System (INIS)

    Garcia-Santos, J.M.

    1997-01-01

    We present a brief introduction to the possibilities of a helical computed tomography system associated with a powerful workstation. The fast, volumetric mode of acquisition is the main advantage of this type of computed tomography. Studying the acquired information anatomically and radiopathologically on a workstation (thanks to multiplanar and 3D reconstruction) significantly increases our capacity for analysis in each patient. Only clinical and radiological experience will tell us the proper place of this symbiosis among our diagnostic tools. (Author) 11 refs

  3. The integrated workstation, a realtime data acquisition, analysis and display system

    International Nuclear Information System (INIS)

    Treadway, T.R. III.

    1991-05-01

    The Integrated Workstation was developed at Lawrence Livermore National Laboratory to consolidate the data from many widely dispersed systems in order to provide an overall indication of the enrichment performance of the Atomic Vapor Laser Isotope Separation experiments. To accomplish this task, a Hewlett Packard 9000/835 turboSRX was employed to acquire over 150 analog input signals. Following the data acquisition, a spreadsheet-type analysis package and interpreter was used to derive 300 additional values, the results of applying physics models to the raw data. The calculated values were then plotted and archived for post-run analysis and report generation. Both the modeling calculations and the real-time plot configurations can be dynamically reconfigured as needed. The typical sustained data acquisition and display rate of the system was 1 Hz, although rates exceeding 2.5 Hz were obtained. This paper discusses the instrumentation, architecture, implementation, usage, and results of this system in a set of experiments that occurred in 1989. 2 figs

  4. Comparison of computer workstation with film for detecting setup errors

    International Nuclear Information System (INIS)

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

    Purpose/Objective: Workstations designed for portal image interpretation by radiation oncologists provide image displays and image processing and analysis tools that differ significantly from the standard clinical practice of inspecting portal films on a light box. An implied but unproved assumption associated with the clinical implementation of workstation technology is that patient care is improved, or at least not adversely affected. The purpose of this investigation was to conduct observer studies to test the hypothesis that radiation oncologists can detect setup errors using a workstation at least as accurately as when following standard clinical practice. Materials and Methods: A workstation, PortFolio, was designed for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools to enhance images; align cross-hairs, field edges, and anatomic structures on reference and acquired images; measure distances and angles; and view registered images superimposed on one another. In a well-designed and carefully controlled observer study, nine radiation oncologists, including attendings and residents, used PortFolio to detect setup errors in realistic digitally reconstructed portal (DRPR) images computed from the NLM visible human data using a previously described approach. Compared with actual portal images, where absolute truth is ill defined or unknown, the DRPRs contained known translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Twenty DRPRs with randomly induced errors were computed for each site. The induced errors were constrained to a plane at the isocenter of the target volume and perpendicular to the central axis of the treatment beam. Images used in the study were also printed on film. Observers interpreted the film-based images using standard clinical practice. The images were reviewed in eight sessions. During each session five images were
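The induced errors described above are in-plane rigid transformations of the field relative to the target. A small helper that applies a rotation about the isocenter followed by a translation to 2-D field points illustrates the idea; this is a hypothetical sketch of how such known test errors might be generated, not the authors' code.

```python
import math

def induce_setup_error(points, angle_deg=0.0, shift=(0.0, 0.0)):
    """Apply a known in-plane rigid error to 2-D points given in
    isocenter coordinates: rotate about the isocenter (origin) by
    angle_deg, then translate by shift."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    dx, dy = shift
    return [(x * cos_a - y * sin_a + dx,
             x * sin_a + y * cos_a + dy)
            for x, y in points]
```

Because the transform parameters are known exactly, an observer's reported error can be scored against ground truth, which is the point of using synthetic images rather than clinical portals.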

  5. Inter-rater reliability of an observation-based ergonomics assessment checklist for office workers.

    Science.gov (United States)

    Pereira, Michelle Jessica; Straker, Leon Melville; Comans, Tracy Anne; Johnston, Venerina

    2016-12-01

    To establish the inter-rater reliability of an observation-based ergonomics assessment checklist for computer workers. A 37-item (38-item if a laptop was part of the workstation) comprehensive observational ergonomics assessment checklist comparable to government guidelines and up to date with empirical evidence was developed. Two trained practitioners assessed full-time office workers performing their usual computer-based work and evaluated the suitability of workstations used. Practitioners assessed each participant consecutively. The order of assessors was randomised, and the second assessor was blinded to the findings of the first. Unadjusted kappa coefficients between the raters were obtained for the overall checklist and subsections that were formed from question-items relevant to specific workstation equipment. Twenty-seven office workers were recruited. The inter-rater reliability between two trained practitioners achieved moderate to good reliability for all except one checklist component. This checklist has mostly moderate to good reliability between two trained practitioners. Practitioner Summary: This reliable ergonomics assessment checklist for computer workers was designed using accessible government guidelines and supplemented with up-to-date evidence. Employers in Queensland (Australia) can fulfil legislative requirements by using this reliable checklist to identify and subsequently address potential risk factors for work-related injury to provide a safe working environment.
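The unadjusted kappa coefficients reported above can be computed directly from the two raters' item-level judgments: observed agreement corrected for the agreement expected by chance. A minimal sketch (function name and example data are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unadjusted Cohen's kappa for two raters scoring the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement assumes the two raters assign categories independently
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters scoring 10 checklist items as "pass"/"fail"
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
print(round(cohens_kappa(a, b), 3))
```

Values around 0.4-0.6 are conventionally read as moderate agreement and 0.6-0.8 as good, which is the range the checklist components reached.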

  6. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The treatment draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework, and provides a history of human reliability analysis along with examples of the application of the systems approach

  7. Insulation coordination workstation for AC and DC substations

    International Nuclear Information System (INIS)

    Booth, R.R.; Hileman, A.R.

    1990-01-01

    The Insulation Coordination Workstation was designed to aid the substation design engineer in the insulation coordination process. The workstation utilizes state of the art computer technology to present a set of tools necessary for substation insulation coordination, and to support the decision making process for all aspects of insulation coordination. The workstation is currently being developed for personal computers supporting OS/2 Presentation Manager. Modern Computer-Aided Software Engineering (CASE) technology was utilized to create an easily expandable framework which currently consists of four modules, each accessing a central application database. The heart of the workstation is a library of user-friendly application programs for the calculation of important voltage stresses used for the evaluation of insulation coordination. The Oneline Diagram is a graphic interface for data entry into the EPRI distributed EMTP program, which allows the creation of complex systems on the CRT screen using simple mouse clicks and keyboard entries. Station shielding is graphically represented in the Geographic Viewport using a three-dimensional substation model, and the interactive plotting package allows plotting of EPRI EMTP output results on the CRT screen, printer, or pen plotter. The Insulation Coordination Workstation was designed by Advanced Systems Technology (AST), a division of ABB Power Systems, Inc., and sponsored by the Electric Power Research Institute under RP 2323-5, AC/DC Insulation Coordination Workstation

  8. The role of the mainframe terminated : mainframe versus workstation

    CERN Document Server

    Williams, D O

    1991-01-01

    I. What mainframes? - The surgeon-general has determined that you shall treat all costs with care (continental effects, discounts assumed, next month's or last month's prices, optimism of the reporter). II. Typical mainframe hardware. III. Typical mainframe software. IV. What workstations? VI. Typical workstation hardware. VII. Typical workstation software. VIII. Titan vs PDP-7s. IX. Historic answer. X. Amdahl's Law.

  9. A real-time monitoring/emergency response workstation using a 3-D numerical model initialized with SODAR

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-01

    Many workstation-based emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability project
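
The contrast drawn in this record is between simple single-tower Gaussian plume models and 3-D mass-consistent models such as MATHEW/ADPIC. As a rough illustration of the simpler end of that spectrum, the sketch below evaluates the standard Gaussian plume equation with a ground-reflection term; the release rate, wind speed, and dispersion coefficients are arbitrary example values, not parameters of the MATHEW/ADPIC system.

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (mass/m^3) at crosswind offset y (m) and
    height z (m), for release rate Q (mass/s), wind speed u (m/s), effective
    release height H (m), and dispersion coefficients sigma_y, sigma_z (m).
    Includes the usual ground-reflection (image source) term."""
    crosswind = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Ground-level centerline value for a ground release (illustrative numbers)
c = plume_concentration(Q=1.0, u=5.0, y=0.0, z=0.0, H=0.0,
                        sigma_y=20.0, sigma_z=10.0)
```

In a real assessment the sigmas would come from a stability classification and downwind distance; the 3-D models in this record exist precisely because such a closed-form estimate breaks down in complex terrain.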

  10. Evaluating biomechanics of user-selected sitting and standing computer workstation.

    Science.gov (United States)

    Lin, Michael Y; Barbir, Ana; Dennerlein, Jack T

    2017-11-01

    A standing computer workstation has now become a popular modern workplace intervention to reduce sedentary behavior at work. However, users' interactions with a standing computer workstation, and how these differ from a sitting workstation, need to be understood to assist in developing recommendations for use and setup. The study compared the differences in upper extremity posture and muscle activity between user-selected sitting and standing workstation setups. Twenty participants (10 females, 10 males) volunteered for the study. 3-D posture, surface electromyography, and user-reported discomfort were measured while participants completed simulated tasks with their self-selected workstation setups. The sitting computer workstation was associated with more non-neutral shoulder postures and greater shoulder muscle activity, while the standing computer workstation induced a greater wrist adduction angle and greater extensor carpi radialis muscle activity. The sitting computer workstation was also associated with greater shoulder abduction postural variation (90th-10th percentile), while the standing computer workstation was associated with greater variation for shoulder rotation and wrist extension. Users reported similar overall discomfort levels within the first 10 min of work but had more than twice as much discomfort while standing than sitting after 45 min, with most discomfort reported in the low back for standing and the shoulder for sitting. These measures provide insight into users' different interactions with sitting and standing workstations; alternating between the two configurations in short bouts may be a way of changing the loading pattern on the upper extremity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Development of PSA workstation KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The Advanced Research Group of the Korea Atomic Energy Research Institute has been developing the Probabilistic Safety Assessment (PSA) workstation KIRAP since 1992. This report describes the recent development activities for the PSA workstation KIRAP. The first is to develop and improve the methodologies for PSA quantification: the incorporation of a fault tree modularization technique, the improvement of the cut set generation method, the development of rule-based recovery, and the development of methodologies to solve fault trees with logical loops and to handle fault trees with several initiators. These methodologies are incorporated in the PSA quantification software KIRAP-CUT. The second is to convert the PSA modeling software, used in the DOS environment since 1987, to Windows. The developed software comprises the fault tree editor KWTREE, the event tree editor CONPAS, and the data manager KWDBMAN for event data and common cause failure (CCF) data. The development of the PSA workstation makes PSA modeling, quantification, and automation easier and faster. (author). 8 refs.
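
The cut set generation the report mentions is the core of fault-tree quantification. KIRAP-CUT's actual algorithms are not described here; the sketch below is a generic MOCUS-style top-down expansion over a hypothetical gate table, followed by removal of non-minimal cut sets.

```python
def minimal_cut_sets(gates, top):
    """Expand a fault tree into its minimal cut sets.

    `gates` maps a gate name to ("AND" | "OR", [child names]); any name not
    in `gates` is treated as a basic event. Gate/event names are invented
    for illustration; real trees also need loop and module handling."""
    def expand(node):
        if node not in gates:                      # basic event
            return [frozenset([node])]
        kind, children = gates[node]
        child_sets = [expand(c) for c in children]
        if kind == "OR":                           # union of alternatives
            return [s for cs in child_sets for s in cs]
        out = [frozenset()]                        # AND: cross-product union
        for cs in child_sets:
            out = [a | b for a in out for b in cs]
        return out

    candidates = set(expand(top))
    # keep only cut sets with no strict subset among the candidates
    minimal = [s for s in candidates
               if not any(t < s for t in candidates)]
    return sorted(minimal, key=sorted)

gates = {"TOP": ("AND", ["G1", "C"]), "G1": ("OR", ["A", "B"])}
cut_sets = minimal_cut_sets(gates, "TOP")   # [{A, C}, {B, C}]
```

The exhaustive expansion is exponential in the worst case, which is why production codes add modularization and truncation, two of the techniques the report lists.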

  12. Development of PSA workstation KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The Advanced Research Group of the Korea Atomic Energy Research Institute has been developing the Probabilistic Safety Assessment (PSA) workstation KIRAP since 1992. This report describes the recent development activities for the PSA workstation KIRAP. The first is to develop and improve the methodologies for PSA quantification: the incorporation of a fault tree modularization technique, the improvement of the cut set generation method, the development of rule-based recovery, and the development of methodologies to solve fault trees with logical loops and to handle fault trees with several initiators. These methodologies are incorporated in the PSA quantification software KIRAP-CUT. The second is to convert the PSA modeling software, used in the DOS environment since 1987, to Windows. The developed software comprises the fault tree editor KWTREE, the event tree editor CONPAS, and the data manager KWDBMAN for event data and common cause failure (CCF) data. The development of the PSA workstation makes PSA modeling, quantification, and automation easier and faster. (author). 8 refs

  13. A nuclear power plant system engineering workstation

    International Nuclear Information System (INIS)

    Mason, J.H.; Crosby, J.W.

    1989-01-01

    System engineers offer an approach for effective technical support for the operation and maintenance of nuclear power plants. System engineer groups are being set up by most utilities in the United States, and the Institute of Nuclear Power Operations (INPO) and the U.S. Nuclear Regulatory Commission (NRC) have endorsed the concept. The INPO Good Practice and a survey of system engineer programs in the southeastern United States provide descriptions of system engineering programs. The purpose of this paper is to describe a process for developing a design for a department-level information network of workstations for system engineering groups. The process includes the following: (1) application of a formal information engineering methodology; (2) analysis of system engineer functions and activities; (3) use of Electric Power Research Institute (EPRI) Plant Information Network (PIN) data; and (4) application of the Information Engineering Workbench. The resulting design for this system engineer workstation can provide a reference for the design of plant-specific systems

  14. A Next Generation BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin; Tauro, Sandeep

    2011-01-01

    We are developing a Next Generation BioPhotonics Workstation to be applied in research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and meta-materials.

  15. Radiology workstation for mammography: preliminary observations, eyetracker studies, and design

    Science.gov (United States)

    Beard, David V.; Johnston, Richard E.; Pisano, Etta D.; Hemminger, Bradley M.; Pizer, Stephen M.

    1991-07-01

    For the last four years, the UNC FilmPlane project has focused on constructing a radiology workstation facilitating CT interpretations equivalent to those with film and viewbox. Interpretation of multiple CT studies was originally chosen because handling such large numbers of images was considered to be one of the most difficult tasks that could be performed with a workstation. The authors extend the FilmPlane design to address mammography. The high resolution and contrast demands, coupled with the number of images often cross-compared, make mammography a difficult challenge for the workstation designer. This paper presents the results of preliminary work with workstation interpretation of mammography. Background material is presented to justify why the authors believe electronic mammographic workstations could improve health care delivery. The results of several observation sessions and a preliminary eyetracker study of multiple-study mammography interpretations are described. Finally, tentative conclusions about what a mammographic workstation might look like and how it would have to perform to meet clinical demands are presented.

  16. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

    This book covers the analysis and application of reliability, including the definition, importance, and historical background of reliability; the reliability function and failure rate; life distributions and reliability assumptions; the reliability of non-repairable and repairable systems; reliability sampling tests; failure analysis, such as analysis by FMEA and FTA, with cases; accelerated life testing, covering basic concepts, acceleration and acceleration factors, and analysis of accelerated life testing data; and maintenance policies for replacement and inspection.
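
Two of the topics listed, the reliability function and the failure rate, have a compact closed form in the constant-failure-rate (exponential) model that such texts typically introduce first. A minimal sketch, with an invented failure rate for illustration:

```python
import math

def reliability(t, lam):
    """Exponential model: probability of surviving to time t without
    failure, R(t) = exp(-lam * t); the hazard rate is the constant lam."""
    return math.exp(-lam * t)

def mttf(lam):
    """Mean time to failure for the exponential model, 1 / lam."""
    return 1.0 / lam

# Example: lam = 1e-3 failures/hour (hypothetical component)
r_1000h = reliability(1000.0, 1e-3)   # survival probability at 1000 h
```

Non-constant hazards (e.g. Weibull) replace the exponent with an integrated hazard, which is where the book's life-distribution chapters come in.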

  17. Post-deployment usability evaluation of a radiology workstation

    NARCIS (Netherlands)

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi A.; Oudkerk, Matthijs; Van Ooijen, Peter M. A.

    Objectives: To determine the number, nature and severity of usability issues radiologists encounter while using a commercially available radiology workstation in clinical practice, and to assess how well the results of a pre-deployment usability evaluation of this workstation generalize to clinical

  18. The safety monitor and RCM workstation as complementary tools in risk based maintenance optimization

    International Nuclear Information System (INIS)

    Rawson, P.D.

    2000-01-01

    Reliability Centred Maintenance (RCM) represents a proven technique for rendering maintenance activities safer, more effective, and less expensive in terms of system unavailability and resource management. However, it is believed that RCM can be enhanced by the additional consideration of operational plant risk. This paper discusses how two computer-based tools, i.e., the RCM Workstation and the Safety Monitor, can complement each other in helping to create a living preventive maintenance strategy. (author)

  19. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
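
The fault-tree approach described here, deriving system reliability from component reliability, reduces for independent components to series/parallel combinations. A minimal sketch of that arithmetic; the component values and the redundant-driver example are invented, not taken from the report:

```python
def series_reliability(rs):
    """System works only if every component works (any component failure
    fails the system, i.e. an OR gate over failures)."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel_reliability(rs):
    """Redundant components: system works if at least one works
    (an AND gate over failures)."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical example: two redundant gate drivers (each 0.90) in series
# with a single inverter stage (0.95)
system = series_reliability([parallel_reliability([0.90, 0.90]), 0.95])
```

Real fault trees mix these gates arbitrarily and handle shared (common-cause) events, which is what dedicated tools exist for.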

  20. [Design and development of the DSA digital subtraction workstation].

    Science.gov (United States)

    Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo

    2008-05-01

    According to patient examination requirements and the demands of all related departments, a DSA digital subtraction workstation was successfully designed; this paper introduces it and analyzes the characteristics of the video source of a DSA system manufactured by GE that has no DICOM standard interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are converted into DICOM format and can then be shared among different machines.

  1. Microbial Diagnostic Array Workstation (MDAW): a web server for diagnostic array data storage, sharing and analysis

    Directory of Open Access Journals (Sweden)

    Chang Yung-Fu

    2008-09-01

    Background: Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to traditional transcriptome arrays, due to the high-throughput nature of the arrays the data analysis requirements still form a bottleneck for their widespread use. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods: The Microbial Diagnostic Array Workstation (MDAW) is a database-driven application designed in MS Access with a front end designed in ASP.NET. Conclusion: MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays.

  2. The Temple Translator's Workstation Project

    National Research Council Canada - National Science Library

    Vanni, Michelle; Zajac, Remi

    1996-01-01

    .... The Temple Translator's Workstation is incorporated into a Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machine-translation...

  3. Utilization of a multimedia PACS workstation for surgical planning of epilepsy

    Science.gov (United States)

    Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.

    1997-05-01

    Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians utilize electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning of epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.

  4. Analysis on the influence of supply method on a workstation with the help of dynamic simulation

    Directory of Open Access Journals (Sweden)

    Gavriluță Alin

    2017-01-01

    Considering the need for flexibility in any manufacturing process, the choice of the supply method for an assembly workstation is a decision that can strongly influence its performance. Using dynamic simulation, this article compares the effect of three different supply methods on a workstation's cycle time: supply from stock, supply in the "strike zone", and synchronous supply. This study is part of an extended work that aims to compare, through 3D layout design and dynamic simulation, the effect of different supply methods on assembly line performance.

  5. The biomechanical and physiological effect of two dynamic workstations

    NARCIS (Netherlands)

    Botter, J.; Burford, E.M.; Commissaris, D.; Könemann, R.; Mastrigt, S.H.V.; Ellegast, R.P.

    2013-01-01

    The aim of this research paper was to investigate the effect, both biomechanically and physiologically, of two dynamic workstations currently available on the commercial market. The dynamic workstations tested, namely the Treadmill Desk by LifeSpan and the LifeBalance Station by RightAngle, were

  6. Design of a tritium decontamination workstation based on plasma cleaning

    International Nuclear Information System (INIS)

    Antoniazzi, A.B.; Shmayda, W.T.; Fishbien, B.F.

    1993-01-01

    A design for a tritium decontamination workstation based on plasma cleaning is presented. The activity of tritiated surfaces is significantly reduced through plasma-surface interactions within the workstation. Such a workstation in a tritium environment can routinely be used to decontaminate tritiated tools and components. The main advantage of such a station is the absence of low-level tritiated liquid waste. Gaseous tritiated species are the waste products, which can, with present technology, be separated and contained

  7. The scheme and implementing of workstation configuration for medical imaging information system

    International Nuclear Information System (INIS)

    Tao Yonghao; Miao Jingtao

    2002-01-01

    Objective: To discuss the workstation configuration scheme and implementation for a medical imaging information system adapted to the practical situation in China. Methods: The workstations were logically divided into PACS workstations and RIS workstations. The former were applied to three kinds of diagnostic practice: small-matrix images, large-matrix images, and high-resolution gray-scale display; the latter consisted of many different models depending on usage and workflow. Results: A dual-screen configuration for the image diagnostic workstation physically integrated the image viewing and reporting procedures. Small-matrix images such as CT or MR were read on 17 in (1 in = 2.54 cm) color monitors, while conventional X-ray diagnosis was implemented on 21 in color monitors or portrait-format gray-scale 2K by 2.5K monitors. All other RIS workstations not involved in image processing were set up with a common PC configuration. Conclusion: The essential principle for designing a workstation scheme for a medical imaging information system is to satisfy the basic requirements of medical image diagnosis while fitting the available investment

  8. Modelling of Energy Expenditure at Welding Workstations: Effect of ...

    African Journals Online (AJOL)

    The welding workstation usually generates intense heat during operations, which may affect the welder's health if not properly controlled, and can also affect the performance of the welder at work. Consequently, effort to control the conditions of the welding workstation is essential, and is therefore pursued in this paper.

  9. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    Science.gov (United States)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation essentially is an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  10. Workout at work: laboratory test of psychological and performance outcomes of active workstations.

    Science.gov (United States)

    Sliter, Michael; Yuan, Zhenyu

    2015-04-01

    With growing concerns over the obesity epidemic in the United States and other developed countries, many organizations have taken steps to incorporate healthy workplace practices. However, most workers are still sedentary throughout the day--a major contributor to individual weight gain. The current study sought to gather preliminary evidence of the efficacy of active workstations, which are a possible intervention that could increase employees' physical activity while they are working. We conducted an experimental study, in which boredom, task satisfaction, stress, arousal, and performance were evaluated and compared across 4 randomly assigned conditions: seated workstation, standing workstation, cycling workstation, and walking workstation. Additionally, body mass index (BMI) and exercise habits were examined as moderators to determine whether differences in these variables would relate to increased benefits in active conditions. The results (n = 180) showed general support for the benefits of walking workstations, whereby participants in the walking condition had higher satisfaction and arousal and experienced less boredom and stress than those in the passive conditions. Cycling workstations, on the other hand, tended to relate to reduced satisfaction and performance when compared with other conditions. The moderators did not impact these relationships, indicating that walking workstations might have psychological benefits to individuals, regardless of BMI and exercise habits. The results of this study are a preliminary step in understanding the work implications of active workstations. (c) 2015 APA, all rights reserved.

  11. Interpretation of digital breast tomosynthesis: preliminary study on comparison with picture archiving and communication system (PACS) and dedicated workstation.

    Science.gov (United States)

    Kim, Young Seon; Chang, Jung Min; Yi, Ann; Shin, Sung Ui; Lee, Myung Eun; Kim, Won Hwa; Cho, Nariya; Moon, Woo Kyung

    2017-08-01

    To compare the diagnostic accuracy and efficiency in the interpretation of digital breast tomosynthesis (DBT) images using a picture archiving and communication system (PACS) and a dedicated workstation. 97 DBT images obtained for screening or diagnostic purposes were stored in both a workstation and a PACS and retrospectively evaluated in combination with digital mammography by three independent radiologists. Breast Imaging-Reporting and Data System final assessments and likelihood of malignancy (%) were assigned, and the interpretation time when using the workstation and PACS was recorded. Receiver operating characteristic curve analysis, sensitivities and specificities were compared with histopathological examination and follow-up data as a reference standard. Area under the receiver operating characteristic curve values for cancer detection (0.839 vs 0.815, p = 0.6375) and sensitivity (81.8% vs 75.8%, p = 0.2188) showed no statistically significant differences between the workstation and PACS. However, specificity was significantly higher when analysing on the workstation than when using PACS (83.7% vs 76.9%, p = 0.009). When evaluating DBT images using PACS, only one case was deemed to require reanalysis on the workstation. The mean time to interpret DBT images on PACS (1.68 min/case) was significantly longer than that on the workstation (1.35 min/case) (p < 0.0001). Interpretation of DBT images using PACS showed comparable diagnostic performance to a dedicated workstation, even though it required a longer reading time. Advances in knowledge: Interpretation of DBT images using PACS is an alternative to evaluate the images when a dedicated workstation is not available.
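
The ROC comparison above rests on the area under the curve, which for a set of reader scores can be computed nonparametrically as the normalized Mann-Whitney U statistic. A small sketch of that computation; the scores are made up for illustration, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (positive, negative) pairs where the positive case is
    scored higher, counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical likelihood-of-malignancy scores for cancers vs non-cancers
example_auc = auc([0.5, 0.7], [0.5, 0.2])   # 0.875
```

Comparing two correlated AUCs (same cases, two display systems, as here) additionally needs a paired test such as DeLong's method, which is beyond this sketch.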

  12. The effects of FreeSurfer version, workstation type, and Macintosh operating system version on anatomical volume and cortical thickness measurements.

    Science.gov (United States)

    Gronenschild, Ed H B M; Habets, Petra; Jacobs, Heidi I L; Mengelers, Ron; Rozendaal, Nico; van Os, Jim; Marcelis, Machteld

    2012-01-01

    FreeSurfer is a popular software package for measuring cortical thickness and the volume of neuroanatomical structures. However, little if anything is known about measurement reliability across various data processing conditions. Using a set of 30 anatomical T1-weighted 3T MRI scans, we investigated the effects of data processing variables such as FreeSurfer version (v4.3.1, v4.5.0, and v5.0.0), workstation (Macintosh and Hewlett-Packard), and Macintosh operating system version (OSX 10.5 and OSX 10.6). Significant differences were revealed between FreeSurfer version v5.0.0 and the two earlier versions. These differences were on average 8.8 ± 6.6% (range 1.3-64.0%) (volume) and 2.8 ± 1.3% (1.1-7.7%) (cortical thickness). Differences about a factor of two smaller were detected between Macintosh and Hewlett-Packard workstations and between OSX 10.5 and OSX 10.6. The observed differences are similar in magnitude to effect sizes reported in accuracy evaluations and neurodegenerative studies. The main conclusion is that, in the context of an ongoing study, users are discouraged from updating to a new major release of either FreeSurfer or the operating system, or from switching to a different type of workstation, without repeating the analysis; these results thus give quantitative support to successive recommendations stated by FreeSurfer developers over the years. Moreover, in view of the large and significant cross-version differences, it is concluded that formal assessment of the accuracy of FreeSurfer is desirable.
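
The cross-version figures quoted above are mean ± SD percent differences between paired measurements. A sketch of how such summary figures can be computed from paired volumes; the values and the relative-to-mean convention are illustrative assumptions, not the paper's exact procedure:

```python
def percent_diffs(a, b):
    """Absolute percent difference of each measurement pair, expressed
    relative to the pair's mean."""
    return [abs(x - y) / ((x + y) / 2.0) * 100.0 for x, y in zip(a, b)]

def mean_sd(values):
    """Sample mean and (n-1) standard deviation."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return m, var ** 0.5

# Hypothetical hippocampal volumes (mm^3) from two FreeSurfer versions
d = percent_diffs([4100.0, 3950.0, 4300.0], [4210.0, 3900.0, 4150.0])
summary = mean_sd(d)
```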

  13. The effects of FreeSurfer version, workstation type, and Macintosh operating system version on anatomical volume and cortical thickness measurements.

    Directory of Open Access Journals (Sweden)

    Ed H B M Gronenschild

    FreeSurfer is a popular software package for measuring cortical thickness and the volume of neuroanatomical structures. However, little if anything is known about measurement reliability across various data processing conditions. Using a set of 30 anatomical T1-weighted 3T MRI scans, we investigated the effects of data processing variables such as FreeSurfer version (v4.3.1, v4.5.0, and v5.0.0), workstation (Macintosh and Hewlett-Packard), and Macintosh operating system version (OSX 10.5 and OSX 10.6). Significant differences were revealed between FreeSurfer version v5.0.0 and the two earlier versions. These differences were on average 8.8 ± 6.6% (range 1.3-64.0%) (volume) and 2.8 ± 1.3% (1.1-7.7%) (cortical thickness). Differences about a factor of two smaller were detected between Macintosh and Hewlett-Packard workstations and between OSX 10.5 and OSX 10.6. The observed differences are similar in magnitude to effect sizes reported in accuracy evaluations and neurodegenerative studies. The main conclusion is that, in the context of an ongoing study, users are discouraged from updating to a new major release of either FreeSurfer or the operating system, or from switching to a different type of workstation, without repeating the analysis; these results thus give quantitative support to successive recommendations stated by FreeSurfer developers over the years. Moreover, in view of the large and significant cross-version differences, it is concluded that formal assessment of the accuracy of FreeSurfer is desirable.

  14. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
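
The quantity all such methods estimate is the probability that a limit-state function g (capacity minus demand) goes negative. NESSUS uses fast probability integration for this; as a much simpler stand-in, the sketch below estimates the same probability by crude Monte Carlo sampling on an invented limit state with invented normal distributions:

```python
import random

def mc_failure_probability(n=100_000, seed=1):
    """Crude Monte Carlo estimate of P(g < 0) for g = capacity - demand.
    Distributions are hypothetical: capacity ~ N(10, 1), demand ~ N(7, 1.5).
    The analytic answer for this pair is about 0.048."""
    random.seed(seed)
    failures = 0
    for _ in range(n):
        capacity = random.gauss(10.0, 1.0)   # e.g. structural resistance
        demand = random.gauss(7.0, 1.5)      # e.g. thermal or flow load
        if capacity - demand < 0.0:
            failures += 1
    return failures / n

p_fail = mc_failure_probability()
```

Fast probability integration methods (FORM/SORM) get the same number from far fewer limit-state evaluations, which matters when each evaluation is a finite-element run.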

  15. PAW [Physics Analysis Workstation] at Fermilab: CORE based graphics implementation of HIGZ [High Level Interface to Graphics and Zebra

    International Nuclear Information System (INIS)

    Johnstad, H.

    1989-06-01

    The Physics Analysis Workstation system (PAW) is primarily intended to be the last link in the analysis chain of experimental data. The graphical part of PAW is based on HIGZ (High Level Interface to Graphics and Zebra), which is based on the ISO and ANSI standard Graphics Kernel System (GKS). HIGZ is written in the context of PAW. At Fermilab, the CORE based graphics system DI-3000 by Precision Visuals Inc., is widely used in the analysis of experimental data. The graphical part of the PAW routines has been totally rewritten and implemented in the Fermilab environment. 3 refs

  16. Test-retest reliability and concurrent validity of a web-based questionnaire measuring workstation and individual correlates of work postures during computer work

    NARCIS (Netherlands)

    IJmker, S.; Mikkers, J.; Blatter, B.M.; Beek, A.J. van der; Mechelen, W. van; Bongers, P.M.

    2008-01-01

    Introduction: "Ergonomic" questionnaires are widely used in epidemiological field studies to study the association between workstation characteristics, work posture and musculoskeletal disorders among office workers. Findings have been inconsistent regarding the putative adverse effect of work

  17. The Impact of Active Workstations on Workplace Productivity and Performance: A Systematic Review.

    Science.gov (United States)

    Ojo, Samson O; Bailey, Daniel P; Chater, Angel M; Hewson, David J

    2018-02-27

    Active workstations have been recommended for reducing sedentary behavior in the workplace. It is important to understand if the use of these workstations has an impact on worker productivity. The aim of this systematic review was to examine the effect of active workstations on workplace productivity and performance. A total of 3303 articles were initially identified by a systematic search and seven articles met eligibility criteria for inclusion. A quality appraisal was conducted to assess risk of bias, confounding, internal and external validity, and reporting. Most of the studies reported cognitive performance as opposed to productivity. Five studies assessed cognitive performance during use of an active workstation, usually in a single session. Sit-stand desks had no detrimental effect on performance; however, some studies with treadmill and cycling workstations identified potential decreases in performance. Many of the studies lacked the power required to achieve statistical significance. Three studies assessed workplace productivity after prolonged use of an active workstation for between 12 and 52 weeks. These studies reported no significant effect on productivity. Active workstations do not appear to decrease workplace performance.

  18. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program.
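The memory-access pattern that RR2 stresses, initializing a 512x512 matrix by skipping around the address space rather than touching consecutive cells, can be sketched as follows. This is an illustrative micro-benchmark, not the SCWEB FORTRAN code; the function names and timing harness are invented for the example.

```python
import time

N = 512

def init_sequential():
    # Touch consecutive cells in row-major order (cache friendly).
    m = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            m[i][j] = i + j
    return m

def init_strided():
    # Visit the same cells column-by-column over a row-major layout,
    # skipping around the address space as RR2 does.
    m = [[0] * N for _ in range(N)]
    for j in range(N):
        for i in range(N):
            m[i][j] = i + j
    return m

for f in (init_sequential, init_strided):
    t0 = time.perf_counter()
    f()
    print(f.__name__, round(time.perf_counter() - t0, 4), "s")
```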

  19. ARCIMBOLDO_LITE: single-workstation implementation and use.

    Science.gov (United States)

    Sammito, Massimo; Millán, Claudia; Frieske, Dawid; Rodríguez-Freire, Eloy; Borges, Rafael J; Usón, Isabel

    2015-09-01

    ARCIMBOLDO solves the phase problem at resolutions of around 2 Å or better through massive combination of small fragments and density modification. For complex structures, this imposes a need for a powerful grid where calculations can be distributed, but for structures with up to 200 amino acids in the asymmetric unit a single workstation may suffice. The use and performance of the single-workstation implementation, ARCIMBOLDO_LITE, on a pool of test structures with 40-120 amino acids and resolutions between 0.54 and 2.2 Å is described. Inbuilt polyalanine helices and iron cofactors are used as search fragments. ARCIMBOLDO_BORGES can also run on a single workstation to solve structures in this test set using precomputed libraries of local folds. The results of this study have been incorporated into an automated, resolution- and hardware-dependent parameterization. ARCIMBOLDO has been thoroughly rewritten and three binaries are now available: ARCIMBOLDO_LITE, ARCIMBOLDO_SHREDDER and ARCIMBOLDO_BORGES. The programs and libraries can be downloaded from http://chango.ibmb.csic.es/ARCIMBOLDO_LITE.

  20. Comparison of computer workstation with light box for detecting setup errors from portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Raghavan, Suraj; Coffey, Christopher S.; Major, Stacey A.; Muller, Keith E.

    1999-01-01

    Purpose: Observer studies were conducted to test the hypothesis that radiation oncologists using a computer workstation for portal image analysis can detect setup errors at least as accurately as when following standard clinical practice of inspecting portal films on a light box. Methods and Materials: In a controlled observer study, nine radiation oncologists used a computer workstation, called PortFolio, to detect setup errors in 40 realistic digitally reconstructed portal radiograph (DRPR) images. PortFolio is a prototype workstation for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools for image enhancement; alignment of crosshairs, field edges, and anatomic structures on reference and acquired images; measurement of distances and angles; and viewing registered images superimposed on one another. The test DRPRs contained known in-plane translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Test images used in the study were also printed on film for observers to view on a light box and interpret using standard clinical practice. The mean accuracy for error detection for each approach was measured and the results were compared using repeated measures analysis of variance (ANOVA) with the Geisser-Greenhouse test statistic. Results: The results indicate that radiation oncologists participating in this study could detect and quantify in-plane rotation and translation errors more accurately with PortFolio compared to standard clinical practice. Conclusions: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice

  1. Out-of-core nuclear fuel cycle optimization utilizing an engineering workstation

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Comes, S.A.

    1986-01-01

    Within the past several years, rapid advances in computer technology have resulted in substantial increases in their performance. The net effect is that problems that could previously only be executed on mainframe computers can now be executed on micro- and minicomputers. The authors are interested in developing an engineering workstation for nuclear fuel management applications. An engineering workstation is defined as a microcomputer with enhanced graphics and communication capabilities. Current fuel management applications range from using workstations as front-end/back-end processors for mainframe computers to completing fuel management scoping calculations. More recently, interest in using workstations for final in-core design calculations has appeared. The authors have used the VAX 11/750 minicomputer, which is not truly an engineering workstation but has comparable performance, to complete both in-core and out-of-core fuel management scoping studies. In this paper, the authors concentrate on our out-of-core research. While much previous work in this area has dealt with decisions concerned with equilibrium cycles, the current project addresses the more realistic situation of nonequilibrium cycles

  2. Assessment of a cooperative workstation.

    Science.gov (United States)

    Beuscart, R J; Molenda, S; Souf, N; Foucher, C; Beuscart-Zephir, M C

    1996-01-01

    Groupware and new Information Technologies have now made it possible for people in different places to work together in synchronous cooperation. Very often, designers of this new type of software are not provided with a model of the common workspace, which is prejudicial to software development and its acceptance by potential users. The authors take the example of a task of medical co-diagnosis, using a multi-media communication workstation. Synchronous cooperative work is made possible by using local ETHERNET or public ISDN Networks. A detailed ergonomic task analysis studies the cognitive functioning of the physicians involved, compares their behaviour in the normal and the mediatized situations, and leads to an interpretation of the likely causes for success or failure of CSCW tools.

  3. The Impact of Active Workstations on Workplace Productivity and Performance: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Samson O. Ojo

    2018-02-01

    Active workstations have been recommended for reducing sedentary behavior in the workplace. It is important to understand if the use of these workstations has an impact on worker productivity. The aim of this systematic review was to examine the effect of active workstations on workplace productivity and performance. A total of 3303 articles were initially identified by a systematic search and seven articles met eligibility criteria for inclusion. A quality appraisal was conducted to assess risk of bias, confounding, internal and external validity, and reporting. Most of the studies reported cognitive performance as opposed to productivity. Five studies assessed cognitive performance during use of an active workstation, usually in a single session. Sit-stand desks had no detrimental effect on performance; however, some studies with treadmill and cycling workstations identified potential decreases in performance. Many of the studies lacked the power required to achieve statistical significance. Three studies assessed workplace productivity after prolonged use of an active workstation for between 12 and 52 weeks. These studies reported no significant effect on productivity. Active workstations do not appear to decrease workplace performance.

  4. The transition of GTDS to the Unix workstation environment

    Science.gov (United States)

    Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.

    1995-01-01

    Future Flight Dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high performance workstation computing environments with Graphical User Interface. The port of the existing mainframe Flight Dynamics systems to the workstation environment offers an economic approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM Mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.

  5. Initial experience with a nuclear medicine viewing workstation

    Science.gov (United States)

    Witt, Robert M.; Burt, Robert W.

    1992-07-01

    Graphical User Interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for exclusive use of staff and resident physicians. The system is built upon a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both process and view functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of technician's time to about 5 minutes of physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.

  6. HUMAN RELIABILITY ANALYSIS USING THE COGNITIVE RELIABILITY AND ERROR ANALYSIS METHOD (CREAM) APPROACH

    Directory of Open Access Journals (Sweden)

    Zahirah Alifia Maulida

    2015-01-01

    Work accidents in grinding and welding have ranked highest over the last five years at PT. X. These accidents are caused by human error, which arises from the influence of the physical and non-physical working environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method for obtaining the Cognitive Failure Probability (CFP), which can be computed in two ways: the basic method and the extended method. The basic method yields only an overall failure probability, whereas the extended method yields a CFP for each task. The results show that the factors contributing to error in grinding and welding work are the adequacy of the organisation; the adequacy of the man-machine interface (MMI) and operational support; the availability of procedures/plans; and the adequacy of training and experience. The cognitive activity with the highest error value in grinding work is planning, with a CFP of 0.3, and in welding work it is execution, with a CFP of 0.18. To reduce cognitive error in grinding and welding work, the recommendations are to provide regular training, more detailed work instructions, and familiarisation with the tools. Keywords: CREAM (Cognitive Reliability and Error Analysis Method), HRA (human reliability analysis), cognitive error
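The extended-method computation the study applies, adjusting a nominal cognitive failure probability by weighting factors for the common performance conditions, can be sketched as follows. The nominal CFPs and CPC weights below are hypothetical placeholders, not values from the study.

```python
# Nominal cognitive failure probabilities per cognitive function and
# the CPC weighting factors below are hypothetical placeholders.
NOMINAL_CFP = {
    "observation": 0.003,
    "interpretation": 0.01,
    "planning": 0.01,
    "execution": 0.003,
}

def adjusted_cfp(function, cpc_weights):
    """Extended-method style CFP: nominal value times the product of
    the Common Performance Condition weighting factors."""
    cfp = NOMINAL_CFP[function]
    for w in cpc_weights:
        cfp *= w
    return min(cfp, 1.0)

# A planning task under deficient organisation (w = 1.2) and
# inadequate training (w = 2.0):
print(adjusted_cfp("planning", [1.2, 2.0]))  # 0.024
```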

  7. ERGONOMICS IN THE COMPUTER WORKSTATION

    African Journals Online (AJOL)

    2010-09-09

    Sep 9, 2010 ... in relation to their work environment and working surroundings. ... prolonged computer usage and application of ergonomics in the workstation. Design: One hundred and .... Occupational Health and Safety Services should.

  8. A cycling workstation to facilitate physical activity in office settings.

    Science.gov (United States)

    Elmer, Steven J; Martin, James C

    2014-07-01

    Facilitating physical activity during the workday may help desk-bound workers reduce risks associated with sedentary behavior. We 1) evaluated the efficacy of a cycling workstation to increase energy expenditure while performing a typing task and 2) fabricated a power measurement system to determine the accuracy and reliability of an exercise cycle. Ten individuals performed 10 min trials of sitting while typing (SIT type) and pedaling while typing (PED type). Expired gases were recorded and typing performance was assessed. Metabolic cost during PED type was ∼2.5× greater than during SIT type (255 ± 14 vs. 100 ± 11 kcal h(-1), P < 0.05), indicating that a cycling workstation can facilitate physical activity without compromising typing performance. The exercise cycle's inaccuracy could be misleading to users. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. RELIABILITY ANALYSIS OF BENDING ...

    African Journals Online (AJOL)

    Reliability analysis of the safety levels of the criteria slabs has been .... It was also noted [2] that if the risk level (β) < 3.1, the ... reliability analysis. A study [6] has shown that all geometric variables, ..... Germany, 1988. 12. Hasofer, A. M and ...

  10. Real-time on a standard UNIX workstation?

    International Nuclear Information System (INIS)

    Glanzman, T.

    1992-09-01

    This is a report of an ongoing R&D project which is investigating the use of standard UNIX workstations for real-time data acquisition from a major new experimental initiative, the SLAC B Factory (PEP II). For this work an IBM RS/6000 workstation running the AIX operating system is used. Real-time extensions to the UNIX operating system are explored and performance measured. These extensions comprise a set of AIX-specific and POSIX-compliant system services. Benchmark comparisons are made with embedded processor technologies. Results are presented for a simple prototype on-line system for laboratory testing of a new prototype drift chamber.

  11. Reliability analysis techniques in power plant design

    International Nuclear Information System (INIS)

    Chang, N.E.

    1981-01-01

    An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)
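The relationship among the key terms, reliability, availability and maintainability, can be illustrated with the textbook steady-state availability formula A = MTBF/(MTBF + MTTR); the numbers below are invented for illustration, not drawn from the paper.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component with MTBF = 1000 h and MTTR = 10 h:
print(round(availability(1000, 10), 4))  # 0.9901
```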

  12. Reliability analysis of shutdown system

    International Nuclear Information System (INIS)

    Kumar, C. Senthil; John Arul, A.; Pal Singh, Om; Suryaprakasa Rao, K.

    2005-01-01

    This paper presents the results of reliability analysis of the Shutdown System (SDS) of the Indian Prototype Fast Breeder Reactor. Reliability analysis carried out using Fault Tree Analysis predicts a value of 3.5 × 10^-8/demand for failure of the shutdown function in the case of global faults and 4.4 × 10^-8/demand for local faults. Based on 20 demands/year, the frequency of shutdown function failure is 0.7 × 10^-6/reactor-year, which meets the reliability target set by the Indian Atomic Energy Regulatory Board. The reliability is limited by Common Cause Failure (CCF) of the actuation part of the SDS and, to a lesser extent, CCF of electronic components. The failure frequency of individual systems is below 10^-3/reactor-year, which also meets the safety criteria. Uncertainty analysis indicates a maximum error factor of 5 for the top event unavailability.
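The fault tree quantification used in such an analysis can be sketched with the standard independent-event gate formulas; the structure and numbers below are hypothetical, not the PFBR model.

```python
def p_and(*ps):
    # AND gate: all inputs fail (independent events assumed).
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    # OR gate: at least one input fails (independent events assumed).
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical structure: two redundant actuation trains (AND gate)
# combined with a small common cause failure contribution (OR gate).
train_unavailability = 1e-3
ccf = 3e-8
top_event = p_or(p_and(train_unavailability, train_unavailability), ccf)
print(top_event)  # ~1.03e-06
```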

  13. Integrating reliability analysis and design

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems

  14. Simulation model of a single-server order picking workstation using aggregate process times

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.; Biles, W.E.; Saltelli, A.; Dini, C.

    2009-01-01

    In this paper we propose a simulation modeling approach based on aggregate process times for the performance analysis of order picking workstations in automated warehouses with first-in-first-out processing of orders. The aggregate process time distribution is calculated from tote arrival and
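A minimal version of such a simulation, a FIFO single-server workstation whose effective process times are drawn from an assumed aggregate distribution, might look like this. The exponential distribution, arrival rate, and mean effective process time are assumptions for illustration, not the paper's calibrated model.

```python
import random

random.seed(42)

def simulate_fifo(arrival_times, mean_ept):
    """FIFO single-server workstation: each tote waits for the server,
    then receives an exponentially distributed effective process time.
    Returns the mean flow time (waiting + service)."""
    t_server_free = 0.0
    flow_times = []
    for t_arr in arrival_times:
        start = max(t_arr, t_server_free)
        t_server_free = start + random.expovariate(1.0 / mean_ept)
        flow_times.append(t_server_free - t_arr)
    return sum(flow_times) / len(flow_times)

# Poisson tote arrivals at rate 0.8 against a mean process time of 1.0
# (utilisation ~0.8); the M/M/1 mean flow time would be 1/(1 - 0.8) = 5.
t, arrivals = 0.0, []
for _ in range(10_000):
    t += random.expovariate(0.8)
    arrivals.append(t)
print(round(simulate_fifo(arrivals, 1.0), 2))
```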

  15. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    International Nuclear Information System (INIS)

    Hermes, C.E.; Koo, J.

    1995-01-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PC's and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PC's connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software

  16. Multi-Disciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
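The underlying probabilistic computation, estimating the probability that a limit-state function g = R - S goes negative, can be sketched with crude Monte Carlo sampling. NESSUS itself uses fast probability integration rather than plain sampling, and the distributions below are invented for illustration.

```python
import random

random.seed(0)

def failure_probability(n=200_000):
    """Crude Monte Carlo estimate of P(g < 0) for the limit state
    g = R - S with normally distributed capacity R and demand S."""
    failures = 0
    for _ in range(n):
        r = random.gauss(100.0, 10.0)   # capacity (invented)
        s = random.gauss(60.0, 15.0)    # demand (invented)
        if r - s < 0.0:
            failures += 1
    return failures / n

print(failure_probability())  # ~0.013 (exact: Phi(-40 / sqrt(325)))
```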

  17. The use of bicycle workstations to increase physical activity in secondary classrooms

    Directory of Open Access Journals (Sweden)

    Alicia Fedewa

    2017-11-01

    Background To date, the majority of interventions have implemented classroom-based physical activity (PA) at the elementary level; however, there is both the potential and need to explore student outcomes at high-school level as well, given that very few studies have incorporated classroom-based PA interventions for adolescents. One exception has been the use of bicycle workstations within secondary classrooms. Using bicycle workstations in lieu of traditional chairs in a high school setting shows promise for enhancing adolescents’ physical activity during the school day. Participants and procedure The present study explored the effects of integrating bicycle workstations into a secondary classroom setting for four months in a sample of 115 adolescents using an A-B-A-B withdrawal design. The study took place in one Advanced Placement English classroom across five groups of students. Physical activity outcomes included average heart rate, and caloric expenditure. Behavioural outcomes included percentage of on-task/off-task behaviour and number of teacher prompts in redirecting off-task behaviour. Feasibility and acceptability data of using the bicycle workstations were also collected. Results Findings showed significant improvements in physical activity as measured by heart rate and caloric expenditure, although heart rate percentage remained in the low intensity range when students were on the bicycle workstations. No effects were found on students’ on-task behaviour when using the bicycle workstations. Overall, students found the bikes acceptable to use but noted disadvantages of them as well. Conclusions Using bicycle workstations in high-school settings appears promising for enhancing low-intensity physical activity among adolescents. The limitations of the present study and implications for physical activity interventions in secondary schools are discussed.

  18. Fundamentals and applications of systems reliability analysis

    International Nuclear Information System (INIS)

    Boesebeck, K.; Heuser, F.W.; Kotthoff, K.

    1976-01-01

    The lecture gives a survey on the application of methods of reliability analysis to assess the safety of nuclear power plants. Possible statements of reliability analysis in connection with specifications of the atomic licensing procedure are especially dealt with. Existing specifications of safety criteria are additionally discussed, using the example of a reliability analysis of a reactor protection system. Beyond the limited application to single safety systems, the significance of reliability analysis for a closed risk concept is explained in the last part of the lecture. (orig./LH)

  19. Communication System Simulation Workstation

    Science.gov (United States)

    1990-01-30

    SIMULATION WORKSTATION. Grant # AFOSR-89-0117. Submitted to: Department of the Air Force, Air Force Office of Scientific Research, Bolling Air Force Base, DC. A sub-band decomposition was developed, PKX, based on the modulation of a single prototype filter. This technique was introduced first by Nussbaumer and

  20. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested, within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.
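As a rough illustration of the kind of reliability/availability model such engines solve (not HARP itself), a two-unit repairable system can be written as a small continuous-time Markov chain and integrated numerically; the failure and repair rates below are hypothetical.

```python
# Two-unit repairable system as a continuous-time Markov chain with
# states (2 up, 1 up, system failed), integrated by forward Euler.
LAM = 1e-4   # per-unit failure rate, 1/h (hypothetical)
MU = 1e-2    # repair rate, 1/h (hypothetical)

def availability_at(t_end, dt=1.0):
    p2, p1, p0 = 1.0, 0.0, 0.0      # start with both units up
    for _ in range(int(t_end / dt)):
        d2 = -2 * LAM * p2 + MU * p1
        d1 = 2 * LAM * p2 - (LAM + MU) * p1 + MU * p0
        d0 = LAM * p1 - MU * p0
        p2, p1, p0 = p2 + d2 * dt, p1 + d1 * dt, p0 + d0 * dt
    return p2 + p1                   # probability the system is up

print(round(availability_at(10_000.0), 6))
```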

  1. Reliability analysis of software based safety functions

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1993-05-01

    The methods applicable in the reliability analysis of software-based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. The checklist-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that the recent software reliability models do not produce the estimates needed in PSA directly. As a result of the study, some recommendations and conclusions are drawn. Worth mentioning are the need for formal methods in the analysis and development of software-based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make more precise the requirements for software-based systems and their analyses in the regulatory guides. (orig.). (46 refs., 13 figs., 1 tab.)
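One of the classical software reliability growth models such a survey covers is Jelinski-Moranda, in which the failure rate is proportional to the number of remaining faults; the parameter values below are invented for illustration.

```python
def jm_failure_rate(n_faults, phi, faults_fixed):
    """Jelinski-Moranda: failure rate is proportional to the number of
    faults remaining, lambda_i = phi * (N - i)."""
    return phi * (n_faults - faults_fixed)

def jm_mttf(n_faults, phi, faults_fixed):
    # Mean time to the next failure is the reciprocal of the rate.
    return 1.0 / jm_failure_rate(n_faults, phi, faults_fixed)

# 30 latent faults and phi = 0.001/h (invented): MTTF grows as
# debugging removes faults.
print(round(jm_mttf(30, 0.001, 0), 1))   # 33.3
print(round(jm_mttf(30, 0.001, 25), 1))  # 200.0
```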

  2. A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials

    Science.gov (United States)

    Guseva Canu I; Ducros C; Ducamp S; Delabre L; Audignon-Durand S; Durand C; Iwatsubo Y; Jezewski-Serra D; Le Bihan O; Malard S; Radauceanu A; Reynier M; Ricaud M; Witschger O

    2015-05-01

    carbon nanotubes. Among the tasks observed were: nanomaterial characterisation analysis (8), weighing (7), synthesis (6), functionalization (5), and transfer (5). The manipulated quantities were usually very small. After analysis of the data gathered in logbooks, 30 workstations have been classified as concerned with exposure to carbon nanotubes or TiO2. Additional tool validity as well as inter- and intra-evaluator reproducibility studies are ongoing. The first results are promising.

  3. A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials

    International Nuclear Information System (INIS)

    Guseva Canu I; Ducamp S; Delabre L; Iwatsubo Y; Jezewski-Serra D; Ducros C; Audignon-Durand S; Durand C; Le Bihan O; Malard S; Radauceanu A; Reynier M; Ricaud M; Witschger O

    2015-01-01

    carbon nanotubes. Among the tasks observed were: nanomaterial characterisation analysis (8), weighing (7), synthesis (6), functionalization (5), and transfer (5). The manipulated quantities were usually very small. After analysis of the data gathered in logbooks, 30 workstations have been classified as concerned with exposure to carbon nanotubes or TiO2. Additional tool validity as well as inter- and intra-evaluator reproducibility studies are ongoing. The first results are promising. (paper)

  4. A methodology to emulate and evaluate a productive virtual workstation

    Science.gov (United States)

    Krubsack, David; Haberman, David

    1992-01-01

    The Advanced Display and Computer Augmented Control (ADCACS) Program at ACT is sponsored by NASA Ames to investigate the broad field of technologies which must be combined to design a 'virtual' workstation for the Space Station Freedom. This program is progressing in several areas and resulted in the definition of requirements for a workstation. A unique combination of technologies at the ACT Laboratory have been networked to effectively create an experimental environment. This experimental environment allows the integration of nonconventional input devices with a high-power graphics engine within the framework of an expert system shell which coordinates the heterogeneous inputs with the 'virtual' presentation. The flexibility of the workstation evolves as experiments are designed and conducted to evaluate the condition descriptions and rule sets of the expert system shell and its effectiveness in driving the graphics engine. Workstation productivity has been defined by the achievable performance in the emulator of the calibrated 'sensitivity' of input devices, the graphics presentation, the possible optical enhancements to achieve a wide field-of-view color image, and the flexibility of conditional descriptions in the expert system shell in adapting to prototype problems.

  5. Supervisory Control Technique For An Assembly Workstation As A Dynamic Discrete Event System

    Directory of Open Access Journals (Sweden)

    Daniela Cristina CERNEGA

    2001-12-01

    This paper proposes a control problem statement in the framework of the supervisory control technique for assembly workstations. The desired behaviour of an assembly workstation is analysed; the behaviour of such a workstation is cyclic, and some linguistic properties are established. An algorithm is proposed for computing the supremal controllable sublanguage of the desired language of the closed-loop system. Copyright © 2001 IFAC.
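    The supremal controllable sublanguage the abstract refers to can be computed with the classical state-pruning fixpoint of supervisory control theory. The following sketch is illustrative only: the toy assembly-cycle automaton, its state and event names, and the choice of uncontrollable events are all invented for the example, not taken from the paper.

```python
def supremal_controllable(trans, init, bad, uncontrollable):
    """Classical fixpoint: repeatedly prune states from which some
    uncontrollable event leads outside the legal state set, then keep
    only the part still reachable from the initial state under a
    supervisor that disables illegal controllable transitions."""
    # Legal states: everything except the explicitly bad ones.
    good = set(trans) - set(bad)
    changed = True
    while changed:
        changed = False
        for s in list(good):
            # A state is uncontrollably unsafe if an uncontrollable
            # event forces the plant out of the legal state set.
            if any(e in uncontrollable and t not in good
                   for e, t in trans[s].items()):
                good.discard(s)
                changed = True
    # Reachable restriction.
    reach, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s in reach or s not in good:
            continue
        reach.add(s)
        stack.extend(t for t in trans[s].values() if t in good)
    return reach

# Toy assembly cycle: load -> assemble -> unload; 'fail' is uncontrollable.
trans = {
    'idle': {'load': 'busy'},
    'busy': {'assemble': 'done', 'fail': 'jam'},
    'done': {'unload': 'idle'},
    'jam':  {},
}
# 'jam' is illegal; because 'fail' is uncontrollable, 'busy' must be
# pruned too, and the supervisor then disables 'load' from 'idle'.
print(supremal_controllable(trans, 'idle', {'jam'}, {'fail'}))
```

The pruning step is the state-based counterpart of the language-based controllability condition; on richer plants the surviving states define the maximally permissive supervisor.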

  6. Efficient Parallel Engineering Computing on Linux Workstations

    Science.gov (United States)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
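    The module described above is a C library with an intentionally minimal interface: the application hands work to it and gets parallel speed-up with almost no code changes. A hedged Python analogue of that pattern (the C module's actual API is not shown in the abstract, so the function names and the sample workload here are invented) might look like this:

```python
import os
from multiprocessing import Pool


def parallel_map(func, chunks, workers=None):
    """Minimal analogue of the LWP module's pattern: fan a computation
    out over worker processes and collect the results, so calling code
    only swaps a serial map() for parallel_map()."""
    with Pool(processes=workers or os.cpu_count()) as pool:
        return pool.map(func, chunks)


def simulate(segment):
    # Stand-in for one slice of an engineering simulation.
    lo, hi = segment
    return sum(i * i for i in range(lo, hi))


# Split one large computation into independent segments.
segments = [(0, 250_000), (250_000, 500_000),
            (500_000, 750_000), (750_000, 1_000_000)]

if __name__ == '__main__':
    # Partial results combine to the same answer as the serial run.
    print(sum(parallel_map(simulate, segments)))
```

As in the C module, the interface is a single call; the process creation, scheduling, and result collection stay hidden from the application.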

  7. System engineering workstations - critical tool in addressing waste storage, transportation, or disposal

    International Nuclear Information System (INIS)

    Mar, B.W.

    1987-01-01

    The ability to create, evaluate, operate, and manage waste storage, transportation, and disposal systems (WSTDSs) is greatly enhanced when automated tools are available to support the generation of the voluminous mass of documents and data associated with the system engineering of the program. A system engineering workstation is an optimized set of hardware and software that provides such automated tools to those performing system engineering functions. This paper explores the functions that need to be performed by a WSTDS system engineering workstation. While the latter stages of a major WSTDS may require a mainframe computer and specialized software systems, most of the required system engineering functions can be supported by a system engineering workstation consisting of a personal computer and commercial software. These findings suggest that system engineering workstations for WSTDS applications will cost less than $5000 per unit, and that the payback on the investment can be realized in a few months. In most cases the major cost element is not the capital cost of hardware or software, but the cost to train or retrain the system engineers in the use of the workstation and to ensure that the system engineering functions are properly conducted.

  8. Impact of workstations on criticality analyses at ABB combustion engineering

    International Nuclear Information System (INIS)

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett-Packard (HP)/Apollo workstations. The primary motivations for this change were the improved economics of the workstation and maintaining state-of-the-art technology. The Cyber 990 used the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131 100 words of directly addressable memory, with an extended 250 000 words available. The Apollo workstation environment at ABB consists of HP/Apollo 9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administration CPUs, providing a virtual memory system.

  9. Evaluation of PC-based diagnostic radiology workstations

    International Nuclear Information System (INIS)

    Pollack, T.; Brueggenwerth, G.; Kaulfuss, K.; Niederlag, W.

    2000-01-01

    Material and Methods: Between February 1999 and September 1999, medical users at the hospital Dresden-Friedrichstadt, Germany, tested 7 types of radiology diagnostic workstations. Two types of test methods were used: in test type 1, ergonomic and handling functions were evaluated objectively against 78 selected user requirements; in test type 2, radiologists and radiographers (3 and 4, respectively) performed 23 workflow steps with subjective evaluation. Results: Using a progressive rating, no product could fully meet the user requirements. From the summary evaluation of tests 1 and 2, the following compliance ratings were calculated for the products: Rad Works (66%), Magic View (63%), ID-Report (58%), Impax 3000 (53%), Medical Workstation (52%), Pathspeed (46%) and Autorad (39%). (orig.) [de

  10. The advanced software development workstation project

    Science.gov (United States)

    Fridge, Ernest M., III; Pitman, Charles L.

    1991-01-01

    The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.

  11. Flow time prediction for a single-server order picking workstation using aggregate process times

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.

    2010-01-01

    In this paper we propose a simulation modeling approach based on aggregate process times for the performance analysis of order picking workstations in automated warehouses. The aggregate process time distribution is calculated from tote arrival and departure times. We refer to the aggregate process

  12. Argo workstation: a key component of operational oceanography

    Science.gov (United States)

    Dong, Mingmei; Xu, Shanshan; Miao, Qingsheng; Yue, Xinyang; Lu, Jiawei; Yang, Yang

    2018-02-01

    Operational oceanography requires quantity, quality, and availability of data sets, together with timeliness and effectiveness of data products. Without a steady and robust operational system behind it, operational oceanography cannot proceed far. In this paper we describe an integrated platform named the Argo Workstation. It operates as a data processing and management system capable of data collection, automatic data quality control, visual data checking, statistical data search, and data service. Since it was set up, the Argo Workstation has provided global, high-quality Argo data to users every day, in a timely and effective manner. It has not only played a key role in operational oceanography but has also set an example for operational systems.

  13. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O)-intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data; even with standard high-performance peripherals and storage devices, the I/O function can still be a bottleneck. The high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload a RISC workstation of I/O-related functions and provide high-performance I/O functions and external interfaces. The system has the capability to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. It uses a hierarchical storage structure, reducing the total data storage cost while maintaining high I/O performance. The system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network whose topology is easily reconfigurable to maximize system throughput for various applications. The system takes advantage of a 'busless' architecture for maximum expandability and consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape, forming a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval; the optical disks are used as archive

  14. System reliability evaluation of a touch panel manufacturing system with defect rate and reworking

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Huang, Cheng-Fu; Chang, Ping-Chen

    2013-01-01

    In recent years, portable consumer electronic products such as cell phones, GPS devices, digital cameras, tablet PCs, and notebooks have been using touch panels as their interface. As the demand for touch panels increases, performance assessment becomes essential for touch panel production. This paper develops a method to evaluate the system reliability of a touch panel manufacturing system (TPMS) with a defect rate at each workstation, taking reworking actions into account. The system reliability, which evaluates the possibility of demand satisfaction, can provide managers with an understanding of the system capability and can indicate possible improvements. First, we construct a capacitated manufacturing network (CMN) for a TPMS. Second, a decomposition technique is developed to determine the input flow of each workstation based on the CMN. Finally, we generate the minimal capacity vectors that should be provided to satisfy the demand; the system reliability is subsequently evaluated in terms of these minimal capacity vectors. A further decision-making issue, choosing a reliable production strategy, is also discussed. -- Graphical abstract: The proposed procedure to evaluate system reliability of the touch panel manufacturing system (TPMS). Highlights: • The system reliability of a touch panel manufacturing system (TPMS) is evaluated. • Reworking actions are taken into account in the TPMS. • A capacitated manufacturing network is constructed for the TPMS. • A procedure is proposed to evaluate the system reliability of the TPMS.
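    The decomposition step, determining how much input each workstation must receive when every stage has a defect rate and a fraction of defects can be recovered by rework, can be sketched for a simple serial line. This is a back-of-the-envelope illustration under stated assumptions (a purely serial line, one rework pass per stage); the stage data are invented and the paper's actual CMN formulation is richer:

```python
def required_input(demand, stages):
    """Work backwards through a serial line. Each stage has a defect
    rate d and a rework success fraction r (the share of defective
    units recovered by one rework pass), so its effective yield is
    (1 - d) + d * r. The input a stage must receive is the final
    demand divided by the cumulative downstream yield."""
    inputs = []
    need = float(demand)
    for d, r in reversed(stages):
        effective_yield = (1 - d) + d * r
        need /= effective_yield
        inputs.append(need)
    return list(reversed(inputs))

# Three hypothetical stages: (defect rate, rework success fraction).
stages = [(0.05, 0.8), (0.10, 0.5), (0.02, 0.0)]
print([round(x, 1) for x in required_input(1000, stages)])
# -> [1085.0, 1074.1, 1020.4]
```

Comparing each stage's required input against its available capacity is then what drives the minimal-capacity-vector evaluation the abstract describes.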

  15. Power system reliability analysis using fault trees

    International Nuclear Information System (INIS)

    Volkanovski, A.; Cepin, M.; Mavko, B.

    2006-01-01

    A power system reliability analysis method is developed from the standpoint of reliable delivery of electrical energy to customers. The method is based on fault tree analysis, which is widely applied in Probabilistic Safety Assessment (PSA), and is adapted for power system reliability analysis. It is developed in such a way that only the basic reliability parameters of the analysed power system are necessary as input for the calculation of the reliability indices of the system. Modeling and analysis were performed on an example power system consisting of eight substations. The results include the level of reliability of the current power system configuration, the combinations of component failures resulting in failed power delivery to loads, and the importance factors for components and subsystems. (author)
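    The core numeric step of such a fault-tree method, evaluating the top event ("failed power delivery to a load") from minimal cut sets of basic component failures, can be sketched as follows. The cut sets and failure probabilities here are invented for illustration and are not the paper's eight-substation model:

```python
from itertools import combinations


def top_event_probability(cut_sets, p):
    """Exact inclusion-exclusion evaluation of the top-event
    probability from a (small) list of minimal cut sets, assuming
    independent basic events with probabilities in p."""
    total = 0.0
    n = len(cut_sets)
    for k in range(1, n + 1):
        for combo in combinations(cut_sets, k):
            # A conjunction of cut sets fails when all events in
            # their union fail.
            events = set().union(*combo)
            term = 1.0
            for e in events:
                term *= p[e]
            total += (-1) ** (k + 1) * term
    return total


# Hypothetical delivery-failure cut sets for one load:
# line L1 alone, or both transformers T1 and T2 together.
cut_sets = [{'L1'}, {'T1', 'T2'}]
p = {'L1': 0.01, 'T1': 0.05, 'T2': 0.05}
print(round(top_event_probability(cut_sets, p), 6))
# -> 0.012475
```

Inclusion-exclusion is exact but exponential in the number of cut sets; production PSA codes use rare-event or min-cut upper-bound approximations instead when the cut set list is large.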

  16. Migration of nuclear criticality safety software from a mainframe to a workstation environment

    International Nuclear Information System (INIS)

    Bowie, L.J.; Robinson, R.C.; Cain, V.R.

    1993-01-01

    The Nuclear Criticality Safety Department (NCSD), Oak Ridge Y-12 Plant, has made the transition from executing the Martin Marietta Energy Systems Nuclear Criticality Safety Software (NCSS) on IBM mainframes to a Hewlett-Packard (HP) 9000/730 workstation (NCSSHP). NCSSHP contains the following configuration-controlled modules and cross-section libraries: BONAMI, CSAS, GEOMCHY, ICE, KENO IV, KENO Va, MODIIFY, NITAWL SCALE, SLTBLIB, XSDRN, UNIXLIB, an albedos library, a weights library, the 16-group HANSEN-ROACH master library, the 27-group ENDF/B-IV master library, and a standard composition library. This paper discusses the method used to choose the workstation, the hardware setup of the chosen workstation, an overview of the Y-12 software quality assurance and configuration control methodology, code validation, difficulties encountered in migrating the codes, and the advantages of migrating to a workstation environment.

  17. Workstations as consoles for the CERN-PS complex, setting-up the environment

    International Nuclear Information System (INIS)

    Antonsanti, P.; Arruat, M.; Bouche, J.M.; Cons, L.; Deloose, Y.; Di Maio, F.

    1992-01-01

    Within the framework of the rejuvenation project of the CERN control systems, commercial workstations have to replace existing home-designed operator consoles. RISC-based workstations running UNIX, X Window™ and OSF/Motif™ have been introduced for the control of the PS complex. The first versions of general functionalities such as synoptic display, program selection and control panels have been implemented, and the first large-scale application has been realized. This paper describes the different components of the workstation environment for the implementation of the applications. The focus is on the set of tools that have been used, developed or integrated, and on how we plan to make them evolve. (author)

  18. Shoulder girdle muscle activity and fatigue in traditional and improved design carpet weaving workstations.

    Science.gov (United States)

    Allahyari, Teimour; Mortazavi, Narges; Khalkhali, Hamid Reza; Sanjari, Mohammad Ali

    2016-01-01

    Work-related musculoskeletal disorders in the neck and shoulder regions are common among carpet weavers. Working for prolonged hours in a static and awkward posture can increase muscle activity and may lead to musculoskeletal disorders. Ergonomic workstation improvements can reduce muscle fatigue and the risk of musculoskeletal disorders. The aim of this study is to assess and compare upper trapezius and middle deltoid muscle activity in two carpet weaving workstations, one of traditional and one of improved design. The two workstations were simulated in a laboratory, and 12 women carpet weavers worked for 3 h. Electromyography (EMG) signals were recorded during work from the bilateral upper trapezius and bilateral middle deltoid. Root mean square (RMS) and median frequency (MF) values were calculated and used to assess muscle load and fatigue. Repeated-measures ANOVA was performed to assess the effect of the independent variables on muscular activity and fatigue. The participants were asked to report shoulder-region fatigue on the Borg Category-Ratio scale (Borg CR-10). RMS values in workstation A were significantly higher than in workstation B, and EMG amplitude was higher in the bilateral trapezius than in the bilateral deltoid. However, muscle fatigue was not observed in either workstation. The results revealed that muscle load in the traditional workstation was high, but fatigue was not observed. Further studies investigating other muscles involved in carpet weaving tasks are recommended. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
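    The two EMG measures used in the study, RMS amplitude (muscle load) and median frequency (fatigue shifts it downward), are straightforward to compute from a window of samples. A self-contained sketch on a synthetic two-tone "EMG" signal (the signal, sampling rate, and naive DFT are illustrative choices, not the study's processing chain):

```python
import math


def rms(window):
    """Root mean square amplitude of one EMG window."""
    return math.sqrt(sum(x * x for x in window) / len(window))


def median_frequency(window, fs):
    """Median frequency: the frequency splitting the one-sided power
    spectrum into two halves of equal power (naive DFT, DC skipped)."""
    n = len(window)
    power, freqs = [], []
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(window))
        im = -sum(x * math.sin(2 * math.pi * k * i / n)
                  for i, x in enumerate(window))
        power.append(re * re + im * im)
        freqs.append(k * fs / n)
    half, acc = sum(power) / 2, 0.0
    for f, pw in zip(freqs, power):
        acc += pw
        if acc >= half:
            return f
    return freqs[-1]


# Synthetic 'EMG': 50 Hz and 120 Hz components sampled at 1 kHz.
fs, n = 1000, 1000
sig = [math.sin(2 * math.pi * 50 * i / fs)
       + 0.5 * math.sin(2 * math.pi * 120 * i / fs) for i in range(n)]
print(round(rms(sig), 3), median_frequency(sig, fs))
```

In practice the naive DFT would be replaced by an FFT, and both measures would be tracked window by window over the 3 h recording to detect a fatigue-related MF decline.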

  19. Energy-efficiency based classification of the manufacturing workstation

    Science.gov (United States)

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.

    2017-08-01

    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, further implemented by several other directives. As a consequence, many products (e.g. home appliances, tyres, light bulbs, houses) nowadays carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of energy efficiency of the manufacturing workstation is defined, and on this basis a classification methodology has been developed. It covers the specific criteria and their evaluation modalities, together with the definition and delimitation of the energy efficiency classes. The position of the energy class is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing activity planning and programming. A case study classifying an actual lathe from the energy-efficiency point of view, based on two different approaches (analytical and numerical), is also included.
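    The two quantities the classification rests on, the energy at the midpoint of the operating domain (class position) and the first Taylor coefficient of the energy-vs-parameter curve (class extension), can be sketched numerically. The class thresholds, the lathe's energy curve, and the label names below are all invented for illustration; the paper's actual class boundaries are not reproduced here:

```python
def classify(energy_fn, p_lo, p_hi, thresholds):
    """Place a workstation in an energy class by the energy needed at
    the midpoint of its operating domain [p_lo, p_hi]; estimate the
    class extension from the first Taylor coefficient dE/dp at that
    midpoint via a central finite difference. `thresholds` is an
    ordered list of (label, upper energy bound) pairs."""
    p_mid = (p_lo + p_hi) / 2
    e_mid = energy_fn(p_mid)
    h = (p_hi - p_lo) * 1e-4
    slope = (energy_fn(p_mid + h) - energy_fn(p_mid - h)) / (2 * h)
    for label, upper in thresholds:
        if e_mid <= upper:
            return label, e_mid, slope
    return 'G', e_mid, slope


# Hypothetical lathe: energy (kWh) vs a spindle-speed parameter.
lathe = lambda p: 0.5 + 0.002 * p + 1e-6 * p * p
thresholds = [('A', 1.0), ('B', 2.0), ('C', 4.0)]
print(classify(lathe, 100, 1500, thresholds))
```

A planner could then schedule jobs preferentially on workstations whose class and slope indicate the lowest marginal energy cost at the required working regime.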

  20. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
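    The single-loop idea, covering both uncertainty layers in one Monte Carlo pass by drawing the epistemically uncertain distribution parameters first and then the aleatory variable from the resulting distribution, can be illustrated with a toy limit state. This is a minimal sketch of the general single-loop concept, not the paper's auxiliary-variable or FORM/SORM formulation; the interval on the mean, the capacity, and the sample count are invented:

```python
import random


def single_loop_mcs(n_samples, limit_state, seed=42):
    """Single-loop Monte Carlo over both uncertainty types: each
    sample first draws the (epistemically uncertain) distribution
    parameter, then the aleatory variable from that distribution,
    so one loop covers both layers. Returns the estimated failure
    probability, i.e. P(limit_state < 0)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        # Epistemic: the load's mean is only known to lie in [9, 11].
        mu = rng.uniform(9.0, 11.0)
        # Aleatory: load ~ Normal(mu, 1).
        load = rng.gauss(mu, 1.0)
        if limit_state(load) < 0:
            failures += 1
    return failures / n_samples


g = lambda load: 13.0 - load  # failure when the load exceeds capacity
pf = single_loop_mcs(200_000, g)
print(round(pf, 4))
```

The estimate mixes both sources of uncertainty into one failure probability; separating their contributions (as the paper does with the auxiliary-variable construction) requires conditioning on the epistemic draws rather than pooling them.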

  1. 40 CFR 86.1312-2007 - Filter stabilization and microbalance workstation environmental conditions, microbalance...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Filter stabilization and microbalance workstation environmental conditions, microbalance specifications, and particulate matter filter handling and... Particulate Exhaust Test Procedures § 86.1312-2007 Filter stabilization and microbalance workstation...

  2. Reliability analysis techniques for the design engineer

    International Nuclear Information System (INIS)

    Corran, E.R.; Witt, H.H.

    1982-01-01

    This paper describes a fault tree analysis package that eliminates most of the housekeeping tasks involved in proceeding from the initial construction of a fault tree to the final stage of presenting a reliability analysis in a safety report. It is suitable for designers with relatively little training in reliability analysis and computer operation. Users can rapidly investigate the reliability implications of various options at the design stage and evolve a system which meets specified reliability objectives. Later independent review is thus unlikely to reveal major shortcomings necessitating modification and project delays. The package operates interactively, allowing the user to concentrate on the creative task of developing the system fault tree, which may be modified and displayed graphically. For preliminary analysis, system data can be derived automatically from a generic data bank. As the analysis proceeds, improved estimates of critical failure rates and test and maintenance schedules can be inserted. The technique is applied to the reliability analysis of the recently upgraded HIFAR Containment Isolation System. (author)

  3. BioPhotonics Workstation: 3D interactive manipulation, observation and characterization

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    2011-01-01

    In ppo.dk we have invented the BioPhotonics Workstation to be applied in 3D research on regulated microbial cell growth, including the underlying physiological mechanisms, in vivo characterization of cell constituents, and the manufacturing of nanostructures and new materials.

  4. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is vital for spacecraft, so it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of a SpaceWire network. Based on the functional decomposition of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task yields the system reliability matrix, and the reliability of the network system is deduced by integrating the reliability indexes in this matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and for multi-path task reliability are also implemented. Using this tool, we analyzed several cases on typical architectures; the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool thus has a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
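    The task-based comparison the abstract reports, a basic single path versus a dual-redundant one, reduces to standard series/parallel reliability arithmetic. The sketch below assumes independent components and disjoint redundant paths (a simplification; the paths here share the end nodes, and the component names and reliabilities are invented, not the paper's reliability-matrix method):

```python
def path_reliability(path, r):
    """A path works only if every component on it works (series)."""
    prod = 1.0
    for comp in path:
        prod *= r[comp]
    return prod


def task_reliability(paths, r):
    """A task succeeds if at least one of its redundant paths works;
    paths are treated as independent, which overestimates slightly
    when paths share components (as the end nodes do here)."""
    fail = 1.0
    for path in paths:
        fail *= 1.0 - path_reliability(path, r)
    return 1.0 - fail


r = {'node_a': 0.99, 'router_1': 0.98, 'router_2': 0.98, 'node_b': 0.99}
basic = [['node_a', 'router_1', 'node_b']]
redundant = [['node_a', 'router_1', 'node_b'],
             ['node_a', 'router_2', 'node_b']]
print(round(task_reliability(basic, r), 4),
      round(task_reliability(redundant, r), 4))
```

Even this crude model shows why a dual-redundancy scheme for key units raises the task reliability index, which is the design trade-off the tool is meant to quantify.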

  5. Analysis of information security reliability: A tutorial

    International Nuclear Information System (INIS)

    Kondakci, Suleyman

    2015-01-01

    This article presents a concise reliability analysis of network security abstracted from stochastic modeling, reliability, and queuing theories. Network security analysis is composed of threats, their impacts, and recovery of the failed systems. A unique framework with a collection of the key reliability models is presented here to guide the determination of system reliability based on the strength of malicious acts and the performance of the recovery processes. A unique model, called the attack-obstacle model, is also proposed for analyzing systems with immunity growth features. Most computer science curricula do not contain courses in reliability modeling applicable to the different areas of computer engineering; hence, the topic of reliability analysis is often too diffuse for most computer engineers and researchers dealing with network security. This work is thus aimed at shedding some light on this issue, which can be useful in identifying models, their assumptions, and practical parameters for estimating the reliability of threatened systems and for assessing the performance of recovery facilities. It can also be useful for the classification of processes and states regarding the reliability of information systems. Systems with stochastic behaviors undergoing queue operations and random state transitions can also benefit from the approaches presented here. - Highlights: • A concise survey and tutorial in model-based reliability analysis applicable to information security. • A framework of key modeling approaches for assessing reliability of networked systems. • The framework facilitates quantitative risk assessment tasks guided by stochastic modeling and queuing theory. • Evaluation of approaches and models for modeling threats, failures, impacts, and recovery analysis of information systems

  6. The impact of sit-stand office workstations on worker discomfort and productivity: a review.

    Science.gov (United States)

    Karakolis, Thomas; Callaghan, Jack P

    2014-05-01

    This review examines the effectiveness of sit-stand workstations at reducing worker discomfort without causing a decrease in productivity. Four databases were searched for studies on sit-stand workstations, and five selection criteria were used to identify appropriate articles. Fourteen articles were identified that met at least three of the five selection criteria. Seven of the identified studies reported either local, whole-body, or both local and whole-body subjective discomfort scores. Six of these studies indicated that implementing sit-stand workstations in an office environment led to lower levels of reported subjective discomfort (three of which were statistically significant). Therefore, this review concluded that sit-stand workstations are likely effective in reducing perceived discomfort. Eight of the identified studies reported a productivity outcome. Three of these studies reported an increase in productivity during sit-stand work, four reported no effect on productivity, and one reported mixed productivity results. Therefore, this review concluded that sit-stand workstations do not cause a decrease in productivity. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  7. ISDN communication: Its workstation technology and application system

    Energy Technology Data Exchange (ETDEWEB)

    Sugimura, T; Ogiwara, Y; Saito, T [Hitachi, Ltd., Tokyo (Japan)

    1991-07-01

    This report describes Integrated Services Digital Network (ISDN) technology that allows workstations to process multimedia data, and application systems for advanced group teleworking that use this technology. Hitachi has developed workstations that are more powerful, have more functions, and have larger memory capacities. These factors allowed media that require high-speed processing of large quantities of voice and image data to be integrated into the world of conventional text data processing and communications. In addition, the group teleworking application system has a large impact through improvements in the office environment, changes in the style of office work, and the emergence of new businesses. A prototype of this system was exhibited and demonstrated at TELECOM91. 1 ref., 4 figs., 2 tabs.

  8. Virtual interface environment workstations

    Science.gov (United States)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  9. Reliability Analysis of Adhesive Bonded Scarf Joints

    DEFF Research Database (Denmark)

    Kimiaeifar, Amin; Toft, Henrik Stensgaard; Lund, Erik

    2012-01-01

    A probabilistic model for the reliability analysis of adhesive bonded scarfed lap joints subjected to static loading is developed. It is representative for the main laminate in a wind turbine blade subjected to flapwise bending. The structural analysis is based on a three-dimensional (3D) finite element analysis (FEA). For the reliability analysis a design equation is considered which is related to a deterministic code-based design equation where reliability is secured by partial safety factors together with characteristic values for the material properties and loads. The failure criteria are formulated using a von Mises, a modified von Mises and a maximum stress failure criterion. The reliability level is estimated for the scarfed lap joint and this is compared with the target reliability level implicitly used in the wind turbine standard IEC 61400-1. A convergence study is performed to validate

  10. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  11. Intraoperative non-record-keeping usage of anesthesia information management system workstations and associated hemodynamic variability and aberrancies.

    Science.gov (United States)

    Wax, David B; Lin, Hung-Mo; Reich, David L

    2012-12-01

    Anesthesia information management system workstations in the anesthesia workspace that allow usage of non-record-keeping applications could lead to distraction from patient care. We evaluated whether non-record-keeping usage of the computer workstation was associated with hemodynamic variability and aberrancies. Auditing data were collected on eight anesthesia information management system workstations and linked to their corresponding electronic anesthesia records to identify which application was active at any given time during the case. For each case, the periods spent using the anesthesia information management system record-keeping module were separated from those spent using non-record-keeping applications. The variability of heart rate and blood pressure were also calculated, as were the incidence of hypotension, hypertension, and tachycardia. Analysis was performed to identify whether non-record-keeping activity was a significant predictor of these hemodynamic outcomes. Data were analyzed for 1,061 cases performed by 171 clinicians. Median (interquartile range) non-record-keeping activity time was 14 (1, 38) min, representing 16 (3, 33)% of a median 80 (39, 143) min of procedure time. Variables associated with greater non-record-keeping activity included attending anesthesiologists working unassisted, longer case duration, lower American Society of Anesthesiologists status, and general anesthesia. Overall, there was no independent association between non-record-keeping workstation use and hemodynamic variability or aberrancies during anesthesia either between cases or within cases. Anesthesia providers spent sizable portions of case time performing non-record-keeping applications on anesthesia information management system workstations. This use, however, was not independently associated with greater hemodynamic variability or aberrancies in patients during maintenance of general anesthesia for predominantly general surgical and gynecologic procedures.

  12. Effect of One Carpet Weaving Workstation on Upper Trapezius Fatigue

    Directory of Open Access Journals (Sweden)

    Neda Mahdavi

    2016-03-01

    Introduction: This study aimed to investigate the effect of carpet weaving at a proposed workstation on Upper Trapezius (UTr) fatigue during a task cycle. Fatigue in the shoulder is one of the most important precursors of upper limb musculoskeletal disorders, and shoulder disorders are among the most prevalent musculoskeletal disorders in carpet weavers. Methods: This cross-sectional study included eight females and three males. During an 80-minute cycle of carpet weaving, electromyography (EMG) signals of the right and left UTr were recorded continuously with surface EMG. After the raw signals were processed, RMS and MPF were taken as the EMG amplitude and frequency parameters, respectively. A time series model and the JASA method were used to assess and classify the changes in the EMG parameters during working time. Results: According to the JASA method, 58%, 16%, 8% and 8% of the participants experienced fatigue, force increase, force decrease and recovery, respectively, in the right UTr; the corresponding figures for the left UTr were 50%, 25%, 8% and 16%. Conclusions: For most weavers, the dominant state in both the left and right UTr was fatigue at the proposed workstation during a carpet weaving task cycle. The results provide detailed information for the optimal design of workstations. Further studies should focus on fatigue in various muscles and time periods for designing an appropriate, ergonomic carpet weaving workstation
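
    The JASA method (joint analysis of spectrum and amplitude) referred to above classifies the joint trend of EMG amplitude (RMS) and frequency (MPF) over working time into four quadrants. A minimal sketch, with illustrative zero-slope thresholds; the paper's exact windowing and regression details are not reproduced here:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of an EMG window."""
    return np.sqrt(np.mean(np.square(x)))

def mpf(x, fs):
    """Mean power frequency of an EMG window (spectral centroid)."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * power) / np.sum(power)

def classify_jasa(d_rms, d_mpf):
    """Classify the slopes of RMS and MPF over working time into
    the four JASA quadrants."""
    if d_rms >= 0 and d_mpf < 0:
        return "fatigue"           # amplitude up, frequency down
    if d_rms >= 0 and d_mpf >= 0:
        return "force increase"    # both up
    if d_rms < 0 and d_mpf < 0:
        return "force decrease"    # both down
    return "recovery"              # amplitude down, frequency up
```

    In practice the slopes would come from regressing RMS and MPF against time over the 80-minute cycle, one pair per muscle and participant.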

  13. Development of a new discharge control system utilizing UNIX workstations and VME-bus systems for JT-60

    Energy Technology Data Exchange (ETDEWEB)

    Akasaka, Hiromi; Sueoka, Michiharu; Takano, Shoji; Totsuka, Toshiyuki; Yonekawa, Izuru; Kurihara, Kenichi; Kimura, Toyoaki [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2002-01-01

    The JT-60 discharge control system, which had used HIDIC-80E 16-bit mini-computers and CAMAC systems since the start of JT-60 experiments in 1985, was renewed in March 2001. The new system consists of a UNIX workstation and a VME-bus system and features distributed control. The workstation performs message communication with the VME-bus system and the controllers of the JT-60 sub-systems, as well as the processing for discharge control, because of its flexibility in constructing a new network and modifying software. The VME-bus system performs discharge sequence control because it is suitable for fast real-time control and flexible with respect to hardware extension. The replacement has improved the control function and reliability of the discharge control system and provides sufficient performance for future modifications of JT-60. The new system has been running successfully since April 2001, and its data acquisition speed was confirmed to be twice that of the previous system. This report describes the major functions of the discharge control system, the technical ideas behind its development, and the results of initial operation in detail. (author)

  14. Culture Representation in Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    David Gertman; Julie Marble; Steven Novack

    2006-12-01

    Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
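
    The abstract does not give the CAM coefficients themselves; the general pattern of adjusting a nominal human error probability (HEP) by performance-shaping multipliers can be sketched as follows, with all factor names and values hypothetical:

```python
def adjust_hep(nominal_hep, multipliers):
    """Scale a nominal human error probability (HEP) by a set of
    performance-shaping multipliers (here: illustrative cultural
    factors), clamping the result to a valid probability.

    `multipliers` maps a factor name to a value > 1 (degrades
    performance) or < 1 (improves it). Factor names and values are
    hypothetical, not the CAM coefficients from the paper.
    """
    hep = nominal_hep
    for factor, m in multipliers.items():
        if m <= 0:
            raise ValueError(f"multiplier for {factor!r} must be positive")
        hep *= m
    return min(hep, 1.0)

# Illustrative only: high power distance and low technology acceptance
# are both assumed here to raise the error probability.
example = adjust_hep(1e-3, {"power_distance": 2.0, "tech_acceptance": 1.5})
```

    Country-specific Hofstede scores and TAM ratings would feed the multiplier values; the mapping from score to multiplier is the substance of a method like CAM.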

  15. 76 FR 10403 - Hewlett Packard (HP), Global Product Development, Engineering Workstation Refresh Team, Working...

    Science.gov (United States)

    2011-02-24

    ...), Global Product Development, Engineering Workstation Refresh Team, Working On-Site at General Motors..., Non-Information Technology Business Development Team and Engineering Application Support Team, working... Hewlett Packard, Global Product Development, Engineering Workstation Refresh Team, working on-site at...

  16. How to Protect Patients Digital Images/Thermograms Stored on a Local Workstation

    Directory of Open Access Journals (Sweden)

    J. Živčák

    2010-01-01

    To ensure the security and privacy of patients' electronic medical information stored on local workstations in doctors' offices, clinic centers, etc., it is necessary to implement a secure and reliable method for logging on and accessing this information. Biometrically-based identification technologies use measurable personal properties (physiological or behavioral), such as a fingerprint, in order to identify or verify a person's identity, and provide the foundation for highly secure personal identification, verification and/or authentication solutions. The use of biometric devices (fingerprint readers) is an easy and secure way to log on to the system. We have carried out practical tests on HP notebooks with an integrated fingerprint reader. Successful and failed logons were monitored and analyzed. This paper presents the resulting false rejection rates, false acceptance rates and failure-to-acquire rates.

  17. Reliability Analysis of a Steel Frame

    Directory of Open Access Journals (Sweden)

    M. Sýkora

    2002-01-01

    A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to be the most critical criterion and is therefore taken as the basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee on Structural Safety (JCSS) are used for the basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and of the action effect model. The time-invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time-variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-section IPE 330 designed according to Eurocodes therefore seems adequate. It appears that the time-invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than the time-variant analysis.
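
    The reliability index relates to the failure probability by β = -Φ⁻¹(p_f). A toy illustration of a time-invariant estimate by crude Monte Carlo, with an assumed limit state g = R - E and made-up normal distributions (not the JCSS models or the frame analysed in the paper):

```python
import random
from statistics import NormalDist

def reliability_index(p_f):
    """Reliability index beta = -Phi^{-1}(p_f)."""
    return -NormalDist().inv_cdf(p_f)

def mc_failure_probability(n=200_000, seed=1):
    """Crude Monte Carlo for an illustrative limit state g = R - E,
    with normal resistance R and normal load effect E (toy numbers)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        r = rng.gauss(300.0, 30.0)   # resistance
        e = rng.gauss(150.0, 30.0)   # load effect
        if r - e <= 0.0:
            failures += 1
    return failures / n

beta = reliability_index(mc_failure_probability())
```

    For these toy distributions the exact index is (300 - 150) / sqrt(30² + 30²) ≈ 3.54, so the Monte Carlo estimate should land nearby; crude sampling needs very large n for the β ≈ 4-5.5 range reported in the paper, which is why FORM/SORM or variance reduction is used in practice.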

  18. Parallel Computation of Unsteady Flows on a Network of Workstations

    Science.gov (United States)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. A network of workstations is an attractive solution, allowing large problems to be treated at reasonable cost. This approach requires the solution of several problems: 1) partitioning and distributing the problem over a network of workstations, 2) efficient communication tools, and 3) managing the system efficiently for a given problem. There is also the question of how efficient a given numerical algorithm is on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code, both two- and three-dimensional problems were studied, covering both steady and unsteady cases. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and communicate efficiently at each node, and 3) how to balance the load distribution. A summary of these activities is presented below; details of the work have been published as referenced.
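
    The first listed issue, distributing the data between workstations, reduces in the simplest case to partitioning grid cells into contiguous blocks. A sketch under the assumption of a 1D decomposition, with an optional weighting by assumed relative machine speeds (the NPARC partitioning itself is not described in the abstract):

```python
def block_partition(n_cells, n_workers):
    """Split n_cells contiguous grid cells into n_workers blocks whose
    sizes differ by at most one cell (even load for identical machines)."""
    base, extra = divmod(n_cells, n_workers)
    sizes = [base + (1 if i < extra else 0) for i in range(n_workers)]
    bounds, start = [], 0
    for s in sizes:
        bounds.append((start, start + s))
        start += s
    return bounds

def weighted_partition(n_cells, speeds):
    """Partition proportionally to relative workstation speeds
    (speeds are assumed ratings, not measurements from the paper)."""
    total = sum(speeds)
    sizes = [int(n_cells * s / total) for s in speeds]
    sizes[-1] += n_cells - sum(sizes)   # remainder goes to the last worker
    bounds, start = [], 0
    for s in sizes:
        bounds.append((start, start + s))
        start += s
    return bounds
```

    Each block's halo cells would then be exchanged with neighbouring workers at every time step, which is where efficient communication matters.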

  19. User interface on networked workstations for MFTF plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Renbarger, V.L.; Balch, T.R.

    1985-01-01

    A network of Sun-2/170 workstations is used to provide an interface to the MFTF-B Plasma Diagnostics System at Lawrence Livermore National Laboratory. The Plasma Diagnostics System (PDS) is responsible for control of MFTF-B plasma diagnostic instrumentation. An EtherNet Local Area Network links the workstations to a central multiprocessing system which furnishes data processing, data storage and control services for PDS. These workstations permit a physicist to command data acquisition, data processing, instrument control, and display of results. The interface is implemented as a metaphorical desktop, which helps the operator form a mental model of how the system works. As on a real desktop, functions are provided by sheets of paper (windows on a CRT screen) called worksheets. The worksheets may be invoked by pop-up menus and may be manipulated with a mouse. These worksheets are actually tasks that communicate with other tasks running in the central computer system. By making entries in the appropriate worksheet, a physicist may specify data acquisition or processing, control a diagnostic, or view a result

  20. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context. In many contemporary human reliability analysis (HRA) methods, this is not sufficiently taken into account. This article argues that probabilistic and psychological approaches to human reliability should be integrated. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity and, secondly, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis

  1. Diagnostic image workstations for PACS

    International Nuclear Information System (INIS)

    Meyer-Ebrecht, D.; Fasel, B.; Dahm, M.; Kaupp, A.; Schilling, R.

    1990-01-01

    Image workstations will be the 'window' to the complex infrastructure of PACS with its intertwined image modalities (image sources, image data bases and image processing devices) and data processing modalities (patient data bases, departmental and hospital information systems). They will serve for user-to-system dialogues, image display and local processing of data as well as images. Their hardware and software structures have to be optimized towards an efficient throughput and processing of image data. (author). 10 refs

  2. Next Generation BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Bañas, Andrew Rafael; Palima, Darwin; Tauro, Sandeep

    We will outline the specs of our Biophotonics Workstation, which can generate up to 100 reconfigurable laser traps, making 3D real-time optical manipulation of advanced structures, cells or tiny particles possible with the use of joysticks or gaming devices. Optically actuated nanoneedles may be functionalized or directly used to perforate targeted cells at specific locations or to force the complete separation of dividing cells, among other functions that can be very useful for microbiologists or biomedical researchers.

  3. Assessment of a cooperative workstation.

    OpenAIRE

    Beuscart, R. J.; Molenda, S.; Souf, N.; Foucher, C.; Beuscart-Zephir, M. C.

    1996-01-01

    Groupware and new Information Technologies have now made it possible for people in different places to work together in synchronous cooperation. Very often, designers of this new type of software are not provided with a model of the common workspace, which is prejudicial to software development and its acceptance by potential users. The authors take the example of a task of medical co-diagnosis, using a multi-media communication workstation. Synchronous cooperative work is made possible by us...

  4. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for the motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of the small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. This significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
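
    A PRS index is the partial derivative of the reliability with respect to a distribution parameter. Where the envelope-function method of the paper is not reproduced, a generic central-difference estimate around a Monte Carlo reliability estimator with common random numbers illustrates the definition (the limit state and numbers are made up, not the four-bar mechanism):

```python
import random

def reliability(mu, n=100_000, seed=7):
    """Monte Carlo estimate of P(g > 0) for an illustrative motion-error
    limit state g = 0.5 - |X|, X ~ Normal(mu, 0.1). Fixing the seed gives
    common random numbers across parameter perturbations, so the
    difference below is not swamped by sampling noise."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(rng.gauss(mu, 0.1)) < 0.5)
    return hits / n

def prs_index(mu, h=0.01):
    """Parametric reliability sensitivity dR/dmu by central differences."""
    return (reliability(mu + h) - reliability(mu - h)) / (2.0 * h)
```

    For this toy limit state the exact sensitivity at mu = 0.3 is about -0.54, i.e. shifting the mean motion error toward the threshold reduces reliability, which is exactly the kind of ranking information PRS analysis provides for design.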

  5. Viewport: An object-oriented approach to integrate workstation software for tile and stack mode display

    OpenAIRE

    Ghosh, Srinka; Andriole, Katherine P.; Avrin, David E.

    1997-01-01

    Diagnostic workstation design has migrated towards display presentation in one of two modes: tiled images or stacked images. It is our impression that the workstation setup or configuration in each of these two modes is rather distinct. We sought to establish a commonality to simplify software design, and to enable a single descriptor method to facilitate folder manager development of “hanging” protocols. All current workstation designs use a combination of “off-screen” and “on-screen” memory...

  6. Desk-based workers' perspectives on using sit-stand workstations: a qualitative analysis of the Stand@Work study

    NARCIS (Netherlands)

    Chau, J.Y.; Daley, M.; Srinivasan, A.; Dunn, S.; Bauman, A.E.; van der Ploeg, H.P.

    2014-01-01

    Background: Prolonged sitting time has been identified as a health risk factor. Sit-stand workstations allow desk workers to alternate between sitting and standing throughout the working day, but not much is known about their acceptability and feasibility. Hence, the aim of this study was to

  7. Reliability and validity of risk analysis

    International Nuclear Information System (INIS)

    Aven, Terje; Heide, Bjornar

    2009-01-01

    In this paper we investigate to what extent risk analysis meets the scientific quality requirements of reliability and validity. We distinguish between two types of approaches within risk analysis, relative frequency-based approaches and Bayesian approaches. The former category includes both traditional statistical inference methods and the so-called probability of frequency approach. Depending on the risk analysis approach, the aim of the analysis is different, the results are presented in different ways and consequently the meaning of the concepts reliability and validity are not the same.

  8. The Effects of FreeSurfer Version, Workstation Type, and Macintosh Operating System Version on Anatomical Volume and Cortical Thickness Measurements

    OpenAIRE

    Gronenschild, Ed H. B. M.; Habets, Petra; Jacobs, Heidi I. L.; Mengelers, Ron; Rozendaal, Nico; van Os, Jim; Marcelis, Machteld

    2012-01-01

    FreeSurfer is a popular software package for measuring cortical thickness and the volume of neuroanatomical structures. However, little if anything is known about measurement reliability across various data processing conditions. Using a set of 30 anatomical T1-weighted 3T MRI scans, we investigated the effects of data processing variables such as FreeSurfer version (v4.3.1, v4.5.0, and v5.0.0), workstation (Macintosh and Hewlett-Packard), and Macintosh operating system version (OSX 10.5 and OSX 10.6). S...

  9. Structural Reliability Analysis of Wind Turbines: A Review

    Directory of Open Access Journals (Sweden)

    Zhiyu Jiang

    2017-12-01

    The paper presents a detailed review of state-of-the-art research activities on structural reliability analysis of wind turbines between the 1990s and 2017. We describe the reliability methods, including the first- and second-order reliability methods and the simulation reliability methods, and show the procedure for and application areas of structural reliability analysis of wind turbines. Further, we critically review the various structural reliability studies on rotor blades, bottom-fixed support structures, floating systems and mechanical and electrical components. Finally, future applications of structural reliability methods to wind turbine designs are discussed.

  10. Reliability analysis of reactor pressure vessel intensity

    International Nuclear Information System (INIS)

    Zheng Liangang; Lu Yongbo

    2012-01-01

    This paper performs a reliability analysis of a reactor pressure vessel (RPV) with ANSYS. The analysis methods include the direct Monte Carlo simulation method, Latin hypercube sampling, central composite design and Box-Behnken matrix design. The RPV integrity reliability under the given input conditions is estimated. The results show that the factors affecting the reliability of the RPV base material are, in descending order, internal pressure, allowable basic stress and elasticity modulus of the base material, while the factors affecting the bolt reliability are, in descending order, allowable basic stress of the bolt material, bolt preload and internal pressure. (authors)
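
    Of the sampling schemes listed, Latin hypercube sampling can be sketched compactly: each variable's range is split into n equal strata, and each stratum is used exactly once per variable, in a random order. A minimal sketch on the unit hypercube; mapping the samples to physical variables such as internal pressure via inverse CDFs is left out:

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Latin hypercube sample on the unit hypercube: each variable's
    range is split into n_samples equal strata and each stratum is
    used exactly once, in a random order per variable."""
    rng = random.Random(seed)
    sample = [[0.0] * n_vars for _ in range(n_samples)]
    for j in range(n_vars):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # one point drawn uniformly inside stratum strata[i]
            sample[i][j] = (strata[i] + rng.random()) / n_samples
    return sample
```

    Compared with direct Monte Carlo, the stratification guarantees coverage of each variable's whole range with far fewer samples, which is why LHS is a common choice when each sample means a full finite element run.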

  11. Stand by Me: Qualitative Insights into the Ease of Use of Adjustable Workstations.

    Science.gov (United States)

    Leavy, Justine; Jancey, Jonine

    2016-01-01

    Office workers sit for more than 80% of the work day, making them an important target for work site health promotion interventions to break up prolonged sitting time. Adjustable workstations are one strategy used to reduce prolonged sitting time. This study provides both an employees' and an employers' perspective on the advantages, disadvantages, practicality and convenience of adjustable workstations and how movement in the office can be further supported by organisations. This qualitative study was part of the Uprising pilot study. Employees were from the intervention arm of a two-group (intervention n = 18 and control n = 18) study. Employers were the immediate line-managers of the employees. Data were collected via employee focus groups (n = 17) and individual employer interviews (n = 12). The majority of participants were female (n = 18), had a healthy weight, and had a post-graduate qualification. All focus group discussions and interviews were recorded and transcribed verbatim, and the data were coded according to content. Qualitative content analysis was conducted. Employee data identified four concepts: enhanced general wellbeing; workability and practicality; disadvantages of the retro-fit; and triggers to stand. Most employees (n = 12) reported enhanced general wellbeing; workability and practicality included less email exchange and positive interaction (n = 5), while the instability of the keyboard was a commonly cited disadvantage. Triggers to stand included time- and task-based prompts. Employer data concepts included: general health and wellbeing; work engagement; flexibility; employee morale; and injury prevention. Over half of the employers (n = 7) emphasised back care and occupational health considerations as important, as well as the increased level of staff engagement and strategies to break up prolonged periods of sitting. The focus groups highlight the perceived general health benefits from this short intervention, including the opportunity to sit less and interact

  12. Stand by Me: Qualitative Insights into the Ease of Use of Adjustable Workstations

    Directory of Open Access Journals (Sweden)

    Jonine Jancey

    2016-08-01

    Background: Office workers sit for more than 80% of the work day, making them an important target for work site health promotion interventions to break up prolonged sitting time. Adjustable workstations are one strategy used to reduce prolonged sitting time. This study provides both an employees' and an employers' perspective on the advantages, disadvantages, practicality and convenience of adjustable workstations and how movement in the office can be further supported by organisations. This qualitative study was part of the Uprising pilot study. Employees were from the intervention arm of a two-group (intervention n = 18 and control n = 18) study. Employers were the immediate line-managers of the employees. Data were collected via employee focus groups (n = 17) and individual employer interviews (n = 12). The majority of participants were female (n = 18), had a healthy weight, and had a post-graduate qualification. All focus group discussions and interviews were recorded and transcribed verbatim, and the data were coded according to content. Qualitative content analysis was conducted. Results: Employee data identified four concepts: enhanced general wellbeing; workability and practicality; disadvantages of the retro-fit; and triggers to stand. Most employees (n = 12) reported enhanced general wellbeing; workability and practicality included less email exchange and positive interaction (n = 5), while the instability of the keyboard was a commonly cited disadvantage. Triggers to stand included time- and task-based prompts. Employer data concepts included: general health and wellbeing; work engagement; flexibility; employee morale; and injury prevention. Over half of the employers (n = 7) emphasised back care and occupational health considerations as important, as well as the increased level of staff engagement and strategies to break up prolonged periods of sitting. Discussion: The focus groups highlight the perceived general health benefits from this short

  13. System reliability analysis with natural language and expert's subjectivity

    International Nuclear Information System (INIS)

    Onisawa, T.

    1996-01-01

    This paper introduces natural language expressions and expert subjectivity into system reliability analysis. To this end, the paper defines a subjective measure of reliability and presents a method of system reliability analysis using this measure. The subjective measure of reliability corresponds to natural language expressions of reliability estimation and is represented by a fuzzy set defined on [0,1]. The presented method deals with the dependence among subsystems and employs parametrized operations on subjective measures of reliability which can reflect the expert's subjectivity towards the analyzed system. The analysis results are also expressed in linguistic terms. Finally, the paper gives an example of system reliability analysis by the presented method
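
    The parametrized operations of the paper are not given in the abstract; a minimal sketch of the underlying idea combines triangular fuzzy reliabilities (lo, mode, hi) for a series system by vertex-wise multiplication and maps the result back to a linguistic term. This is a common textbook approximation with illustrative thresholds, not the paper's operators:

```python
def series_reliability(fuzzy_rels):
    """Combine triangular fuzzy reliabilities (lo, mode, hi) of
    independent subsystems in series by vertex-wise multiplication.
    A simple approximation, not the parametrized operators of the paper."""
    lo, mode, hi = 1.0, 1.0, 1.0
    for (a, b, c) in fuzzy_rels:
        assert 0.0 <= a <= b <= c <= 1.0
        lo, mode, hi = lo * a, mode * b, hi * c
    return (lo, mode, hi)

def linguistic_label(mode):
    """Map the modal reliability back to a coarse linguistic term
    (thresholds are illustrative)."""
    if mode >= 0.99:
        return "very reliable"
    if mode >= 0.9:
        return "reliable"
    if mode >= 0.7:
        return "moderately reliable"
    return "unreliable"
```

    For example, two subsystems judged "(0.9, 0.95, 1.0)" and "(0.8, 0.9, 0.95)" combine to a modal system reliability of about 0.855, i.e. "moderately reliable" under these thresholds.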

  14. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    Science.gov (United States)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  15. Reliability analysis in intelligent machines

    Science.gov (United States)

    Mcinroy, John E.; Saridis, George N.

    1990-01-01

    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. Using concepts from information theory and reliability theory, new techniques are proposed for finding the reliability corresponding to alternative subsets of control and sensing strategies, such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  16. Treadmill workstations: the effects of walking while working on physical activity and work performance.

    Directory of Open Access Journals (Sweden)

    Avner Ben-Ner

    We conducted a 12-month-long experiment in a financial services company to study how the availability of treadmill workstations affects employees' physical activity and work performance. We enlisted sedentary volunteers, half of whom received treadmill workstations during the first two months of the study and the rest in the seventh month of the study. Participants could operate the treadmills at speeds of 0-2 mph and could use a standard chair-desk arrangement at will. (a) Weekly online performance surveys were administered to participants and their supervisors, as well as to all other sedentary employees and their supervisors. Using within-person statistical analyses, we find that overall work performance, quality and quantity of performance, and interactions with coworkers improved as a result of the adoption of treadmill workstations. (b) Participants were outfitted with accelerometers at the start of the study. We find that daily total physical activity increased as a result of the adoption of treadmill workstations.

  17. Treadmill workstations: the effects of walking while working on physical activity and work performance.

    Science.gov (United States)

    Ben-Ner, Avner; Hamann, Darla J; Koepp, Gabriel; Manohar, Chimnay U; Levine, James

    2014-01-01

    We conducted a 12-month-long experiment in a financial services company to study how the availability of treadmill workstations affects employees' physical activity and work performance. We enlisted sedentary volunteers, half of whom received treadmill workstations during the first two months of the study and the rest in the seventh month of the study. Participants could operate the treadmills at speeds of 0-2 mph and could use a standard chair-desk arrangement at will. (a) Weekly online performance surveys were administered to participants and their supervisors, as well as to all other sedentary employees and their supervisors. Using within-person statistical analyses, we find that overall work performance, quality and quantity of performance, and interactions with coworkers improved as a result of adoption of treadmill workstations. (b) Participants were outfitted with accelerometers at the start of the study. We find that daily total physical activity increased as a result of the adoption of treadmill workstations.

  18. STARS software tool for analysis of reliability and safety

    International Nuclear Information System (INIS)

    Poucet, A.; Guagnini, E.

    1989-01-01

    This paper reports on the STARS (Software Tool for the Analysis of Reliability and Safety) project, which aims at developing an integrated set of computer-aided reliability analysis tools for the various tasks involved in system safety and reliability analysis, including hazard identification, qualitative analysis, and logic model construction and evaluation. Expert system technology offers the most promising perspective for developing such a computer-aided reliability analysis tool. Combined with graphics and analysis capabilities, it can provide a natural, engineering-oriented environment for computer-assisted reliability and safety modelling and analysis. For hazard identification and fault tree construction, a frame/rule-based expert system is used, in which the deductive (goal-driven) reasoning and the heuristics applied during manual fault tree construction are modelled. Expert systems can explain their reasoning, so that the analyst can become aware of why and how results are being obtained. Hence, the learning aspect involved in manual reliability and safety analysis can be maintained and improved

  19. Physics and detector simulation facility Type O workstation specifications

    International Nuclear Information System (INIS)

    Chartrand, G.; Cormell, L.R.; Hahn, R.; Jacobson, D.; Johnstad, H.; Leibold, P.; Marquez, M.; Ramsey, B.; Roberts, L.; Scipioni, B.; Yost, G.P.

    1990-11-01

    This document specifies the requirements for the front-end network of workstations of a distributed computing facility. This facility will be needed to perform the physics and detector simulations for the design of Superconducting Super Collider (SSC) detectors, and other computations in support of physics and detector needs. A detailed description of the computer simulation facility is given in the overall system specification document. This document provides revised subsystem specifications for the network of monitor-less Type 0 workstations and supersedes the requirements given previously. In Section 2 a brief functional description of the facility and its use is provided. The list of detailed specifications (vendor requirements) is given in Section 3, and the qualifying requirements (benchmarks) are described in Section 4

  20. BioPhotonics Workstation: a university tech transfer challenge

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew Rafael; Tauro, Sandeep

    2011-01-01

    Conventional optical trapping or tweezing is often limited in the achievable trapping range because of high numerical aperture and imaging requirements. To circumvent this, we are developing a next generation BioPhotonics Workstation platform that supports extension modules through a long working...

  1. Ergonomics in the computer workstation | Karoney | East African ...

    African Journals Online (AJOL)

    Background: Awareness of the effects of long-term computer use, and the application of ergonomics in the computer workstation, is important for preventing musculoskeletal disorders, eyestrain and psychosocial effects. Objectives: To determine the awareness of physical and psychological effects of prolonged computer usage ...

  2. Out of Hours Emergency Computed Tomography Brain Studies: Comparison of Standard 3 Megapixel Diagnostic Workstation Monitors With the iPad 2.

    Science.gov (United States)

    Salati, Umer; Leong, Sum; Donnellan, John; Kok, Hong Kuan; Buckley, Orla; Torreggiani, William

    2015-11-01

    The purpose was to compare the performance of diagnostic workstation monitors and the Apple iPad 2 (Cupertino, CA) in the interpretation of emergency computed tomography (CT) brain studies. Two experienced radiologists interpreted 100 random emergency CT brain studies both on on-site diagnostic workstation monitors and on the iPad 2 via remote access. The radiologists were blinded to patient clinical details and to each other's interpretation, and the study list was randomized between interpretations on the different modalities. Interobserver agreement between radiologists and intraobserver agreement between modalities were determined, and Cohen kappa coefficients were calculated for each. Performance with regard to urgent and nonurgent abnormalities was assessed separately. There was substantial intraobserver agreement for both radiologists between the modalities, with overall calculated kappa values of 0.959 and 0.940 in detecting acute abnormalities and perfect agreement with regard to hemorrhage. Intraobserver agreement kappa values were 0.939 and 0.860 for nonurgent abnormalities. Interobserver agreement between the 2 radiologists for both the diagnostic monitors and the iPad 2 was also substantial, ranging from 0.821-0.860. The iPad 2 is a reliable modality for the interpretation of CT brain studies in the emergency setting and for the detection of acute and chronic abnormalities, with comparable performance to standard diagnostic workstation monitors. Copyright © 2015 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
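
    The agreement statistics quoted above can be reproduced with a few lines of code. The following is a minimal sketch of Cohen's kappa for two raters; the function and the example ratings are illustrative and are not tied to the study's data.

```python
def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same cases.

    ratings_a, ratings_b: equal-length lists of category labels.
    Assumes at least one disagreement category exists (p_e < 1).
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of cases where the raters match.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: expected match rate if the raters were independent.
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

    For example, two raters who agree on 3 of 4 binary calls, with the marginals shown, yield kappa = 0.5, while identical rating lists yield kappa = 1.0 (perfect agreement).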

  3. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    Science.gov (United States)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

    We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA2O2). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bits, and operates at 75 Hz in a noninterlaced mode. Transmission of data through a 12-to-8-bit lookup table into the display controllers occurs at 20 MBytes/second (.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be manipulated easily with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.
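
    The 12-to-8-bit window/level lookup table described above can be sketched as follows. This is a generic illustration of the technique, not the paper's implementation; the function name and the window/level values are assumptions.

```python
def window_level_lut(window, level, in_bits=12):
    """Build a LUT mapping 12-bit pixel values to 8-bit display values.

    Values below (level - window/2) map to 0, values above
    (level + window/2) map to 255, with a linear ramp in between.
    Assumes window > 0.
    """
    lo = level - window / 2
    lut = []
    for v in range(2 ** in_bits):
        t = (v - lo) / window          # position within the window, 0..1
        lut.append(max(0, min(255, round(t * 255))))
    return lut
```

    Moving the trackball would simply rebuild (or index into) this table, which is why interactive window/level adjustment is cheap even on modest hardware.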

  4. The effect of dynamic workstations on the performance of various computer and office-based tasks

    NARCIS (Netherlands)

    Burford, E.M.; Botter, J.; Commissaris, D.; Könemann, R.; Hiemstra-Van Mastrigt, S.; Ellegast, R.P.

    2013-01-01

    The effect of different workstations, conventional and dynamic, on several types of performance measures for different office and computer-based tasks was investigated in this research paper. The two dynamic workstations assessed were the Lifespan Treadmill Desk and the RightAngle

  5. Users Guide to VSMOKE-GIS for Workstations

    Science.gov (United States)

    Mary F. Harms; Leonidas G. Lavdas

    1997-01-01

    VSMOKE-GIS was developed to help prescribed burners in the national forests of the Southeastern United States visualize smoke dispersion and to plan prescribed burns. Developed for use on workstations, this decision-support system consists of a graphical user interface, written in Arc/Info Arc Macro Language, and is linked to a FORTRAN computer program. VSMOKE-GIS...

  6. Post-deployment usability evaluation of a radiology workstation

    NARCIS (Netherlands)

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi; Oudkerk, Matthijs; van Ooijen, Peter

    2015-01-01

    Objective To evaluate the usability of a radiology workstation after deployment in a hospital. Significance In radiology, it is difficult to perform valid pre-deployment usability evaluations due to the heterogeneity of the user group, the complexity of the radiological workflow, and the complexity

  7. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  8. Graphics metafile interface to ARAC emergency response models for remote workstation study

    International Nuclear Information System (INIS)

    Lawver, B.S.

    1985-01-01

    The Department of Energy's Atmospheric Response Advisory Capability models are executed on computers at a central computer center, with the output distributed to accident advisors in the field. The output of these atmospheric diffusion models is generated as contoured isopleths of concentrations. When these isopleths are overlaid with local geography, they become a useful tool for the accident site advisor. ARAC has developed a workstation that is located at potential accident sites. The workstation allows the accident advisor to view color plots of the model results, scale those plots and print black-and-white hardcopy of the model results. The graphics metafile, also known as a Virtual Device Metafile (VDM), allows the models to generate a single device-independent output file that is partitioned into geography, isopleths and labeling information. The metafile is a very compact, output-device-independent data storage technique. The metafile frees the model from either generating output for all known graphic devices or being rerun for additional graphic devices. With the partitioned metafile, ARAC can transmit to the remote workstation the isopleths and labeling for each model. The geography database may not change and can be transmitted only when needed. This paper describes the important features of the remote workstation and how these features are supported by the device-independent graphics metafile

  9. Reliability analysis of reactor inspection robot(RIROB)

    International Nuclear Information System (INIS)

    Eom, H. S.; Kim, J. H.; Lee, J. C.; Choi, Y. R.; Moon, S. S.

    2002-05-01

    This report describes the method and the results of the reliability analysis of RIROB, developed at the Korea Atomic Energy Research Institute. There are many classic techniques and models for reliability analysis. These techniques and models have been used widely and approved in other industries such as the aviation and nuclear industries. Though they have been approved in real fields, they are still insufficient for complicated systems such as RIROB, which are composed of computers, networks, electronic parts, mechanical parts, and software. In particular, the application of these analysis techniques to the digital and software parts of complicated systems is immature at this time, so expert judgement currently plays an important role in evaluating the reliability of such systems. In this report we propose a method which combines diverse evidence relevant to reliability in order to evaluate the reliability of complicated systems such as RIROB. The proposed method combines diverse evidence and performs inference in a formal and quantitative way by using the benefits of Bayesian Belief Nets (BBN)
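
    At its core, BBN-based combination of evidence is Bayesian updating. A minimal sketch (not the report's actual network; the prior and the likelihood pairs are hypothetical) for combining conditionally independent evidence items about a "system reliable" hypothesis:

```python
def combine_evidence(prior, likelihoods):
    """Posterior P(reliable | all evidence) via Bayes' rule.

    prior: prior probability that the system is reliable.
    likelihoods: list of (P(e | reliable), P(e | not reliable)) pairs,
    one per evidence item, assumed conditionally independent.
    """
    p_r = prior          # running joint weight for "reliable"
    p_n = 1.0 - prior    # running joint weight for "not reliable"
    for l_r, l_n in likelihoods:
        p_r *= l_r
        p_n *= l_n
    return p_r / (p_r + p_n)  # normalize
```

    A full BBN generalizes this by allowing structured dependence between evidence nodes instead of the independence assumed here.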

  10. Reliability analysis techniques for the design engineer

    International Nuclear Information System (INIS)

    Corran, E.R.; Witt, H.H.

    1980-01-01

    A fault tree analysis package is described that eliminates most of the housekeeping tasks involved in proceeding from the initial construction of a fault tree to the final stage of presenting a reliability analysis in a safety report. It is suitable for designers with relatively little training in reliability analysis and computer operation. Users can rapidly investigate the reliability implications of various options at the design stage, and evolve a system which meets specified reliability objectives. Later independent review is thus unlikely to reveal major shortcomings necessitating modifications and project delays. The package operates interactively, allowing the user to concentrate on the creative task of developing the system fault tree, which may be modified and displayed graphically. For preliminary analysis, system data can be derived automatically from a generic data bank. As the analysis proceeds, improved estimates of critical failure rates and test and maintenance schedules can be inserted. The computations are standard: identification of minimal cut-sets, estimation of reliability parameters, and ranking of the effect of the individual component failure modes and system failure modes on these parameters. The user can vary the fault trees and data on-line, and print selected data for preferred systems in a form suitable for inclusion in safety reports. A case history is given - that of the HIFAR containment isolation system. (author)
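
    The first computation named above, identification of minimal cut-sets, can be sketched for small trees by brute-force expansion. A production package would use far more efficient algorithms; this toy version, with a hypothetical tree encoding, only illustrates the idea.

```python
def cut_sets(gate, tree):
    """Expand a fault tree into its (not necessarily minimal) cut sets.

    tree maps gate name -> ("AND" | "OR", [children]);
    names absent from tree are basic events.
    """
    if gate not in tree:
        return [frozenset([gate])]          # basic event: its own cut set
    op, children = tree[gate]
    child_sets = [cut_sets(c, tree) for c in children]
    if op == "OR":
        # Any child's cut set fails the gate.
        return [s for sets in child_sets for s in sets]
    # AND: every child must fail -> union over one choice per child.
    result = [frozenset()]
    for sets in child_sets:
        result = [r | s for r in result for s in sets]
    return result

def minimal_cut_sets(gate, tree):
    """Drop any cut set that strictly contains another (absorption)."""
    sets_ = cut_sets(gate, tree)
    return [s for s in sets_ if not any(o < s for o in sets_)]
```

    For TOP = G1 OR A with G1 = A AND B, expansion yields {A, B} and {A}; absorption removes {A, B} because {A} alone already fails the top event.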

  11. Ergonomics standards and guidelines for computer workstation design and the impact on users' health - a review.

    Science.gov (United States)

    Woo, E H C; White, P; Lai, C W K

    2016-03-01

    This paper presents an overview of global ergonomics standards and guidelines for design of computer workstations, with particular focus on their inconsistency and associated health risk impact. Overall, considerable disagreements were found in the design specifications of computer workstations globally, particularly in relation to the results from previous ergonomics research and the outcomes from current ergonomics standards and guidelines. To cope with the rapid advancement in computer technology, this article provides justifications and suggestions for modifications in the current ergonomics standards and guidelines for the design of computer workstations. Practitioner Summary: A research gap exists in ergonomics standards and guidelines for computer workstations. We explore the validity and generalisability of ergonomics recommendations by comparing previous ergonomics research through to recommendations and outcomes from current ergonomics standards and guidelines.

  12. Workplace sitting and height-adjustable workstations: a randomized controlled trial.

    Science.gov (United States)

    Neuhaus, Maike; Healy, Genevieve N; Dunstan, David W; Owen, Neville; Eakin, Elizabeth G

    2014-01-01

    Desk-based office employees sit for most of their working day. To address excessive sitting as a newly identified health risk, best practice frameworks suggest a multi-component approach. However, these approaches are resource intensive and knowledge about their impact is limited. To compare the efficacy of a multi-component intervention to reduce workplace sitting time, to a height-adjustable workstations-only intervention, and to a comparison group (usual practice). Three-arm quasi-randomized controlled trial in three separate administrative units of the University of Queensland, Brisbane, Australia. Data were collected between January and June 2012 and analyzed the same year. Desk-based office workers aged 20-65 (multi-component intervention, n=16; workstations-only, n=14; comparison, n=14). The multi-component intervention comprised installation of height-adjustable workstations and organizational-level (management consultation, staff education, manager e-mails to staff) and individual-level (face-to-face coaching, telephone support) elements. Workplace sitting time (minutes/8-hour workday) was assessed objectively via activPAL3 devices worn for 7 days at baseline and at 3 months (end-of-intervention). At baseline, the mean proportion of workplace sitting time was approximately 77% across all groups (multi-component group 366 minutes/8 hours [SD=49]; workstations-only group 373 minutes/8 hours [SD=36]; comparison 365 minutes/8 hours [SD=54]). Following the intervention and relative to the comparison group, workplace sitting time in the multi-component group was reduced by 89 minutes/8-hour workday (95% CI=-130, -47 minutes). These findings may have important practical and financial implications for workplaces targeting sitting time reductions. Australian New Zealand Clinical Trials Registry 00363297. © 2013 American Journal of Preventive Medicine. Published by American Journal of Preventive Medicine. All rights reserved.

  13. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  14. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    International Nuclear Information System (INIS)

    Boring, Ronald Laurids

    2010-01-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  15. Systems reliability analysis for the national ignition facility

    International Nuclear Information System (INIS)

    Majumdar, K.C.; Annese, C.E.; MacIntyre, A.T.; Sicherman, A.

    1996-01-01

    A Reliability, Availability and Maintainability (RAM) analysis was initiated for the National Ignition Facility (NIF). The NIF is an inertial confinement fusion research facility designed to achieve a controlled thermonuclear reaction; the preferred site for the NIF is the Lawrence Livermore National Laboratory (LLNL). The NIF RAM analysis has three purposes: (1) to allocate top-level reliability and availability goals for the systems, (2) to develop an operability model for optimum maintainability, and (3) to determine the achievability of the allocated goals of the RAM parameters for the NIF systems and the facility operation as a whole. An allocation model assigns the reliability and availability goals for front-line and support systems by a top-down approach; reliability analysis uses a bottom-up approach to determine the system reliability and availability from the component level to the system level
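
    The two directions described, top-down allocation of goals and bottom-up evaluation, can be illustrated with the simplest possible model: a series system with equal apportionment. This is a sketch of the general technique, not the NIF allocation model, and the numbers are hypothetical.

```python
def allocate_equal(system_goal, n):
    """Top-down: apportion a series-system reliability goal equally
    over n components, so that the product of allocations meets the goal."""
    return system_goal ** (1.0 / n)

def series_reliability(component_reliabilities):
    """Bottom-up: a series system is only as reliable as the product
    of its component reliabilities."""
    r = 1.0
    for x in component_reliabilities:
        r *= x
    return r
```

    By construction, feeding the equal allocations back into the bottom-up product recovers the system goal, which is the consistency check between the two approaches.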

  16. Mechanical reliability analysis of tubes intended for hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Nahal, Mourad; Khelif, Rabia [Badji Mokhtar University, Annaba (Algeria)

    2013-02-15

    Reliability analysis constitutes an essential phase in any study concerning reliability. Many industrialists evaluate and improve the reliability of their products during the development cycle - from design to startup (design, manufacture, and exploitation) - to develop their knowledge of the cost/reliability ratio and to control sources of failure. In this study, we present results for hardness, tensile, and hydrostatic tests carried out on steel tubes for transporting hydrocarbons, followed by statistical analysis. The results obtained allow us to conduct a reliability study based on a resistance-demand (stress-strength) model. Thus, the reliability index is calculated and the importance of the variables related to the tube is presented. A reliability-based assessment of residual stress effects is applied to underground pipelines under a roadway, with and without active corrosion. Residual stress has been found to greatly increase the probability of failure, especially in the early stages of pipe lifetime.
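
    For a resistance-demand model with independent normal resistance R and load S, the reliability index is beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), and the failure probability is Phi(-beta). A minimal sketch with made-up tube data (the numbers are not from the study):

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Cornell reliability index for independent normal R (resistance)
    and S (load/demand): beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """P(failure) = Phi(-beta), via the standard normal CDF
    expressed with the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))
```

    For example, mu_R = 500, sigma_R = 30, mu_S = 380, sigma_S = 40 gives beta = 2.4, i.e. a failure probability of roughly 0.8%.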

  17. Criticality codes migration to workstations at the Hanford site

    International Nuclear Information System (INIS)

    Miller, E.M.

    1993-01-01

    Westinghouse Hanford Company, the Hanford Site Operations contractor, Richland, Washington, currently runs criticality codes on the Cray X-MP EA/232 computer but has recommended that the US Department of Energy (DOE-Richland) replace the Cray with more economical workstations

  18. Workstation Table Engineering Model Design, Development, Fabrication, and Testing

    Science.gov (United States)

    2012-05-01

    This research effort is focused on providing a workstation table design that will reduce the risk of occupant injuries due to secondary impacts and to compartmentalize the occupants to prevent impacts with other objects and/or passengers seated acros...

  19. Computer-aided diagnosis workstation and telemedicine network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2009-02-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. Moreover, there is a shortage of doctors in Japan to diagnose medical images. To overcome these problems, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, using a helical CT scanner for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation with these screening algorithms. We also have developed a telemedicine network by using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used on site in telemedicine makes "Encryption of file" and "Success in login" effective. As a result, patients' private information is protected. We can share the screen of the Web medical image conference system from two or more web conference terminals at the same time. An opinion can be exchanged mutually by using a camera and a microphone connected with the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system using the computer-aided diagnosis workstation and our telemedicine network system can increase diagnostic speed, diagnostic accuracy and

  20. Reliability Analysis of Wind Turbines

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2008-01-01

    In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS) the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according

  1. Studies on radio-diagnosis workstations

    International Nuclear Information System (INIS)

    Niguet, A.

    2008-01-01

    Radio-diagnosis ranges from mammography to interventional radiology, represents the great majority of medical examinations, and is therefore the main source of exposure for the population. The author gives an overview of methods for workstation assessment, mainly based on the dose-area product. She indicates the factors affecting the quantity of radiation, and discusses the influence of the type of examination. Measurements enable workers to be classified, an adapted dosimetric follow-up to be implemented, working areas to be delimited, collective and individual protections to be implemented, and recommendations to be drafted. Results obtained for a cardiologist are presented

  2. The integrated workstation: A common, consistent link between nuclear plant personnel and plant information and computerized resources

    International Nuclear Information System (INIS)

    Wood, R.T.; Knee, H.E.; Mullens, J.A.; Munro, J.K. Jr.; Swail, B.K.; Tapp, P.A.

    1993-01-01

    The increasing use of computer technology in the US nuclear power industry has greatly expanded the capability to obtain, analyze, and present data about the plant to station personnel. Data concerning a power plant's design, configuration, operational and maintenance histories, and current status, and the information that can be derived from them, provide the link between the plant and the plant staff. It is through this information bridge that operations, maintenance and engineering personnel understand and manage plant performance. However, it is necessary to transform the vast quantity of data available from various computer systems and across communications networks into clear, concise, and coherent information. In addition, it is important to organize this information into a consolidated, structured form within an integrated environment so that various users throughout the plant have ready access at their local station to the knowledge necessary for their tasks. Thus, integrated workstations are needed to provide the required information and proper software tools, in a manner that can be easily understood and used, to the proper users throughout the plant. An effort is underway at the Oak Ridge National Laboratory to address this need by developing Integrated Workstation functional requirements and implementing a limited-scale prototype demonstration. The Integrated Workstation requirements will define a flexible, expandable computer environment that permits a tailored implementation of workstation capabilities and facilitates future upgrades to add enhanced applications. The functionality to be supported by the Integrated Workstation and the inherent capabilities to be provided by the workstation environment will be described. In addition, general technology areas which are to be addressed in the Integrated Workstation functional requirements will be discussed

  3. Automated processing of forensic casework samples using robotic workstations equipped with nondisposable tips: contamination prevention.

    Science.gov (United States)

    Frégeau, Chantal J; Lett, C Marc; Elliott, Jim; Yensen, Craig; Fourney, Ron M

    2008-05-01

    An automated process has been developed for the analysis of forensic casework samples using TECAN Genesis RSP 150/8 or Freedom EVO liquid handling workstations equipped exclusively with nondisposable tips. Robot tip cleaning routines have been incorporated strategically within the DNA extraction process as well as at the end of each session. Alternative options were examined for cleaning the tips and different strategies were employed to verify cross-contamination. A 2% sodium hypochlorite wash (1/5th dilution of the 10.8% commercial bleach stock) proved to be the best overall approach for preventing cross-contamination of samples processed using our automated protocol. The bleach wash steps do not adversely impact the short tandem repeat (STR) profiles developed from DNA extracted robotically and allow for major cost savings through the implementation of fixed tips. We have demonstrated that robotic workstations equipped with fixed pipette tips can be used with confidence with properly designed tip washing routines to process casework samples using an adapted magnetic bead extraction protocol.
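
    The "2% sodium hypochlorite" figure above is simply the 1/5 dilution of the 10.8% commercial stock, rounded down: 10.8% / 5 = 2.16%. As a trivial check (the helper function is illustrative, not part of the protocol):

```python
def diluted_concentration(stock_pct, dilution_factor):
    """Concentration (in percent) after diluting a stock solution
    by the given factor, e.g. a 1/5 dilution has factor 5."""
    return stock_pct / dilution_factor
```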

  4. Component reliability analysis for development of component reliability DB of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.; Kim, S. H.

    2002-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for PSA and risk-informed applications. We have performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We have collected component operation data and failure/repair data for Korean standard NPPs. We have analyzed the failure data by developing a data analysis method which incorporates the domestic data situation. We have then compared the reliability results with generic data for foreign NPPs
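
    In the simplest constant-failure-rate model, the two quantities named above reduce to the point estimates below. This is a sketch with made-up numbers, not the project's actual estimators, which would also include uncertainty treatment.

```python
def failure_rate(n_failures, operating_hours):
    """Point estimate of a constant failure rate (per hour)
    from recorded failures over cumulative operating time."""
    return n_failures / operating_hours

def steady_state_unavailability(lam, mttr):
    """Asymptotic unavailability of a repairable component:
    U = lambda*MTTR / (1 + lambda*MTTR)."""
    return lam * mttr / (1.0 + lam * mttr)
```

    For example, 3 failures over 30,000 component-hours give lambda = 1e-4/h; with a 10-hour mean repair time the steady-state unavailability is just under 0.1%.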

  5. The driver workstation in commercial vehicles; Ergonomie und Design von Fahrerarbeitsplaetzen in Nutzfahrzeugen

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, W. [HAW-Hamburg (Germany)

    2003-07-01

    Nowadays, ergonomics and design are quality factors and indispensable elements of commercial vehicle design and development. Whereas a vehicle's appearance, i.e. its outside design, produces fascination and image, the design of its passenger cell focuses entirely on drivers and their tasks. Today, passenger-cell design and the ergonomics of driver workstations in commercial vehicles are clearly growing in importance. This article concentrates above all on defining the commercial-vehicle driver, which, within the scope of a research project on coach-driver workstations, provided new insight into the design of driver workstations. In light of the deficits identified, the research project focused mainly on designing driver workstations in line with the latest findings in ergonomics and human engineering. References to the methodology of driver-workstation optimization seem important in this context. The afore-mentioned innovations in the passenger cells of commercial vehicles are explained and described by means of topical and practical examples. (orig.)

  6. Reliability Analysis for Safety Grade PLC(POSAFE-Q)

    International Nuclear Information System (INIS)

    Choi, Kyung Chul; Song, Seung Whan; Park, Gang Min; Hwang, Sung Jae

    2012-01-01

    The safety grade PLC (Programmable Logic Controller) POSAFE-Q was developed recently in accordance with nuclear regulatory requirements. This paper describes a reliability analysis for the digital safety grade PLC (specifically POSAFE-Q). The scope of the reliability analysis covers prediction, calculation of MTBF (Mean Time Between Failures), FMEA (Failure Mode and Effects Analysis), and PFD (Probability of Failure on Demand). (author)
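
    The quantities named above can be illustrated with a short sketch. The failure rate, the yearly proof-test interval, and the low-demand approximation PFD ~ lambda*tau/2 are generic textbook assumptions, not POSAFE-Q data:

```python
import math

def mtbf_from_failure_rate(lam_per_hour: float) -> float:
    """MTBF in hours for a constant failure rate (exponential model)."""
    return 1.0 / lam_per_hour

def pfd_avg(lam_per_hour: float, proof_test_interval_h: float) -> float:
    """Average probability of failure on demand for a periodically
    proof-tested component, low-demand approximation: PFD ~ lambda*tau/2."""
    return lam_per_hour * proof_test_interval_h / 2.0

def reliability(lam_per_hour: float, t_hours: float) -> float:
    """Survival probability R(t) = exp(-lambda*t)."""
    return math.exp(-lam_per_hour * t_hours)

# Illustrative numbers only: lambda = 1e-6 per hour, yearly proof test
lam = 1.0e-6
print(mtbf_from_failure_rate(lam))   # 1e6 hours
print(pfd_avg(lam, 8760.0))          # 4.38e-3
print(reliability(lam, 8760.0))
```

    The same three functions cover the prediction side of the analysis; FMEA and the choice of failure-rate sources are qualitative steps that do not reduce to a formula.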

  7. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation incorporating these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used on site in telemedicine makes "encryption of file" and "success in login" effective; as a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our film-less radiological information system using the computer-aided diagnosis workstation and our telemedicine network system can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  8. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    Science.gov (United States)

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image-processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image-processing software developer does not need detailed information about the other system, but is still able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image-processing networks. Image data transfer using shared memory, in addition to communication based on IPC techniques, is an appealing method that allows PACS workstation developers and image-processing software developers to cooperate while focusing on different interests.
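
    A minimal sketch of the shared-memory image exchange described above, using Python's standard multiprocessing.shared_memory module. The byte payload and the two "sides" are illustrative; OsiriX and MeVisLab use their own protocol and data layout:

```python
from multiprocessing import shared_memory

# "PACS workstation side": allocate a shared block and copy pixel data into it
pixels = bytes(range(256)) * 16            # stand-in for image data (4 KiB)
shm = shared_memory.SharedMemory(create=True, size=len(pixels))
shm.buf[:len(pixels)] = pixels

# "Image-processing server side": attach to the same block by name only,
# as the IPC protocol would transmit just the name and the image geometry
server_view = shared_memory.SharedMemory(name=shm.name)
received = bytes(server_view.buf[:len(pixels)])
assert received == pixels                  # no copy over a socket was needed

server_view.close()
shm.close()
shm.unlink()                               # free the block
```

    The design point is that only a small control message (block name plus geometry) crosses the IPC channel, while the bulk pixel data never leaves shared memory.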

  9. Weibull distribution in reliability data analysis in nuclear power plant

    International Nuclear Information System (INIS)

    Ma Yingfei; Zhang Zhijian; Zhang Min; Zheng Gangyang

    2015-01-01

    Reliability is an important issue affecting each stage of the life cycle, ranging from birth to death of a product or a system. Reliability engineering includes equipment failure data processing, quantitative assessment of system reliability, maintenance, etc. Reliability data refers to the variety of data that describe the reliability of a system or component during its operation. These data may be in the form of numbers, graphics, symbols, texts and curves. Quantitative reliability assessment is the task of reliability data analysis. It provides information related to preventing, detecting, and correcting defects of the reliability design. Reliability data analysis proceeds through the various stages of the product life cycle and reliability activities. Reliability data of Systems, Structures and Components (SSCs) in nuclear power plants is a key factor in probabilistic safety assessment (PSA), reliability-centered maintenance and life cycle management. The Weibull distribution is widely used in reliability engineering, failure analysis, and industrial engineering to represent manufacturing and delivery times. It is commonly used to model time to fail, time to repair and material strength. In this paper, an improved Weibull distribution is introduced to analyze the reliability data of the SSCs in nuclear power plants. An example is given in the paper to present the result of the new method. The Weibull distribution fits reliability data of mechanical equipment in nuclear power plants very well; it is a widely used mathematical model for reliability analysis. The currently common methods are the two-parameter and three-parameter Weibull distributions. Through comparison and analysis, the three-parameter Weibull distribution fits the data better. It can reflect the reliability characteristics of the equipment and is closer to the actual situation. (author)
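
    As a hedged illustration of Weibull fitting (the classic two-parameter case, not the paper's improved three-parameter method, which adds a location parameter), a median-rank regression can be sketched in a few lines. Bernard's approximation and the synthetic failure times are assumptions:

```python
import math

def fit_weibull_mrr(times):
    """Two-parameter Weibull fit by median-rank regression: regress
    ln(-ln(1-F)) on ln(t); the slope is the shape beta and the
    intercept gives the scale eta."""
    t = sorted(times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)          # Bernard's median rank
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    beta = sxy / sxx                       # shape
    eta = math.exp(xbar - ybar / beta)     # scale, from the intercept
    return beta, eta

# Synthetic failure times drawn deterministically from Weibull(beta=2, eta=1000)
u = [(i + 0.5) / 50 for i in range(50)]
data = [1000.0 * (-math.log(1.0 - p)) ** 0.5 for p in u]
beta, eta = fit_weibull_mrr(data)
print(beta, eta)   # close to 2 and 1000
```

    On real plant data the same regression gives a quick first estimate before a maximum-likelihood or three-parameter refinement.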

  10. Control of a pulse height analyzer using an RDX workstation

    International Nuclear Information System (INIS)

    Montelongo, S.; Hunt, D.N.

    1984-12-01

    The Nuclear Chemistry Division of Lawrence Livermore National Laboratory is in the midst of upgrading its radiation counting facilities to automate data acquisition and quality control. This upgrade requires control of a pulse height analyzer (PHA) from an interactive LSI-11/23 workstation running RSX-11M. The PHA is a micro-computer based multichannel analyzer system providing data acquisition, storage, display, manipulation and input/output from up to four independent acquisition interfaces. Control of the analyzer includes reading and writing energy spectra, issuing commands, and servicing device interrupts. The analyzer communicates with the host system over a 9600-baud serial line using the Digital Data Communications Message Protocol (DDCMP). We relieved the RSX workstation CPU of the DDCMP overhead by implementing a DEC-compatible, in-house designed DMA serial line board (the ISL-11) to communicate with the analyzer. An RSX I/O device driver was written to complete the path between the analyzer and the RSX system by providing the link between the communication board and an application task. The I/O driver is written to handle several ISL-11 cards all operating in parallel, thus providing support for control of multiple analyzers from a single workstation. The RSX device driver, its design and use by application code controlling the analyzer, and its operating environment will be discussed.

  11. Fast 2D FWI on a multi and many-cores workstation.

    Science.gov (United States)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

    Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12 cores E5-v2 x86-64 CPU, we present here a MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which simultaneously runs on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is to be able to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each of them having up to 4 threads, this many-core can be seen as a shared memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can handle several co-processors making the workstation as a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is just recompiled to get a Xeon Phi x86 binary. This multi and many-core configuration uses standard compilers and associated MPI as well as math libraries under Linux; therefore, the cost of code development remains constant, while improving computation time. We choose to implement the code with the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e running only on the co-processor) thanks to the Linux ssh and NFS capabilities. Usual care of optimization and SIMD vectorization is used to ensure optimal performances, and to analyze the application performances and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (with L-BFGS algorithm) optimization scheme for the model parameters update. Parallelization is achieved through standard MPI shot gathers distribution and OpenMP for domain decomposition within the co-processor. 
Taking advantage of the 16
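
    The finite-difference time-domain forward modeling mentioned above can be illustrated with a deliberately tiny 1D constant-velocity sketch. The authors' code is 2D Fortran with MPI+OpenMP and absorbing boundaries; the grid sizes, Ricker wavelet parameters and rigid boundaries here are illustrative only:

```python
import math

nx, nt = 201, 400
dx, dt, c = 5.0, 0.001, 2000.0           # m, s, m/s -> CFL = c*dt/dx = 0.4
r2 = (c * dt / dx) ** 2

def ricker(t, f0=25.0, t0=0.04):
    """Ricker wavelet source, a common choice in seismic modeling."""
    a = (math.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

p_old = [0.0] * nx
p = [0.0] * nx
src = nx // 2
for it in range(nt):
    p_new = [0.0] * nx                   # edges stay zero: rigid boundaries
    for i in range(1, nx - 1):           # 2nd-order stencil in space and time
        p_new[i] = (2.0 * p[i] - p_old[i]
                    + r2 * (p[i + 1] - 2.0 * p[i] + p[i - 1]))
    p_new[src] += dt * dt * ricker(it * dt)   # injected point source
    p_old, p = p, p_new

print(max(abs(v) for v in p))            # wavefield stays bounded (stable CFL)
```

    The inner spatial loop is exactly what OpenMP domain decomposition would parallelize, while independent shots map naturally to MPI ranks.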

  12. Development of reliability centered maintenance methods and tools

    International Nuclear Information System (INIS)

    Jacquot, J.P.; Dubreuil-Chambardel, A.; Lannoy, A.; Monnier, B.

    1992-12-01

    This paper recalls the development of the RCM (Reliability Centered Maintenance) approach in the nuclear industry and describes the trial study implemented by EDF in the context of the OMF (RCM) Project. The approach developed is currently being applied to about thirty systems (Industrial Project). In parallel, R and D efforts are being maintained to improve the selectivity of the analysis methods. These methods use Probabilistic Safety Study models, thereby guaranteeing better selectivity in the identification of safety-critical elements and enhancing consistency between maintenance and safety studies. They also offer more detailed analysis of operating feedback, invoking for example Bayesian methods combining expert judgement and feedback data. Finally, they propose a functional and material representation of the plant. This dual representation describes both the functions assured by maintenance provisions and the material elements required for their implementation. In the final chapter, the targets of the future OMF workstation are summarized and the latter's insertion in the EDF information system is briefly described. (authors). 5 figs., 2 tabs., 7 refs.

  13. A user interface on networked workstations for MFTF-B plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Balch, T.R.; Renbarger, V.L.

    1986-01-01

    A network of Sun-2/170 workstations is used to provide an interface to the MFTF-B Plasma Diagnostics System at Lawrence Livermore National Laboratory. The Plasma Diagnostics System (PDS) is responsible for control of MFTF-B plasma diagnostic instrumentation. An EtherNet Local Area Network links the workstations to a central multiprocessing system which furnishes data processing, data storage and control services for PDS. These workstations permit a physicist to command data acquisition, data processing, instrument control, and display of results. The interface is implemented as a metaphorical desktop, which helps the operator form a mental model of how the system works. As on a real desktop, functions are provided by sheets of paper (windows on a CRT screen) called worksheets. The worksheets may be invoked by pop-up menus and may be manipulated with a mouse. These worksheets are actually tasks that communicate with other tasks running in the central computer system. By making entries in the appropriate worksheet, a physicist may specify data acquisition or processing, control a diagnostic, or view a result

  14. Optimizing the pathology workstation "cockpit": Challenges and solutions

    Directory of Open Access Journals (Sweden)

    Elizabeth A Krupinski

    2010-01-01

    The 21st century has brought numerous changes to the clinical reading (i.e., image or virtual pathology slide interpretation) environment of pathologists, and it will continue to change even more dramatically as information and communication technologies (ICTs) become more widespread in the integrated healthcare enterprise. The extent to which these changes impact the practicing pathologist differs as a function of the technology under consideration, but digital "virtual slides" and the viewing of images on computer monitors instead of glass slides through a microscope clearly represent a significant change in the way that pathologists extract information from these images and render diagnostic decisions. One of the major challenges facing pathologists in this new era is how best to optimize the pathology workstation, the reading environment and the new and varied types of information available in order to ensure efficient and accurate processing of this information. Although workstations can be stand-alone units with images imported via external storage devices, this scenario is becoming less common as pathology departments connect to information highways within their hospitals and to external sites. Picture Archiving and Communication Systems are no longer confined to radiology departments but serve the entire integrated healthcare enterprise, including pathology. In radiology, the workstation is often referred to as the "cockpit" with a "digital dashboard" and the reading room as the "control room." Although pathology has yet to "go digital" to the extent that radiology has, lessons derived from radiology reading "cockpits" can be quite valuable in setting up the digital pathology reading room. In this article, we describe the concept of the digital dashboard and provide some recent examples of informatics-based applications that have been shown to improve workflow and quality in digital reading environments.

  15. Reliability Analysis of Tubular Joints in Offshore Structures

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard

    1987-01-01

    Reliability analysis of single tubular joints and offshore platforms with tubular joints is presented. The failure modes considered are yielding, punching, buckling and fatigue failure. Element reliability as well as systems reliability approaches are used and illustrated by several examples. Finally, optimal design of tubular joints with reliability constraints is discussed and illustrated by an example.

  16. Waste package reliability analysis

    International Nuclear Information System (INIS)

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table

  17. Swimming pool reactor reliability and safety analysis

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1997-01-01

    A reliability and safety analysis of the Swimming Pool Reactor at the China Institute of Atomic Energy is carried out using event/fault tree techniques. The paper briefly describes the analysis model, the analysis code and the main results. It also describes the impact of unassigned operation states on safety, the estimated effectiveness of defence tactics in maintenance against common-cause failure, the effectiveness of recovery actions on system reliability, and a comparison of core damage frequencies obtained using generic and specific data.

  18. Probabilistic risk assessment course documentation. Volume 3. System reliability and analysis techniques, Session A - reliability

    International Nuclear Information System (INIS)

    Lofgren, E.V.

    1985-08-01

    This course in System Reliability and Analysis Techniques focuses on the quantitative estimation of reliability at the systems level. Various methods are reviewed, but the structure provided by the fault tree method is used as the basis for system reliability estimates. The principles of fault tree analysis are briefly reviewed. Contributors to system unreliability and unavailability are reviewed, models are given for quantitative evaluation, and the requirements for both generic and plant-specific data are discussed. Also covered are issues of quantifying component faults that relate to the systems context in which the components are embedded. All reliability terms are carefully defined. 44 figs., 22 tabs
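
    The fault-tree quantification step described above can be sketched as follows. The two-cut-set system and the basic-event probabilities are hypothetical, and independence of basic events is assumed:

```python
from itertools import combinations

def cut_set_prob(events, p):
    """Probability that all basic events in a set occur."""
    prod = 1.0
    for e in events:
        prod *= p[e]
    return prod

def top_event_probability(cut_sets, p):
    """Exact top-event probability from minimal cut sets of independent
    basic events, via inclusion-exclusion over the cut-set list
    (fine for small models)."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            union = frozenset().union(*combo)
            total += (-1) ** (k + 1) * cut_set_prob(union, p)
    return total

# Hypothetical two-train system: TOP = (A and B) or (A and C)
cut_sets = [frozenset("AB"), frozenset("AC")]
p = {"A": 1e-3, "B": 1e-2, "C": 2e-2}
exact = top_event_probability(cut_sets, p)
rare_event = sum(cut_set_prob(cs, p) for cs in cut_sets)  # common upper bound
print(exact, rare_event)
```

    The rare-event sum over cut sets is the approximation most often quoted in PRA courses; inclusion-exclusion shows how little it overstates the result when basic-event probabilities are small.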

  19. The image-interpretation-workstation of the future: lessons learned

    Science.gov (United States)

    Maier, S.; van de Camp, F.; Hafermann, J.; Wagner, B.; Peinsipp-Byma, E.; Beyerer, J.

    2017-05-01

    In recent years, professionally used workstations have become increasingly complex and multi-monitor systems are more and more common. Novel interaction techniques like gesture recognition were developed but are used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments, where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace comprised of four screens. Each screen is dedicated to a special task in the image interpreting process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool and a reporting tool. To further support the complex task of image interpreting, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies like touchscreens, face identification and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical extensive tasks of image interpreting were devised and approved by military personnel. They were then tested with a current setup of an image interpreting workstation using only keyboard and mouse against our image-interpretation-workstation of the future. To get a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head-pose estimation and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.

  20. Comparison of personal computer with CT workstation in the evaluation of 3-dimensional CT image of the skull

    International Nuclear Information System (INIS)

    Kang, Bok Hee; Kim, Kee Deog; Park, Chang Seo

    2001-01-01

    To evaluate the usefulness of 3-dimensional images reconstructed on a personal computer in comparison with those of a CT workstation by quantitative comparison and analysis. The spiral CT data obtained from 27 persons were transferred from the CT workstation to a personal computer, and they were reconstructed as 3-dimensional images on the personal computer using V-works 2.0™. One observer obtained 14 measurements on the reconstructed 3-dimensional images on both the CT workstation and the personal computer. A paired t-test was used to evaluate the intraobserver difference and the mean value of each measurement on the CT workstation and the personal computer. Pearson correlation analysis and % incongruence were also performed. I-Gn, N-Gn, N-A, N-Ns, B-A and G-Op did not show any statistically significant difference (p>0.05); B-O, B-N, Eu-Eu, Zy-Zy, Biw, D-D, and Orbrd R and L had statistically significant differences (p<0.05), but the mean differences of all measurements were below 2 mm, except for D-D. The correlation coefficient γ was greater than 0.95 at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and it was 0.75 at B-O, 0.78 at D-D, and 0.82 at both Orbrd R and L. The % incongruence was below 4% at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and 7.18%, 10.78%, 4.97% and 5.89% at B-O, D-D, and Orbrd R and L, respectively. The personal computer can be considered greatly useful for reconstruction of 3-dimensional images in terms of economics, accessibility and convenience, except for thin bones and landmarks which are difficult to locate.

  1. Workstation computer systems for in-core fuel management

    International Nuclear Information System (INIS)

    Ciccone, L.; Casadei, A.L.

    1992-01-01

    The advancement of powerful engineering workstations has made it possible to have thermal-hydraulics and accident analysis computer programs operating efficiently with a significant performance/cost ratio compared to large mainframe computers. Today, nuclear utilities are acquiring independent engineering analysis capability for fuel management and safety analyses. Computer systems currently available to utility organizations vary widely, thus requiring that this software be operational on a number of computer platforms. Recognizing these trends, Westinghouse adopted a software development life cycle process for software development activities which strictly controls the development, testing and qualification of design computer codes. In addition, software standards to ensure maximum portability were developed and implemented, including adherence to FORTRAN 77 and use of uniform system interface and auxiliary routines. A comprehensive test matrix was developed for each computer program to ensure that the evolution of code versions preserves the licensing basis. In addition, the results of such test matrices establish the Quality Assurance basis and consistency for the same software operating on different computer platforms. (author). 4 figs

  2. Distributed computing and nuclear reactor analysis

    International Nuclear Information System (INIS)

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations

  3. Human reliability analysis using event trees

    International Nuclear Information System (INIS)

    Heslinga, G.

    1983-01-01

    The shut-down procedure of a technologically complex installation such as a nuclear power plant consists of many human actions, some of which have to be performed several times. The procedure is regarded as a chain of modules of specific actions, some of which are analyzed separately. The analysis is carried out by making a Human Reliability Analysis event tree (HRA event tree) of each action, breaking down each action into small elementary steps. The application of event trees in human reliability analysis entails more difficulties than in the case of technical systems, where event trees have mainly been used until now. The most important reason is that the operator is able to recover from a wrong performance; memory influences play a significant role. In this study these difficulties are dealt with theoretically. The following conclusions can be drawn: (1) in principle, event trees may be used in human reliability analysis; (2) although in practice the operator will recover from his fault only partly, theoretically this can be described as starting the whole event tree again; (3) compact formulas have been derived by which the probability of reaching a specific failure consequence on passing through the HRA event tree after several recoveries can be calculated. (orig.)
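
    Conclusion (3) can be illustrated with a minimal sketch: if a single pass through the HRA event tree ends in failure with probability q, and a failed pass is recovered (the whole tree restarted) with probability r, the failure-consequence probability follows a compact formula. The numbers and the specific recovery model are illustrative assumptions, not the paper's derivation:

```python
def failure_prob(q, r, max_recoveries):
    """Probability of reaching the failure consequence when each pass
    through the tree fails with probability q and a failed pass is
    recovered (tree restarted) with probability r. A failure on the
    last allowed pass is terminal."""
    if max_recoveries == 0:
        return q
    return q * ((1.0 - r) + r * failure_prob(q, r, max_recoveries - 1))

def failure_prob_unlimited(q, r):
    """Fixed point of the recursion above: Q = q*(1-r) / (1 - q*r)."""
    return q * (1.0 - r) / (1.0 - q * r)

q, r = 0.1, 0.5                      # illustrative step and recovery probabilities
print(failure_prob(q, r, 3))
print(failure_prob_unlimited(q, r))  # limit of many allowed recoveries
```

    Setting r = 0 recovers the no-recovery case Q = q, and r = 1 drives the failure probability to zero, matching intuition about perfect recovery.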

  4. Application of Metric-based Software Reliability Analysis to Example Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Smidts, Carol

    2008-07-01

    The software reliability of the TELLERFAST ATM software is analyzed by using two metric-based software reliability analysis methods: a state-transition-diagram-based method and a test-coverage-based method. The procedures for software reliability analysis using the two methods and the analysis results are provided in this report. It is found that the two methods complement each other, and therefore further research on combining the two methods to exploit this complementary effect in software reliability analysis is recommended.

  5. Fatigue Reliability Analysis of a Mono-Tower Platform

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1991-01-01

    In this paper, a fatigue reliability analysis of a Mono-tower platform is presented. The failure mode, fatigue failure in the butt welds, is investigated with two different models: one with the fatigue strength expressed through SN relations, the other with the fatigue strength expressed through linear-elastic fracture mechanics (LEFM). In determining the cumulative fatigue damage, Palmgren-Miner's rule is applied. Element reliability, as well as systems reliability, is estimated using first-order reliability methods (FORM). The sensitivity of the systems reliability to various parameters is investigated, including the modelling of the natural period, damping ratio, current, stress spectrum and parameters describing the fatigue strength. Further, soil damping is shown to be significant for the Mono-tower.
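
    The Palmgren-Miner step can be sketched briefly. The Basquin-type SN curve constants and the stress histogram below are illustrative assumptions, not values from the paper:

```python
def basquin_cycles_to_failure(stress_range, a=1e12, m=3.0):
    """SN curve N = a * S^(-m); a and m are illustrative (m = 3 is a
    typical slope for welded steel details)."""
    return a * stress_range ** (-m)

def miner_damage(stress_histogram):
    """Palmgren-Miner cumulative damage D = sum(n_i / N_i); fatigue
    failure is predicted when D reaches 1."""
    return sum(n / basquin_cycles_to_failure(s) for s, n in stress_histogram)

# (stress range in MPa, applied cycles) per sea-state block, illustrative
blocks = [(80.0, 2.0e5), (50.0, 1.0e6), (30.0, 5.0e6)]
D = miner_damage(blocks)
print(D)
```

    In a FORM analysis, a, m and the cycle counts become random variables and the limit state is written as g = Delta - D for a critical damage Delta.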

  6. Prime implicants in dynamic reliability analysis

    International Nuclear Information System (INIS)

    Tyrväinen, Tero

    2016-01-01

    This paper develops an improved definition of a prime implicant for the needs of dynamic reliability analysis. Reliability analyses often aim to identify minimal cut sets or prime implicants, which are minimal conditions that cause an undesired top event, such as a system's failure. Dynamic reliability analysis methods take the time-dependent behaviour of a system into account. This means that the state of a component can change in the analysed time frame and prime implicants can include the failure of a component at different time points. There can also be dynamic constraints on a component's behaviour. For example, a component can be non-repairable in the given time frame. If a non-repairable component needs to be failed at a certain time point to cause the top event, we consider that the condition that it is failed at the latest possible time point is minimal, and the condition in which it fails earlier non-minimal. The traditional definition of a prime implicant does not account for this type of time-related minimality. In this paper, a new definition is introduced and illustrated using a dynamic flowgraph methodology model. - Highlights: • A new definition of a prime implicant is developed for dynamic reliability analysis. • The new definition takes time-related minimality into account. • The new definition is needed in dynamic flowgraph methodology. • Results can be represented by a smaller number of prime implicants.
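
    The time-related minimality rule can be sketched as a filter over candidate implicants. The representation (a mapping component -> failure time, with every component non-repairable) and the example are illustrative assumptions, not the paper's formalism:

```python
def subsumes(b, a):
    """True if implicant b is a weaker condition than a: every requirement
    of b is implied by a. For a non-repairable component, failing at an
    earlier time implies being failed at any later time."""
    return all(c in a and a[c] <= t for c, t in b.items())

def prime_implicants(implicants):
    """Keep only implicants not strictly subsumed by another candidate."""
    result = []
    for i, a in enumerate(implicants):
        dominated = any(
            j != i and b != a and subsumes(b, a)
            for j, b in enumerate(implicants)
        )
        if not dominated:
            result.append(a)
    return result

# Hypothetical top event: pump P must be failed at t=2 when the demand occurs.
# {"P": 1} (fails at t=1) implies {"P": 2} (fails at t=2), so only the
# latest-failure version is minimal in the time-aware sense.
cands = [{"P": 1}, {"P": 2}, {"P": 1, "V": 2}]
print(prime_implicants(cands))   # [{'P': 2}]
```

    This is exactly the distinction the new definition captures: the traditional subset test alone would not discard {"P": 1}, because it is not a superset of {"P": 2}.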

  7. [Influence of different lighting levels at workstations with video display terminals on operators' work efficiency].

    Science.gov (United States)

    Janosik, Elzbieta; Grzesik, Jan

    2003-01-01

    The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal lighting levels at VDT workstations. For two kinds of job (entry of figures from a typescript and editing of text displayed on the screen), the work capacity, the degree of visual strain and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as a reduction of accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that the lighting at VDT workstations should be higher than 200 lx; 300 lx makes the work conditions most comfortable during entry of figures from a typescript, and 500 lx during editing of text displayed on the screen.

  8. Reliability Analysis of Elasto-Plastic Structures

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard

    1984-01-01

    Failure of this type of system is defined either as formation of a mechanism or by failure of a prescribed number of elements. In the first case failure is independent of the order in which the elements fail, but this is not so by the second definition. The reliability analysis consists of two parts ... are described, and the two definitions of failure can be used by the first formulation, but only the failure definition based on formation of a mechanism by the second formulation. The second part of the reliability analysis is an estimate of the failure probability for the structure on the basis ...

  9. Stereotactic biopsy aided by a computer graphics workstation: experience with 200 consecutive cases.

    Science.gov (United States)

    Ulm, A J; Bova, F J; Friedman, W A

    2001-12-01

    The advent of modern computer technology has made it possible to examine not just the target point, but the entire trajectory in planning for stereotactic biopsies. Two hundred consecutive biopsies were performed by one surgeon, utilizing a computer graphics workstation. The target point, entry point, and complete trajectory were carefully scrutinized and adjusted to minimize potential complications. Pathologically abnormal tissue was obtained in 197 cases (98.5%). There was no mortality in this series. Symptomatic hemorrhages occurred in 4 cases (2%). Computer graphics workstations facilitate safe and effective biopsies in virtually any brain area.

  10. Implementation of Active Workstations in University Libraries—A Comparison of Portable Pedal Exercise Machines and Standing Desks

    Directory of Open Access Journals (Sweden)

    Camille Bastien Tardif

    2018-06-01

    Sedentary behaviors are an important issue worldwide, as prolonged sitting time has been associated with health problems. Recently, active workstations have been developed as a strategy to counteract sedentary behaviors. The present study examined the rationale and perceptions of university students and staff following their first use of an active workstation in library settings. Ninety-nine volunteers completed a self-administered questionnaire after using a portable pedal exercise machine (PPEM) or a standing desk (SD). Computer tasks were performed on the SD (p = 0.001) and paperwork tasks on a PPEM (p = 0.037) to a larger extent. Men preferred the SD and women chose the PPEM (p = 0.037). The appreciation of the PPEM was revealed to be higher than for the SD, due to its higher scores on the effective, useful, functional, convenient, and comfortable dimensions. Younger participants (<25 years of age) found the active workstation more pleasant to use than older participants, and participants who spent between 4 and 8 h per day in a seated position found active workstations more effective and convenient than participants sitting fewer than 4 h per day. The results of this study are a preliminary step toward better understanding the feasibility and acceptability of active workstations on university campuses.

  11. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    Science.gov (United States)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, price and reliability must both be considered as decision criteria, since price determines the direct (acquisition) cost while bearing reliability determines indirect costs such as maintenance cost. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost that will be incurred, so the indirect cost of reliability must be considered when making a bearing procurement analysis. This paper explains a bearing evaluation method based on total cost of ownership analysis, which considers price and maintenance cost as decision criteria. Furthermore, since failure data are scarce at the bearing evaluation phase, a reliability prediction method is used to predict bearing reliability from its dynamic load rating parameter. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas for short-term planning the cheaper but less reliable one is preferable. This contextuality can give rise to conflict between stakeholders; thus, the planning horizon needs to be agreed by all stakeholders before making a procurement decision.
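Predicting bearing life from the dynamic load rating parameter, as the abstract describes, is commonly done with the ISO 281 basic rating life formula. The sketch below assumes that formula; the catalogue ratings, load and speed are invented for illustration and are not taken from the paper.

```python
def l10_life_revs(C_kN: float, P_kN: float, ball: bool = True) -> float:
    """Basic rating life L10 in millions of revolutions (ISO 281).

    C_kN: basic dynamic load rating from the bearing catalogue.
    P_kN: equivalent dynamic load on the bearing in service.
    The life exponent is 3 for ball bearings and 10/3 for roller bearings.
    """
    p = 3.0 if ball else 10.0 / 3.0
    return (C_kN / P_kN) ** p

def l10_life_hours(C_kN: float, P_kN: float, rpm: float, ball: bool = True) -> float:
    """Convert L10 from millions of revolutions to operating hours."""
    return l10_life_revs(C_kN, P_kN, ball) * 1e6 / (rpm * 60.0)

# Illustrative (made-up) candidates: a cheaper bearing with C = 30 kN
# versus a dearer one with C = 40 kN, both under a 5 kN load at 1500 rpm.
cheap = l10_life_hours(30.0, 5.0, 1500.0)   # 2400 h
dear = l10_life_hours(40.0, 5.0, 1500.0)    # ≈ 5689 h
```

The longer predicted life of the dearer bearing is what feeds the lower maintenance (indirect) cost in a total-cost-of-ownership comparison.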

  12. Viewport: an object-oriented approach to integrate workstation software for tile and stack mode display.

    Science.gov (United States)

    Ghosh, S; Andriole, K P; Avrin, D E

    1997-08-01

    Diagnostic workstation design has migrated towards display presentation in one of two modes: tiled images or stacked images. It is our impression that the workstation setup or configuration in each of these two modes is rather distinct. We sought to establish a commonality to simplify software design, and to enable a single descriptor method to facilitate folder manager development of "hanging" protocols. All current workstation designs use a combination of "off-screen" and "on-screen" memory whether or not they use a dedicated display subsystem, or merely a video board. Most diagnostic workstations also have two or more monitors. Our central concept is that of a "logical" viewport that can be smaller than, the same size as, or larger than a single monitor. Each port "views" an image data sequence loaded into offscreen memory. Each viewport can display one or more images in sequence in a one-on-one or traditionally tiled presentation. Viewports can be assigned to the available monitor "real estate" in any manner that fits. For example, a single sequence computed tomography (CT) study could be displayed across all monitors in a tiled appearance by assigning a single large viewport to the monitors. At the other extreme, a multisequence magnetic resonance (MR) study could be compared with a similar previous study by assigning four viewports to each monitor, single image display per viewport, and assigning four of the sequences of the current study to the left monitor viewports, and four of the earlier study to the right monitor viewports. Ergonomic controls activate scrolling through the off-screen image sequence data. Workstation folder manager hanging protocols could then specify viewports, number of images per viewport, and the automatic assignment of appropriately named sequences of current and previous studies to the viewports on a radiologist-specific basis. Furthermore, software development is simplified by common base objects and methods of the tile and stack

  13. Reliability Analysis Techniques for Communication Networks in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Lim, T. J.; Jang, S. C.; Kang, H. G.; Kim, M. C.; Eom, H. S.; Lee, H. J.

    2006-09-01

    The objective of this project is to investigate and study existing reliability analysis techniques for communication networks in order to develop reliability analysis models for nuclear power plants' safety-critical networks. It is necessary to make a comprehensive survey of current methodologies for communication network reliability. The major outputs of this study are design characteristics of safety-critical communication networks, efficient algorithms for quantifying the reliability of communication networks, and preliminary models for assessing the reliability of safety-critical communication networks.
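One classical family of algorithms for quantifying network reliability, of the kind surveyed above, computes exact two-terminal reliability by enumerating component states. A minimal sketch (the network, the link probabilities and independence of link failures are all illustrative assumptions; enumeration is feasible only for small networks, since it visits 2^|edges| states):

```python
from itertools import product

def two_terminal_reliability(edges, p, s, t):
    """Exact two-terminal reliability by exhaustive state enumeration.

    edges: list of (u, v) links; p: dict mapping edge index -> probability
    that the link works, with links assumed to fail independently.
    """
    total = 0.0
    for state in product([0, 1], repeat=len(edges)):
        # probability of this joint up/down state
        prob = 1.0
        for i, up in enumerate(state):
            prob *= p[i] if up else 1.0 - p[i]
        # build the surviving graph and check s-t connectivity
        adj = {}
        for i, up in enumerate(state):
            if up:
                u, v = edges[i]
                adj.setdefault(u, set()).add(v)
                adj.setdefault(v, set()).add(u)
        seen, stack = {s}, [s]
        while stack:
            n = stack.pop()
            for m in adj.get(n, ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        if t in seen:
            total += prob
    return total

# A redundant pair of links between two nodes, each working with p = 0.9:
# R = 1 - (1 - 0.9)^2 = 0.99
edges = [("a", "b"), ("a", "b")]
print(round(two_terminal_reliability(edges, {0: 0.9, 1: 0.9}, "a", "b"), 6))  # 0.99
```

Practical tools replace the brute-force enumeration with factoring, sum-of-disjoint-products or BDD-based methods, but the quantity computed is the same.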

  14. Research review and development trends of human reliability analysis techniques

    International Nuclear Information System (INIS)

    Li Pengcheng; Chen Guohua; Zhang Li; Dai Licao

    2011-01-01

    Human reliability analysis (HRA) methods are reviewed. The theoretical basis of human reliability analysis, human error mechanisms, the key elements of HRA methods and the existing HRA methods are introduced and assessed. Their shortcomings, current research hotspots and difficult problems are identified. Finally, the paper takes a close look at trends in human reliability analysis methods. (authors)

  15. A comparison between digital images viewed on a picture archiving and communication system diagnostic workstation and on a PC-based remote viewing system by emergency physicians.

    Science.gov (United States)

    Parasyn, A; Hanson, R M; Peat, J K; De Silva, M

    1998-02-01

    Picture Archiving and Communication Systems (PACS) make possible the viewing of radiographic images on computer workstations located where clinical care is delivered. By the nature of their work this feature is particularly useful for emergency physicians who view radiographic studies for information and use them to explain results to patients and their families. However, the high cost of PACS diagnostic workstations with fuller functionality places limits on the number of and therefore the accessibility to workstations in the emergency department. This study was undertaken to establish how well less expensive personal computer-based workstations would work to support these needs of emergency physicians. The study compared the outcome of observations by 5 emergency physicians on a series of radiographic studies containing subtle abnormalities displayed on both a PACS diagnostic workstation and on a PC-based workstation. The 73 digitized radiographic studies were randomly arranged on both types of workstation over four separate viewing sessions for each emergency physician. There was no statistical difference between a PACS diagnostic workstation and a PC-based workstation in this trial. The mean correct ratings were 59% on the PACS diagnostic workstations and 61% on the PC-based workstations. These findings also emphasize the need for prompt reporting by a radiologist.

  16. Reliability analysis of grid connected small wind turbine power electronics

    International Nuclear Information System (INIS)

    Arifujjaman, Md.; Iqbal, M.T.; Quaicoe, J.E.

    2009-01-01

    Grid connection of small permanent magnet generator (PMG) based wind turbines requires a power conditioning system comprising a bridge rectifier, a dc-dc converter and a grid-tie inverter. This work presents a reliability analysis and an identification of the least reliable component of the power conditioning system of such grid connection arrangements. Reliability of the configuration is analyzed for the worst case scenario of maximum conversion losses at a particular wind speed. The analysis reveals that the reliability of the power conditioning system of such PMG based wind turbines is fairly low and it reduces to 84% of initial value within one year. The investigation is further enhanced by identifying the least reliable component within the power conditioning system and found that the inverter has the dominant effect on the system reliability, while the dc-dc converter has the least significant effect. The reliability analysis demonstrates that a permanent magnet generator based wind energy conversion system is not the best option from the point of view of power conditioning system reliability. The analysis also reveals that new research is required to determine a robust power electronics configuration for small wind turbine conversion systems.
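The drop to 84% of initial reliability within one year reported above is consistent with a simple series model of the conditioning chain, in which the system works only if rectifier, dc-dc converter and inverter all work, and constant (exponential) failure rates add. The individual rates below are invented for illustration, chosen only so that the one-year figure comes out near 84% with the inverter dominant, as the study found; they are not values from the paper.

```python
import math

# Series model of the power conditioning system: rectifier -> dc-dc -> inverter.
# With constant failure rates, the system rate is the sum of the part rates.
rates_per_hour = {
    "rectifier": 2.0e-6,
    "dc_dc_converter": 3.0e-6,   # least significant contributor
    "inverter": 15.0e-6,         # dominant contributor
}

lam = sum(rates_per_hour.values())     # 2.0e-5 failures/hour
t = 8760.0                             # one year of continuous operation
reliability = math.exp(-lam * t)
print(f"R(1 yr) = {reliability:.3f}")  # R(1 yr) = 0.839
```

Because the rates add, cutting the dominant (inverter) rate improves the system far more than improving the converter, which is the practical point of identifying the least reliable component.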

  17. Analysis of operating reliability of WWER-1000 unit

    International Nuclear Information System (INIS)

    Bortlik, J.

    1985-01-01

    The nuclear power unit was divided into 33 technological units. Input data for the reliability analysis were surveys of operating results obtained from the IAEA information system and certain reliability indexes of the technological equipment determined using the Bayes formula. Missing reliability data for technological equipment were taken from the basic variant. The fault tree of the WWER-1000 unit was constructed for the top event defined as the inability to reach 100%, 75% or 50% of rated power. The periods of nuclear power plant operation at reduced output owing to defects were recorded, together with the respective times needed to repair the equipment. The calculation of the availability of the WWER-1000 unit was made for different variant situations. Certain indexes of the operating reliability of the WWER-1000 unit resulting from the detailed reliability analysis are tabulated for selected variants. (E.S.)
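The Bayes-formula updating of equipment reliability indexes mentioned above can be sketched with a conjugate beta-binomial model, a standard choice for demand-based availability data. The prior parameters and observed counts below are invented for illustration, not taken from the study.

```python
# Bayesian update of an equipment availability estimate.
# With a Beta(a, b) prior on availability and n observed demands of which
# k succeeded, the posterior is Beta(a + k, b + n - k).

a, b = 9.0, 1.0          # illustrative prior: availability believed near 0.90
k, n = 47, 50            # illustrative observations: 47 successes in 50 demands

post_a, post_b = a + k, b + (n - k)
posterior_mean = post_a / (post_a + post_b)
print(round(posterior_mean, 3))   # 0.933
```

The posterior mean blends the generic prior with plant-specific operating experience, which is how sparse per-unit data can still yield usable reliability indexes.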

  18. General specifications for the development of a USL/DBMS NASA/PC R and D distributed workstation

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Chum, Frank Y.

    1984-01-01

    The general specifications for the development of a PC-based distributed workstation (PCDWS) for an information storage and retrieval systems environment are defined. This research proposes the development of a PCDWS prototype as part of the University of Southwestern Louisiana Data Base Management System (USL/DBMS) NASA/PC R and D project in the PC-based workstation environment.

  19. Reliability analysis and assessment of structural systems

    International Nuclear Information System (INIS)

    Yao, J.T.P.; Anderson, C.A.

    1977-01-01

    The study of structural reliability deals with the probability of having satisfactory performance of the structure under consideration within any specific time period. To pursue this study, it is necessary to apply available knowledge and methodology in structural analysis (including dynamics) and design, behavior of materials and structures, experimental mechanics, and the theory of probability and statistics. In addition, various severe loading phenomena such as strong motion earthquakes and wind storms are important considerations. For three decades now, much work has been done on reliability analysis of structures, and during this past decade, certain so-called 'Level I' reliability-based design codes have been proposed and are in various stages of implementation. These contributions will be critically reviewed and summarized in this paper. Because of the undesirable consequences resulting from the failure of nuclear structures, it is important and desirable to consider the structural reliability in the analysis and design of these structures. Moreover, after these nuclear structures are constructed, it is desirable for engineers to be able to assess the structural reliability periodically as well as immediately following the occurrence of severe loading conditions such as a strong-motion earthquake. During this past decade, increasing use has been made of techniques of system identification in structural engineering. On the basis of non-destructive test results, various methods have been developed to obtain an adequate mathematical model (such as the equations of motion with more realistic parameters) to represent the structural system

  20. Safety and reliability analysis based on nonprobabilistic methods

    International Nuclear Information System (INIS)

    Kozin, I.O.; Petersen, K.E.

    1996-01-01

    Imprecise probabilities, developed during the last two decades, offer a considerably more general theory with many advantages that make it very promising for reliability and safety analysis. The objective of the paper is to argue that imprecise probabilities are a more appropriate tool for reliability and safety analysis, that they make it possible to model the behavior of nuclear industry objects more comprehensively, and that they offer a way to solve some problems that cannot be solved within the conventional framework. Furthermore, some specific examples are given which show the usefulness of the tool for solving reliability tasks.
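The core idea above, working with bounds rather than point probabilities, can be illustrated for the simplest case: when each component reliability is only known to lie in an interval, bounds for an independent series system follow by interval arithmetic. This is a minimal sketch of the idea with invented intervals, not the full imprecise-probability calculus the paper argues for.

```python
def series_reliability_bounds(bounds):
    """Bounds on series-system reliability from per-component intervals.

    bounds: list of (lo, hi) reliability intervals, one per component,
    assuming independent components (the monotone series structure means
    the bounds multiply directly).
    """
    lo = 1.0
    hi = 1.0
    for l, h in bounds:
        lo *= l
        hi *= h
    return lo, hi

# Two components whose reliabilities are only bracketed by expert judgement:
lo, hi = series_reliability_bounds([(0.90, 0.99), (0.85, 0.95)])
print(lo, hi)   # system reliability lies somewhere in [0.765, 0.9405]
```

Instead of a single, possibly overconfident number, the analysis reports the interval [0.765, 0.9405], making the effect of sparse data visible.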

  1. System Reliability Analysis Considering Correlation of Performances

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Saekyeol; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Lim, Woochul [Mando Corporation, Seongnam (Korea, Republic of)

    2017-04-15

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.
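The gap described above, between the product of marginal reliabilities and the true joint (system) reliability, can be illustrated with a small Monte Carlo sketch. The Gaussian dependence structure, the correlation of 0.8 and the performance margins below are invented for illustration; they are a stand-in for the paper's copula model, not its actual examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two performance margins modeled as correlated normals (a Gaussian copula
# with normal marginals). "Safe" means the margin is positive.
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
g = rng.multivariate_normal([0.8, 0.8], cov, size=200_000)

r1 = np.mean(g[:, 0] > 0)        # marginal reliability of performance 1
r2 = np.mean(g[:, 1] > 0)        # marginal reliability of performance 2
r_sys = np.mean((g[:, 0] > 0) & (g[:, 1] > 0))  # joint (system) reliability

print(r1 * r2, r_sys)  # the independence product understates r_sys here
```

With positively correlated performances, the system reliability exceeds the naive product of the individual reliabilities, which is exactly the discrepancy the copula-based joint PDF is meant to capture.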

  2. System Reliability Analysis Considering Correlation of Performances

    International Nuclear Information System (INIS)

    Kim, Saekyeol; Lee, Tae Hee; Lim, Woochul

    2017-01-01

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.

  3. Active Workstations Do Not Impair Executive Function in Young and Middle-Age Adults.

    Science.gov (United States)

    Ehmann, Peter J; Brush, Christopher J; Olson, Ryan L; Bhatt, Shivang N; Banu, Andrea H; Alderman, Brandon L

    2017-05-01

    This study aimed to examine the effects of self-selected low-intensity walking on an active workstation on executive functions (EF) in young and middle-age adults. Using a within-subjects design, 32 young (20.6 ± 2.0 yr) and 26 middle-age (45.6 ± 11.8 yr) adults performed low-intensity treadmill walking and seated control conditions in randomized order on separate days, while completing an EF test battery. EF was assessed using modified versions of the Stroop (inhibition), Sternberg (working memory), Wisconsin Card Sorting (cognitive flexibility), and Tower of London (global EF) cognitive tasks. Behavioral performance outcomes were assessed using composite task z-scores and traditional measures of reaction time and accuracy. Average HR and step count were also measured throughout. The expected task difficulty effects were found for reaction time and accuracy. No significant main effects or interactions as a function of treadmill walking were found for tasks assessing global EF and the three individual EF domains. Accuracy on the Tower of London task was slightly impaired during slow treadmill walking for both age-groups. Middle-age adults displayed longer planning times for more difficult conditions of the Tower of London during walking compared with sitting. A 50-min session of low-intensity treadmill walking on an active workstation resulted in accruing approximately 4500 steps. These findings suggest that executive function performance remains relatively unaffected while walking on an active workstation, further supporting the use of treadmill workstations as an effective approach to increase physical activity and reduce sedentary time in the workplace.

  4. Experience with workstations for accelerator control at the CERN SPS

    International Nuclear Information System (INIS)

    Ogle, A.; Ulander, J.; Wilkie, I.

    1990-01-01

    The CERN super proton synchrotron (SPS) control system is currently undergoing a major long-term upgrade. This paper reviews progress on the high-level application software with particular reference to the operator interface. An important feature of the control-system upgrade is the move from consoles with a number of fixed screens and limited multitasking ability to workstations with the potential to display a large number of windows and perform a number of independent tasks simultaneously. This workstation environment thus permits the operator to run tasks in one machine for which he previously had to monopolize two or even three old consoles. However, the environment also allows the operator to cover the screen with a multitude of windows, leading to complete confusion. Initial requests to present some form of 'global status' of the console proved to be naive, and several iterations were necessary before the operators were satisfied. (orig.)

  5. Clinical impact and value of workstation single sign-on.

    Science.gov (United States)

    Gellert, George A; Crouch, John F; Gibson, Lynn A; Conklin, George S; Webster, S Luke; Gillean, John A

    2017-05-01

    CHRISTUS Health began implementation of computer workstation single sign-on (SSO) in 2015. SSO technology utilizes a badge reader placed at each workstation where clinicians swipe or "tap" their identification badges. The aim was to assess the impact of SSO implementation in reducing clinician time logging in to various clinical software programs, and in financial savings from migrating to a thin client that enabled replacement of traditional hard-drive computer workstations. Following implementation of SSO, a total of 65,202 logins were sampled systematically during a 7-day period among 2256 active clinical end users in 6 facilities, and the time saved was compared to pre-implementation. Dollar values were assigned to the time saved by 3 groups of clinical end users: physicians, nurses and ancillary service providers. The reduction of total clinician login time over the 7-day period showed a net gain of 168.3 h per week of clinician time, or 28.1 h (2.3 shifts) per facility per week. Annualized, 1461.2 h of mixed physician and nursing time is liberated per facility per annum (121.8 shifts of 12 h per year). The annual dollar cost savings of this reduction in login time is $92,146 per hospital per annum and $1,658,745 per annum in the first-phase implementation of 18 hospitals. Computer hardware equipment savings due to desktop virtualization increase annual savings to $2,333,745. Qualitative value contributions to clinician satisfaction, reduction in staff turnover, facilitation of adoption of EHR applications, and other benefits of SSO are discussed. SSO had a positive impact on clinician efficiency and productivity in the 6 hospitals evaluated, and is a cost-effective method to liberate clinician time from repetitive and time-consuming logins to clinical software applications.
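The annualization behind the per-facility figures quoted in the abstract is simple arithmetic; the sketch below uses only numbers stated above (168.3 h/week across 6 facilities, 52 weeks, 12-hour shifts).

```python
# Reproducing the annualized figures quoted in the abstract.
per_facility_week = 168.3 / 6   # 28.05 h, reported (rounded) as 28.1 h/facility/week
per_facility_year = 28.1 * 52   # 1461.2 h per facility per year
shifts_per_year = 1461.2 / 12   # ≈ 121.8 twelve-hour shifts per year

print(per_facility_week, per_facility_year, shifts_per_year)
```

Note that the yearly figure is computed from the rounded 28.1 h/week value, which is why it matches the abstract's 1461.2 h exactly.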

  6. Reliability analysis of digital based I and C system

    Energy Technology Data Exchange (ETDEWEB)

    Kang, I. S.; Cho, B. S.; Choi, M. J. [KOPEC, Yongin (Korea, Republic of)

    1999-10-01

    Digital technology is rapidly and widely being applied, in Korea as well as in foreign countries, to replace analog components installed in existing plants and to design control and monitoring systems for new nuclear power plants. Despite the many merits of digital technology, it faces a new problem of reliability assurance. Studies addressing this problem are being performed vigorously in foreign countries. The reliability of the KNGR Engineered Safety Features Component Control System (ESF-CCS), a digital-based I and C system, was analyzed to verify fulfillment of the ALWR EPRI-URD requirement for reliability analysis and to eliminate hazards in a design applying new technology. A qualitative analysis using FMEA and a quantitative analysis using a reliability block diagram were performed. The results of the analyses are shown in this paper.

  7. Human reliability analysis of control room operators

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)

    2005-07-01

    Human reliability is the probability that a person correctly performs a system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field during the last years, mainly in the nuclear area. Some first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now an array of so-called second-generation methods is emerging as alternatives, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach has as its tool the ergonomic work analysis. It focuses on the study of operators' activities in physical and mental form, considering at the same time the observed characteristics of the operators and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology to analyze the human reliability of operators of industrial plant control rooms, using a framework that includes the approaches used by ATHEANA and THERP and ergonomic work analysis. (author)

  8. Development of RBDGG Solver and Its Application to System Reliability Analysis

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    For the purpose of making system reliability analysis easier and more intuitive, the RBDGG (Reliability Block Diagram with General Gates) methodology was introduced as an extension of the conventional reliability block diagram. The advantage of the RBDGG methodology is that the structure of a RBDGG model is very similar to the actual structure of the analyzed system, and therefore the modeling of a system for system reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that behind the RGGG (Reliability Graph with General Gates) methodology, which is an extension of the conventional reliability graph. The newly proposed methodology has now been implemented in a software tool, RBDGG Solver, developed as a WIN32 console application. RBDGG Solver receives information on the failure modes and failure probabilities of each component in the system, along with the connection structure and connection logics among the components. Based on the received information, RBDGG Solver automatically generates a system reliability analysis model for the system and then provides the analysis results. In this paper, the application of RBDGG Solver to the reliability analysis of an example system, and verification of the calculation results, are provided to demonstrate how RBDGG Solver is used for system reliability analysis
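A minimal series/parallel evaluator conveys the flavor of what a reliability-block-diagram tool automates. This is a generic sketch, not the RBDGG algorithm itself; the general-gate extension and per-component failure-mode inputs that RBDGG Solver handles are not modeled, and the example system is invented.

```python
# Evaluating a simple reliability block diagram by composing
# series and parallel sub-structures.

def series(*blocks):
    """All blocks must work: reliabilities multiply."""
    r = 1.0
    for b in blocks:
        r *= b
    return r

def parallel(*blocks):
    """At least one block must work: unreliabilities multiply."""
    q = 1.0
    for b in blocks:
        q *= 1.0 - b
    return 1.0 - q

# Illustrative system: two redundant pumps (each R = 0.95)
# feeding a single valve (R = 0.99).
system = series(parallel(0.95, 0.95), 0.99)
print(round(system, 4))   # 0.9875
```

The appeal of an RBDGG-style tool is that the model mirrors the physical structure, so the nested `series(parallel(...))` expression above is read off the plant layout directly.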

  9. Advances in methods and applications of reliability and safety analysis

    International Nuclear Information System (INIS)

    Fieandt, J.; Hossi, H.; Laakso, K.; Lyytikaeinen, A.; Niemelae, I.; Pulkkinen, U.; Pulli, T.

    1986-01-01

    The know-how of VTT's reliability and safety design and analysis techniques has been established over several years of analyzing reliability in the Finnish nuclear power plants Loviisa and Olkiluoto. This experience has later been applied and developed for use in the process industry, conventional power industry, automation and electronics. VTT develops and transfers methods and tools for reliability and safety analysis to the private and public sectors. The technology transfer takes place in joint development projects with potential users. Several computer-aided methods, such as RELVEC for reliability modelling and analysis, have been developed. The tools developed are today used by major Finnish companies in the fields of automation, nuclear power, shipbuilding and electronics. Development of computer-aided and other methods needed in the analysis of operating experience, reliability or safety is continuing in a number of research and development projects

  10. Human reliability analysis methods for probabilistic safety assessment

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-11-01

    Human reliability analysis (HRA) of a probabilistic safety assessment (PSA) includes identifying human actions from the safety point of view, modelling the most important of them in PSA models, and assessing their probabilities. As manifested by many incidents and studies, human actions may have both positive and negative effects on safety and economy. Human reliability analysis is one of the areas of probabilistic safety assessment (PSA) that has direct applications outside the nuclear industry. The thesis focuses upon developments in human reliability analysis methods and data. The aim is to support PSA by extending the applicability of HRA. The thesis consists of six publications and a summary. The summary includes general considerations and a discussion about human actions in the nuclear power plant (NPP) environment. A condensed discussion about the results of the attached publications is then given, including new developments in methods and data. At the end of the summary part, the contribution of the publications to good practice in HRA is presented. In the publications, studies based on the collection of data on maintenance-related failures, simulator runs and expert judgement are presented in order to extend the human reliability analysis database. Furthermore, methodological frameworks are presented to perform a comprehensive HRA, including shutdown conditions, to study the reliability of decision making, and to study the effects of wrong human actions. In the last publication, an interdisciplinary approach to analysing human decision making is presented. The publications also include practical applications of the presented methodological frameworks. (orig.)

  11. Improving the Reliability of LDP through the Divergence of Power Supply for LDP

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bang Sil [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The representative operating systems are the Large Display Panel (LDP), Operator Workstation (OWS) and Computerized Procedure System (CPS). The LDP continuously provides the operator with information on plant essential safety functions, the plant operating mode, the state of major operating variables and important alarms. In case of inverter malfunction or switchover failure, a loss of power to the LDP can occur. The power for the LDP and the operator console workstations had been supplied by only two non-safety power lines, Power Line 1/Power Line 2 (P.L1/P.L2). This study proposes diversifying the power supply for the LDP as a remedy for the power loss caused by inverter unreliability. Ultimately, an additional LDP power line has been installed, improving the safety of LDP operation beyond what it was before. Shin-Kori 3,4 can thus enhance the reliability, economics and safety of the plant.

  12. An advanced tube wear and fatigue workstation to predict flow induced vibrations of steam generator tubes

    International Nuclear Information System (INIS)

    Gay, N.; Baratte, C.; Flesch, B.

    1997-01-01

    Flow-induced tube vibration damage is a major concern for designers and operators of nuclear power plant steam generators (SG). The operating flow-induced vibrational behaviour has to be estimated accurately to allow a precise evaluation of the new safety margins in order to optimize the maintenance policy. For this purpose, an industrial 'Tube Wear and Fatigue Workstation', called the 'GEVIBUS Workstation' and based on an advanced methodology for predictive analysis of flow-induced vibration of tube bundles subject to cross-flow, has been developed at Electricite de France. The GEVIBUS Workstation is an interactive processor linking modules such as: thermalhydraulic computation; a parametric finite element builder; an interface between the finite element model, the thermalhydraulic code and vibratory response computations; refined modelling of fluid-elastic and random forces; linear and non-linear dynamic response of the coupled fluid-structure system; evaluation of tube damage due to fatigue and wear; and graphical outputs. Two practical applications are also presented in the paper; the first simulation refers to an experimental set-up consisting of a straight tube bundle subject to water cross-flow, while the second one deals with an industrial configuration which has been observed in some operating steam generators, i.e. top tube support plate degradation. In the first case the GEVIBUS predictions in terms of tube displacement time histories and phase planes were found to be in very good agreement with experiment. In the second application the GEVIBUS computation showed that a tube with localized degradation is much more stable than a tube located in an extended degradation zone. Important conclusions are also drawn concerning maintenance. (author)

  13. Mathematical Methods in Survival Analysis, Reliability and Quality of Life

    CERN Document Server

    Huber, Catherine; Mesbah, Mounir

    2008-01-01

    Reliability and survival analysis are important applications of stochastic mathematics (probability, statistics and stochastic processes) that are usually covered separately in spite of the similarity of the involved mathematical theory. This title aims to redress this situation: it includes 21 chapters divided into four parts: Survival analysis, Reliability, Quality of life, and Related topics. Many of these chapters were presented at the European Seminar on Mathematical Methods for Survival Analysis, Reliability and Quality of Life in 2006.

  14. The BioPhotonics Workstation: from university research to commercial prototype

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    I will outline the specifications of the compact BioPhotonics Workstation we recently have developed that utilizes high-speed spatial light modulation to generate an array of reconfigurable laser-traps making 3D real-time optical manipulation of advanced structures possible with the use of joysti...

  15. Reliability analysis - systematic approach based on limited data

    International Nuclear Information System (INIS)

    Bourne, A.J.

    1975-11-01

    The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)
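The kind of reliability model this systematic approach leads to for a periodically tested protective system can be sketched in a few lines; the failure rate and test interval are illustrative assumptions, not values from the report:

```python
def standby_unavailability(lam, tau):
    """Mean fractional dead time of a periodically tested standby channel,
    q = lam * tau / 2 (valid for lam * tau << 1)."""
    return lam * tau / 2.0

def two_out_of_three(q):
    """Failure probability of 2-out-of-3 voting logic with independent
    channels: the three double-failure combinations plus the triple failure."""
    return 3 * q**2 * (1 - q) + q**3

# Assumed channel failure rate of 1e-5 per hour, tested roughly quarterly.
q = standby_unavailability(lam=1e-5, tau=730.0)
Q_sys = two_out_of_three(q)
```

Refining q channel by channel as data becomes available at lower hierarchic levels is exactly the iterative process the record describes.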

  16. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    Science.gov (United States)

    Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William

    2009-01-01

    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk- and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision-making environment sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
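The conjugate Beta-Binomial update at the core of this kind of Bayesian data analysis can be sketched in a few lines; the Jeffreys prior and the failure counts below are illustrative, not data from the handbook:

```python
def beta_binomial_update(a, b, failures, demands):
    """Conjugate Bayesian update for a demand failure probability:
    a Beta(a, b) prior combined with Binomial data (k failures in n demands)
    yields a Beta(a + k, b + n - k) posterior."""
    a_post = a + failures
    b_post = b + demands - failures
    mean = a_post / (a_post + b_post)
    return a_post, b_post, mean

# Jeffreys prior Beta(0.5, 0.5) updated with 2 failures in 100 demands
# (illustrative data).
a_post, b_post, p_mean = beta_binomial_update(0.5, 0.5, failures=2, demands=100)
```

The posterior mean (a + k)/(a + b + n) then feeds directly into the risk model in place of a point estimate.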

  17. Digital Processor Module Reliability Analysis of Nuclear Power Plant

    International Nuclear Information System (INIS)

    Lee, Sang Yong; Jung, Jae Hyun; Kim, Jae Ho; Kim, Sung Hun

    2005-01-01

    The systems used in plants, military equipment, satellites, etc. consist of many electronic parts in their control modules, which require relatively high reliability compared with commercial electronic products. In particular, a nuclear power plant, with its radiation safety implications, requires high safety and reliability, so most parts are qualified to military-standard level. Reliability prediction provides a rational basis for system design and also for assessing the safety significance of system operations. Various reliability prediction tools have been developed in recent decades; among them, the MIL-HDBK-217 method has been widely used as a powerful prediction tool. In this work, a reliability analysis of the Digital Processor Module (DPM, the control module of SMART) is performed by the Parts Stress Method based on MIL-HDBK-217F Notice 2. We use Relex 7.6 from the Relex Software Corporation, because the reliability analysis process requires extensive part libraries and data for failure rate calculation
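The parts-stress calculation this record describes can be sketched as follows. The generic form lambda_p = lambda_b * product(pi_i) follows MIL-HDBK-217F, but the base rates and pi factors below are invented placeholders, not values from the handbook or from the DPM analysis:

```python
def part_failure_rate(lambda_b, pi_factors):
    """Generic MIL-HDBK-217F parts-stress form: lambda_p = lambda_b * prod(pi_i).
    Each part class in the handbook has its own base rate and its own set of
    pi factors (temperature, quality, environment, ...)."""
    rate = lambda_b
    for pi in pi_factors:
        rate *= pi
    return rate

# Illustrative bill of materials:
# (base rate in failures per 1e6 h, [pi_T, pi_Q, pi_E]) for each part.
bom = [(0.010, [1.2, 1.0, 2.0]),
       (0.005, [1.5, 1.0, 2.0]),
       (0.020, [1.1, 1.0, 2.0])]
lambda_module = sum(part_failure_rate(lb, pis) for lb, pis in bom)  # per 1e6 h
mtbf_hours = 1e6 / lambda_module
```

Tools such as Relex automate exactly this bookkeeping over thousands of parts, which is why the part libraries matter.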

  18. Emulating conventional operator interfaces on window-based workstations

    International Nuclear Information System (INIS)

    Carr, G.P.

    1990-01-01

    This paper explores an approach to support the LAMPF and PSR control systems on VAX/VMS workstations using DECwindows and VI Corporation Data Views as the operator interface. The PSR control system was recently turned over to MP division and the two control-system staffs were merged into one group. One of the goals of this new group is to develop a common workstation-based operator console and interface which can be used in a single control room controlling both the linac and proton storage ring. The new console operator interface will need a high-level graphics toolkit for its implementation. During the conversion to the new consoles it will also probably be necessary to write a package to emulate the current operator interfaces at the software level. This paper describes a project to evaluate the appropriateness of VI Corporation's Data Views graphics package for use in the LAMPF control-system environment by using it to write an emulation of the LAMPF touch-panel interface to a large LAMPF control-system application program. A secondary objective of this project was to explore any productivity increases that might be realized by using an object-oriented graphics package and graphics editor. (orig.)

  19. Preliminary Analysis of LORAN-C System Reliability for Civil Aviation.

    Science.gov (United States)

    1981-09-01

    overview of the analysis technique. Section 3 describes the computerized LORAN-C coverage model which is used extensively in the reliability analysis... Xth Plenary Assembly, Geneva, 1963, published by the International Telecommunications Union. Braff, R., Computer program to calculate a Markov Chain Reliability Model, unpublished work, MITRE Corporation.

  20. A workstation based spectrometry application for ECR ion source [Paper No.: G5

    International Nuclear Information System (INIS)

    Suresh Babu, R.M. (PS Div.)

    1993-01-01

    A program for an Electron Cyclotron Resonance (ECR) Ion Source beam diagnostics application in a X-Windows/Motif based workstation environment is discussed. The application program controls the hardware and acquires data via a front end computer across a local area network. The data is subsequently processed for displaying on the workstation console. The timing for data acquisition and control is determined by the particle source timing. The user interface has been implemented using the Motif widget set and the actions have been implemented through call back routines. The equipment interface is through a set of database driven calls across the network. (author). 7 refs., 1 fig

  1. Experience in using workstations as hosts in an accelerator control environment

    International Nuclear Information System (INIS)

    Abola, A.; Casella, R.; Clifford, T.; Hoff, L.; Katz, R.; Kennell, S.; Mandell, S.; McBreen, E.; Weygand, D.P.

    1987-03-01

    A new control system has been used for light ion acceleration at the Alternating Gradient Synchrotron (AGS). The control system uses Apollo workstations in the dual role of console hardware computer and controls system host. It has been found that having a powerful dedicated CPU with a demand paging virtual memory OS featuring strong interprocess communication, mapped memory shared files, shared code, and multi-window capabilities, allows us to provide an efficient operation environment in which users may view and manage several control processes simultaneously. The same features which make workstations good console computers also provide an outstanding platform for code development. The software for the system, consisting of about 30K lines of ''C'' code, was developed on schedule, ready for light ion commissioning. System development is continuing with work being done on applications programs

  3. Reliability analysis of digital I and C systems at KAERI

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2013-01-01

    This paper provides an overview of the ongoing research activities on a reliability analysis of digital instrumentation and control (I and C) systems of nuclear power plants (NPPs) performed by the Korea Atomic Energy Research Institute (KAERI). The research activities include the development of a new safety-critical software reliability analysis method by integrating the advantages of existing software reliability analysis methods, a fault coverage estimation method based on fault injection experiments, and a new human reliability analysis method for computer-based main control rooms (MCRs) based on human performance data from the APR-1400 full-scope simulator. The research results are expected to be used to address various issues such as the licensing issues related to digital I and C probabilistic safety assessment (PSA) for advanced digital-based NPPs. (author)

  4. A worldwide flock of Condors : load sharing among workstation clusters

    NARCIS (Netherlands)

    Epema, D.H.J.; Livny, M.; Dantzig, van R.; Evers, X.; Pruyne, J.

    1996-01-01

    Condor is a distributed batch system for sharing the workload of compute-intensive jobs in a pool of unix workstations connected by a network. In such a Condor pool, idle machines are spotted by Condor and allocated to queued jobs, thus putting otherwise unutilized capacity to efficient use. When

  5. Reliability analysis of stiff versus flexible piping

    International Nuclear Information System (INIS)

    Lu, S.C.

    1985-01-01

    The overall objective of this research project is to develop a technical basis for flexible piping designs which will improve piping reliability and minimize the use of pipe supports, snubbers, and pipe whip restraints. The current study was conducted to establish the necessary groundwork based on piping reliability analysis. A confirmatory piping reliability assessment indicated that removing rigid supports and snubbers tends either to improve the piping reliability or to affect it very little. The authors then investigated two changes to be implemented in Regulatory Guide (RG) 1.61 and RG 1.122 aimed at more flexible piping design. They concluded that these changes substantially reduce calculated piping responses and allow piping redesigns with a significant reduction in the number of supports and snubbers without violating ASME code requirements. Furthermore, the more flexible piping redesigns are capable of exhibiting reliability levels equal to or higher than the original stiffer designs. An investigation of the malfunction of pipe whip restraints confirmed that the malfunction introduced higher thermal stresses and tended to reduce the overall piping reliability. Finally, support and component reliabilities were evaluated based on available fragility data. Results indicated that support reliability usually exhibits a moderate decrease as piping flexibility increases. Most on-line pumps and valves showed an insignificant reduction in reliability for a more flexible piping design.

  6. Reliability analysis for Atucha II reactor protection system signals

    International Nuclear Information System (INIS)

    Roca, Jose Luis

    1996-01-01

    Atucha II is a 745 MW Argentine power nuclear reactor constructed by ENACE SA, Nuclear Argentine Company for Electrical Power Generation, and SIEMENS AG KWU, Erlangen, Germany. A preliminary modular logic analysis of the RPS (Reactor Protection System) signals was performed by means of the well-known Swedish professional risk and reliability software Risk Spectrum, taking as a basis a reference signal coded JR17ER003, which commands the valves of the two moderator loops. From the reliability and behaviour knowledge of this reference signal follows an estimation of the reliability of the other 97 RPS signals. Because of the preliminary character of this analysis, importance measures are not computed at this stage. Reliability is predicted by the statistic named unavailability. The scope of this analysis extends from the measurement elements to the RPS buffer outputs. In the present context only one redundancy is analyzed, so in the Instrumentation and Control area no CCF (Common Cause Failures) are present for signals. Finally, those unavailability values could be introduced in the failure domain of the posterior complete Atucha II reliability analysis, which includes all mechanical and electromechanical features. An estimation of the spurious frequency of RPS signals, defined as faulty by no trip, is also performed

  7. Reliability analysis for Atucha II reactor protection system signals

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2000-01-01

    Atucha II is a 745 MW Argentine power nuclear reactor constructed by Nuclear Argentine Company for Electric Power Generation S.A. (ENACE S.A.) and SIEMENS AG KWU, Erlangen, Germany. A preliminary modular logic analysis of the RPS (Reactor Protection System) signals was performed by means of the well-known Swedish professional risk and reliability software Risk Spectrum, taking as a basis a reference signal coded JR17ER003, which commands the valves of the two moderator loops. From the reliability and behaviour knowledge of this reference signal follows an estimation of the reliability of the other 97 RPS signals. Because of the preliminary character of this analysis, importance measures are not computed at this stage. Reliability is predicted by the statistic named unavailability. The scope of this analysis extends from the measurement elements to the RPS buffer outputs. In the present context only one redundancy is analyzed, so in the Instrumentation and Control area no CCF (Common Cause Failures) are present for signals. Finally, those unavailability values could be introduced in the failure domain of the posterior complete Atucha II reliability analysis, which includes all mechanical and electromechanical features. An estimation of the spurious frequency of RPS signals, defined as faulty by no trip, is also performed. (author)
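A minimal sketch of how such signal unavailabilities are quantified from fault-tree minimal cut sets, using the rare-event approximation that tools like Risk Spectrum apply; the cut sets and basic-event values are hypothetical, not taken from the Atucha II model:

```python
def top_event_unavailability(min_cut_sets):
    """Rare-event approximation: Q_top is approximately the sum, over minimal
    cut sets, of the product of the basic-event unavailabilities in each set."""
    total = 0.0
    for cut_set in min_cut_sets:
        prob = 1.0
        for q in cut_set:
            prob *= q
        total += prob
    return total

# Hypothetical minimal cut sets for one RPS signal (basic-event unavailabilities):
cut_sets = [[1e-3, 2e-3],   # sensor AND comparator
            [5e-4, 2e-3],   # transmitter AND comparator
            [1e-5]]         # buffer output, single failure
Q_signal = top_event_unavailability(cut_sets)
```

The resulting per-signal unavailabilities are what would be carried into the failure domain of the complete plant-level analysis.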

  8. A workstation-integrated peer review quality assurance program: pilot study

    International Nuclear Information System (INIS)

    O’Keeffe, Margaret M; Davis, Todd M; Siminoski, Kerry

    2013-01-01

    The surrogate indicator of radiological excellence that has become accepted is consistency of assessments between radiologists, and the technique that has become the standard for evaluating concordance is peer review. This study describes the results of a workstation-integrated peer review program in a busy outpatient radiology practice. Workstation-based peer review was performed using the software program Intelerad Peer Review. Cases for review were randomly chosen from those being actively reported. If an appropriate prior study was available, and if the reviewing radiologist and the original interpreting radiologist had not exceeded review targets, the case was scored using the modified RADPEER system. There were 2,241 cases randomly assigned for peer review. Of selected cases, 1,705 (76%) were interpreted. Reviewing radiologists agreed with prior reports in 99.1% of assessments. Positive feedback (score 0) was given in three cases (0.2%) and concordance (scores of 0 to 2) was assigned in 99.4%, similar to reported rates of 97.0% to 99.8%. Clinically significant discrepancies (scores of 3 or 4) were identified in 10 cases (0.6%). Eighty-eight percent of reviewed radiologists found the reviews worthwhile, 79% found scores appropriate, and 65% felt feedback was appropriate. Two-thirds of radiologists found case rounds discussing significant discrepancies to be valuable. The workstation-based computerized peer review process used in this pilot project was seamlessly incorporated into the normal workday and met most criteria for an ideal peer review system. Clinically significant discrepancies were identified in 0.6% of cases, similar to published outcomes using the RADPEER system. Reviewed radiologists felt the process was worthwhile

  9. A workstation-integrated peer review quality assurance program: pilot study

    Science.gov (United States)

    2013-01-01

    Background The surrogate indicator of radiological excellence that has become accepted is consistency of assessments between radiologists, and the technique that has become the standard for evaluating concordance is peer review. This study describes the results of a workstation-integrated peer review program in a busy outpatient radiology practice. Methods Workstation-based peer review was performed using the software program Intelerad Peer Review. Cases for review were randomly chosen from those being actively reported. If an appropriate prior study was available, and if the reviewing radiologist and the original interpreting radiologist had not exceeded review targets, the case was scored using the modified RADPEER system. Results There were 2,241 cases randomly assigned for peer review. Of selected cases, 1,705 (76%) were interpreted. Reviewing radiologists agreed with prior reports in 99.1% of assessments. Positive feedback (score 0) was given in three cases (0.2%) and concordance (scores of 0 to 2) was assigned in 99.4%, similar to reported rates of 97.0% to 99.8%. Clinically significant discrepancies (scores of 3 or 4) were identified in 10 cases (0.6%). Eighty-eight percent of reviewed radiologists found the reviews worthwhile, 79% found scores appropriate, and 65% felt feedback was appropriate. Two-thirds of radiologists found case rounds discussing significant discrepancies to be valuable. Conclusions The workstation-based computerized peer review process used in this pilot project was seamlessly incorporated into the normal workday and met most criteria for an ideal peer review system. Clinically significant discrepancies were identified in 0.6% of cases, similar to published outcomes using the RADPEER system. Reviewed radiologists felt the process was worthwhile. PMID:23822583

  10. Interactive reliability analysis project. FY 80 progress report

    International Nuclear Information System (INIS)

    Rasmuson, D.M.; Shepherd, J.C.

    1981-03-01

    This report summarizes the progress to date in the interactive reliability analysis project. Its purpose is to develop and demonstrate a reliability and safety technique that can be incorporated early in the design process. The details are illustrated by a simple example of a reactor safety system

  11. Accident Sequence Evaluation Program: Human reliability analysis procedure

    International Nuclear Information System (INIS)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs
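The THERP dependence model that the ASEP procedure builds on (from NUREG/CR-1278) can be sketched as follows; the five-level conditional-HEP equations are the handbook's, while the nominal HEP in the example is illustrative:

```python
def conditional_hep(hep, dependence):
    """THERP dependence model (NUREG/CR-1278): conditional human error
    probability for a task, given failure of the preceding task, at the
    five standard dependence levels."""
    formulas = {
        "zero":     hep,
        "low":      (1 + 19 * hep) / 20,
        "moderate": (1 + 6 * hep) / 7,
        "high":     (1 + hep) / 2,
        "complete": 1.0,
    }
    return formulas[dependence]

# A nominal HEP of 0.003 under high dependence on the preceding task.
chep = conditional_hep(0.003, "high")
```

Note how strongly dependence dominates: under high dependence the conditional HEP is about 0.5 regardless of how small the nominal HEP is, which is why the procedure insists on assessing dependence between successive actions.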

  12. Functionalized 2PP structures for the BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Matsuoka, Tomoyo; Nishi, Masayuki; Sakakura, Masaaki

    2011-01-01

    In its standard version, our BioPhotonics Workstation (BWS) can generate multiple controllable counter-propagating beams to create real-time user-programmable optical traps for stable three-dimensional control and manipulation of a plurality of particles. The combination of the platform with micr...... on the BWS platform by functionalizing them with silica-based sol-gel materials inside which dyes can be entrapped....

  13. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

    Science.gov (United States)

    2013-07-29

    ...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...

  14. State of the art report on aging reliability analysis

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Yang, Joon Eon; Han, Sang Hoon; Ha, Jae Joo

    2002-03-01

    The goal of this report is to describe the state of the art on aging analysis methods to calculate the effects of component aging quantitatively. In this report, we described some aging analysis methods which calculate the increase of Core Damage Frequency (CDF) due to aging by including the influence of aging into PSA. We also described several research topics required for aging analysis for components of domestic NPPs. We have described a statistical model and reliability physics model which calculate the effect of aging quantitatively by using PSA method. It is expected that the practical use of the reliability-physics model will be increased though the process with the reliability-physics model is more complicated than statistical model

  15. Reliability of the Emergency Severity Index: Meta-analysis

    Directory of Open Access Journals (Sweden)

    Amir Mirhaghi

    2015-01-01

    Objectives: Although triage systems based on the Emergency Severity Index (ESI) have many advantages in terms of simplicity and clarity, previous research has questioned their reliability in practice. Therefore, the aim of this meta-analysis was to determine the reliability of ESI triage scales. Methods: This meta-analysis was performed in March 2014. Electronic research databases were searched and articles conforming to the Guidelines for Reporting Reliability and Agreement Studies were selected. Two researchers independently examined the selected abstracts. Data were extracted in the following categories: version of scale (latest/older), participants (adult/paediatric), raters (nurse, physician or expert), method of reliability (intra/inter-rater), reliability statistics (weighted/unweighted kappa) and the origin and publication year of the study. The effect size was obtained by the Z-transformation of reliability coefficients. Data were pooled with random-effects models and a meta-regression was performed based on the method-of-moments estimator. Results: A total of 19 studies from six countries were included in the analysis. The pooled coefficient for the ESI triage scales was substantial at 0.791 (95% confidence interval: 0.787‒0.795). Agreement was higher with the latest and adult versions of the scale and among expert raters, compared to agreement with older and paediatric versions of the scales and with other groups of raters, respectively. Conclusion: ESI triage scales showed an acceptable level of overall reliability. However, ESI scales require more development in order to achieve full agreement from all rater groups. Further studies concentrating on other aspects of reliability assessment are needed.
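The pooling step described in this record can be illustrated with a simplified fixed-effect version of the z-transformation approach; the paper itself uses random-effects models with a method-of-moments estimator, and the coefficients and sample sizes below are invented:

```python
import math

def pool_reliability(coeffs, ns):
    """Pool reliability coefficients via Fisher's z-transformation:
    z = atanh(r), weight each study by n - 3 (the inverse-variance weight
    for a transformed correlation), average, and back-transform with tanh.
    This is a fixed-effect sketch of the paper's random-effects procedure."""
    zs = [math.atanh(r) for r in coeffs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# Three hypothetical studies with kappa-type coefficients and sample sizes.
pooled = pool_reliability([0.75, 0.82, 0.79], [100, 250, 150])
```

Averaging in z-space rather than on the raw coefficients keeps the pooled estimate inside (-1, 1) and stabilizes the variance across studies.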

  16. Habitat Demonstration Unit Medical Operations Workstation Upgrades

    Science.gov (United States)

    Trageser, Katherine H.

    2011-01-01

    This paper provides an overview of the design and fabrication associated with upgrades for the Medical Operations Workstation in the Habitat Demonstration Unit. The work spanned a ten week period. The upgrades will be used during the 2011 Desert Research and Technology Studies (Desert RATS) field campaign. Upgrades include a deployable privacy curtain system, a deployable tray table, an easily accessible biological waste container, reorganization and labeling of the medical supplies, and installation of a retractable camera. All of the items were completed within the ten week period.

  17. Some Ideas on the Microcomputer and the Information/Knowledge Workstation.

    Science.gov (United States)

    Boon, J. A.; Pienaar, H.

    1989-01-01

    Identifies the optimal goal of knowledge workstations as the harmony of technology and human decision-making behaviors. Two types of decision-making processes are described and the application of each type to experimental and/or operational situations is discussed. Suggestions for technical solutions to machine-user interfaces are then offered.…

  18. Reliability analysis in interdependent smart grid systems

    Science.gov (United States)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, the underlying network model, their interactions and relationships, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the cascading failure effect and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
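A toy site-percolation experiment of the kind used in such reliability analyses: remove a fraction of nodes from a grid-like graph and measure the giant functioning component. This sketch uses a single assumed 20x20 grid topology rather than the paper's interdependent networks:

```python
import random

def giant_component_fraction(n, edges, removed):
    """Fraction of the n nodes in the largest connected component after
    removing a set of nodes (union-find; a minimal site-percolation sketch)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in removed or v in removed:
            continue
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for node in range(n):
        if node in removed:
            continue
        root = find(node)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values(), default=0) / n

# 20x20 grid as a stand-in for a grid topology; remove 30% of nodes at random.
n = 400
edges = [(i, i + 1) for i in range(n) if i % 20 != 19] + \
        [(i, i + 20) for i in range(n - 20)]
random.seed(1)
removed = set(random.sample(range(n), 120))
frac = giant_component_fraction(n, edges, removed)
```

Sweeping the removed fraction and plotting `frac` exhibits the percolation threshold the paper analyzes; interdependent networks sharpen that transition.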

  19. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safetly, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  20. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.

    2008-01-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
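The smallest instance of the Markov-chain reliability models discussed in these two records is the two-state repairable component, whose point availability has a closed form; the rates below are illustrative, and the steady-state sensitivity to the failure rate is the kind of quantity the adjoint procedure computes for 186 parameters at once:

```python
import math

def availability(t, lam, mu):
    """Point availability of a repairable component (two-state Markov chain),
    starting in the 'up' state:
    A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 1e-3, 1e-1              # illustrative failure and repair rates, per hour
a_steady = mu / (lam + mu)        # long-run (steady-state) availability
a_10h = availability(10.0, lam, mu)
sens_lam = -mu / (lam + mu) ** 2  # d(a_steady)/d(lam): sensitivity to failure rate
```

For large systems the chain is generated automatically (as QUEFT/MARKOMAG-S does) and the sensitivities come from the adjoint equations rather than analytic differentiation.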

  1. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  2. Durability reliability analysis for corroding concrete structures under uncertainty

    Science.gov (United States)

    Zhang, Hao

    2018-02-01

    This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
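To make the "purely probabilistic" treatment of epistemic uncertainty concrete, here is a hedged sketch of a nested (double-loop) Monte Carlo: the outer loop samples an epistemic parameter, the inner loop samples the aleatory variables. All distributions and numbers are invented, not taken from the paper:

```python
import random
random.seed(0)

def pf_given_mean(mean_load, n=2000):
    # aleatory loop: resistance R and load S are random for a fixed mean load
    fails = 0
    for _ in range(n):
        r = random.gauss(10.0, 1.0)          # assumed resistance (arbitrary units)
        s = random.gauss(mean_load, 1.5)     # assumed chloride-induced load
        fails += (r - s) < 0.0
    return fails / n

# epistemic loop: the mean load itself is uncertain (assumed ~ N(6, 1))
pfs = [pf_given_mean(random.gauss(6.0, 1.0)) for _ in range(50)]
pf_mean = sum(pfs) / len(pfs)
print(f"predictive failure probability ~ {pf_mean:.3f}")
```

Averaging the conditional failure probabilities over the epistemic distribution yields the predictive failure probability; keeping the whole list `pfs` instead would show how epistemic uncertainty spreads the reliability estimate.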

  3. A methodology to incorporate organizational factors into human reliability analysis

    International Nuclear Information System (INIS)

    Li Pengcheng; Chen Guohua; Zhang Li; Xiao Dongsheng

    2010-01-01

    A new holistic methodology for Human Reliability Analysis (HRA) is proposed to model the effects of organizational factors on human reliability. Firstly, a conceptual framework is built and used to analyze the causal relationships between organizational factors and human reliability. Then, an inference model for HRA is built by combining the conceptual framework with Bayesian networks, which is used to execute causal and diagnostic inference of human reliability. Finally, a case example is presented to demonstrate the application of the proposed methodology. The results show that combining the conceptual model with Bayesian networks not only makes it easy to model the causal relationship between organizational factors and human reliability but also, in a given context, allows human operational reliability to be measured quantitatively and the most likely root causes of human error to be identified and prioritized. (authors)
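The causal and diagnostic inference described above can be illustrated with a toy two-node Bayesian network (organizational factor -> human error); the prior and conditional probabilities below are assumptions for illustration only, not values from the paper:

```python
p_org_poor = 0.2                  # assumed prior: organization is poor
p_err_poor, p_err_good = 0.10, 0.01   # assumed P(error | org poor/good)

# causal (forward) inference: marginal probability of a human error
p_error = p_org_poor * p_err_poor + (1 - p_org_poor) * p_err_good

# diagnostic (backward) inference via Bayes' rule:
# probability the organization is poor given that an error was observed
p_poor_given_error = p_org_poor * p_err_poor / p_error

print(round(p_error, 4), round(p_poor_given_error, 3))
```

A real HRA network would have many more nodes (training, procedures, supervision), but every query reduces to the same forward/backward probability propagation.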

  4. Structural reliability analysis based on the cokriging technique

    International Nuclear Information System (INIS)

    Zhao Wei; Wang Wei; Dai Hongzhe; Xue Guofeng

    2010-01-01

    Approximation methods are widely used in structural reliability analysis because they are simple to create and provide explicit functional relationships between the responses and variables instead of the implicit limit state function. Recently, the kriging method, a semi-parametric interpolation technique applicable to deterministic optimization and structural reliability, has gained popularity. However, to fully exploit the kriging method, especially in high-dimensional problems, a large number of sample points must be generated to fill the design space, which can be very expensive and even impractical in practical engineering analysis. Therefore, in this paper a new method, cokriging, which is an extension of kriging, is proposed for calculating structural reliability. Cokriging approximation incorporates secondary information such as the values of the gradients of the function being approximated. This paper explores the use of the cokriging method for structural reliability problems by comparing it with the kriging method on several numerical examples. The results indicate that the cokriging procedure described in this work generates approximation models that improve the accuracy and efficiency of structural reliability analysis and is a viable alternative to kriging.
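As a rough illustration of the surrogate idea behind kriging-type methods (not the paper's cokriging, which would also exploit gradient data), one can fit a simple RBF interpolant to a few limit-state evaluations and run Monte Carlo on the cheap surrogate. The limit state, training grid, and input distribution below are all invented:

```python
import numpy as np
rng = np.random.default_rng(1)

def g(x):                       # toy limit state: failure when g < 0
    return 3.0 - x

x_train = np.linspace(-1.0, 7.0, 9)   # a handful of expensive evaluations
y_train = g(x_train)

def rbf(a, b, ell=2.0):         # squared-exponential kernel matrix
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def g_hat(x):                   # surrogate prediction
    return rbf(np.atleast_1d(x), x_train) @ alpha

xs = rng.normal(2.0, 1.0, 100_000)          # assumed input distribution
pf_surrogate = np.mean(g_hat(xs) < 0.0)     # MC on the surrogate
pf_exact = np.mean(g(xs) < 0.0)             # reference, cheap only for this toy
print(pf_surrogate, pf_exact)
```

The point of kriging/cokriging is precisely that `pf_surrogate` tracks `pf_exact` while the true limit state is evaluated only at the training points.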

  5. Assessment of modern methods of human factor reliability analysis in PSA studies

    International Nuclear Information System (INIS)

    Holy, J.

    2001-12-01

    The report is structured as follows: Classical terms and objects (Probabilistic safety assessment as a framework for human reliability assessment; Human failure within the PSA model; Basic types of operator failure modelled in a PSA study and analyzed by HRA methods; Qualitative analysis of human reliability; Quantitative analysis of human reliability; Process of analysis of nuclear reactor operator reliability in a PSA study); New terms and objects (Analysis of dependences; Errors of omission; Errors of commission; Error forcing context); and Overview and brief assessment of human reliability analysis methods (Basic characteristics of the methods; Assets and drawbacks of the use of each HRA method; History and prospects of the use of the methods). (P.A.)

  6. Reliability analysis of Angra I safety systems

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Soto, J.B.; Maciel, C.C.; Gibelli, S.M.O.; Fleming, P.V.; Arrieta, L.A.

    1980-07-01

    An extensive reliability analysis of some of the safety systems of Angra I is presented. The fault tree technique, which has been successfully used in most reliability studies of nuclear safety systems performed to date, is employed. Results of a quantitative determination of the unavailability of the accumulator and the containment spray injection systems are presented. These results are also compared to those reported in WASH-1400. (E.G.) [pt
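The fault-tree quantification step can be sketched with the first-order (rare-event) approximation over minimal cut sets; the component names and unavailabilities below are invented for illustration, not Angra I data:

```python
# Assumed per-component unavailabilities (probability of being failed on demand)
q = {"pump_a": 3e-3, "pump_b": 3e-3, "valve": 1e-4, "power": 5e-5}

cut_sets = [("pump_a", "pump_b"),   # both redundant pumps fail
            ("valve",),             # single valve failure
            ("power",)]             # loss of power supply

def unavailability(cut_sets, q):
    total = 0.0
    for cs in cut_sets:
        prod = 1.0
        for comp in cs:
            prod *= q[comp]         # cut set fails only if all its events occur
        total += prod               # rare-event (first-order) approximation
    return total

print(f"{unavailability(cut_sets, q):.2e}")
```

Summing cut-set probabilities slightly overestimates the exact inclusion-exclusion result, which is why it is a conservative standard in fault tree codes.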

  7. Reliability analysis of RC containment structures under combined loads

    International Nuclear Information System (INIS)

    Hwang, H.; Reich, M.; Kagami, S.

    1984-01-01

    This paper discusses a reliability analysis method and load combination design criteria for reinforced concrete containment structures under combined loads. The probability based reliability analysis method is briefly described. For load combination design criteria, derivations of the load factors for accidental pressure due to a design basis accident and safe shutdown earthquake (SSE) for three target limit state probabilities are presented
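A minimal example of the kind of probability-based limit state check underlying such load combination criteria: for a linear limit state with independent normal variables, the Cornell reliability index gives the limit state probability directly. The moments below are illustrative assumptions, not values from the paper:

```python
import math

# g = R - (D + P): failure when resistance R is below dead load D plus
# accident pressure P; all assumed normal and independent.
m_R, s_R = 12.0, 1.2          # resistance mean / std
m_D, s_D = 4.0, 0.4           # dead-load mean / std
m_P, s_P = 3.0, 0.9           # accident-pressure mean / std

m_g = m_R - (m_D + m_P)
s_g = math.sqrt(s_R**2 + s_D**2 + s_P**2)
beta = m_g / s_g                            # Cornell reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)

print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```

Load factors in design criteria are calibrated so that factored combinations keep `pf` below a target limit state probability.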

  8. IEEE guide for the analysis of human reliability

    International Nuclear Information System (INIS)

    Dougherty, E.M. Jr.

    1987-01-01

    The Institute of Electrical and Electronics Engineers (IEEE) working group 7.4 of the Human Factors and Control Facilities Subcommittee of the Nuclear Power Engineering Committee (NPEC) has released its fifth draft of a Guide for General Principles of Human Action Reliability Analysis for Nuclear Power Generating Stations, for approval by NPEC. A guide is the least mandating in the IEEE hierarchy of standards. The purpose is to enhance the performance of a human reliability analysis (HRA) as part of a probabilistic risk assessment (PRA), to assure reproducible results, and to standardize documentation. The guide does not recommend or even discuss specific techniques, which are too rapidly evolving today. Considerable maturation in the analysis of human reliability in a PRA context has taken place in recent years. The IEEE guide on this subject is an initial step toward bringing HRA out of the research and development arena into the toolbox of standard engineering practices.

  9. A parallel solution to the cutting stock problem for a cluster of workstations

    Energy Technology Data Exchange (ETDEWEB)

    Nicklas, L.D.; Atkins, R.W.; Setia, S.V.; Wang, P.Y. [George Mason Univ., Fairfax, VA (United States)

    1996-12-31

    This paper describes the design and implementation of a solution to the constrained 2-D cutting stock problem on a cluster of workstations. The constrained 2-D cutting stock problem is an irregular problem with a dynamically modified global data set and irregular amounts and patterns of communication. A replicated data structure is used for the parallel solution since the ratio of reads to writes is known to be large. Mutual exclusion and consistency are maintained using a token-based lazy consistency mechanism, and a randomized protocol for dynamically balancing the distributed work queue is employed. Speedups are reported for three benchmark problems executed on a cluster of workstations interconnected by a 10 Mbps Ethernet.

  10. Discrete event simulation versus conventional system reliability analysis approaches

    DEFF Research Database (Denmark)

    Kozine, Igor

    2010-01-01

    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...... and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author’s experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...

  11. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that allows to reproduce a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output might be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods, and to propose more efficient original methods. A bibliographical step on sensitivity analysis on one hand and on the estimation of small failure probabilities on the other hand is first proposed. This step raises the need to develop appropriate techniques. Two variables ranking methods are then explored. The first one proposes to make use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input original density and the density given the subset reached. A more general and original methodology reflecting the impact of the input density modification on the failure probability is then explored. The proposed methods are then applied on the CWNR case, which motivates this thesis. (author)

  12. A bench-top automated workstation for nucleic acid isolation from clinical sample types.

    Science.gov (United States)

    Thakore, Nitu; Garber, Steve; Bueno, Arial; Qu, Peter; Norville, Ryan; Villanueva, Michael; Chandler, Darrell P; Holmberg, Rebecca; Cooney, Christopher G

    2018-04-18

    Systems that automate extraction of nucleic acid from cells or viruses in complex clinical matrices have tremendous value even in the absence of an integrated downstream detector. We describe our bench-top automated workstation that integrates our previously-reported extraction method - TruTip - with our newly-developed mechanical lysis method. This is the first report of this method for homogenizing viscous and heterogeneous samples and lysing difficult-to-disrupt cells using "MagVor": a rotating magnet that rotates a miniature stir disk amidst glass beads confined inside of a disposable tube. Using this system, we demonstrate automated nucleic acid extraction from methicillin-resistant Staphylococcus aureus (MRSA) in nasopharyngeal aspirate (NPA), influenza A in nasopharyngeal swabs (NPS), human genomic DNA from whole blood, and Mycobacterium tuberculosis in NPA. The automated workstation yields nucleic acid with comparable extraction efficiency to manual protocols, which include commercially-available Qiagen spin column kits, across each of these sample types. This work expands the scope of applications beyond previous reports of TruTip to include difficult-to-disrupt cell types and automates the process, including a method for removal of organics, inside a compact bench-top workstation. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples) and more detailed definitions of some of the terms. 42 refs.
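Since the ASEP procedure derives from NUREG/CR-1278 (THERP), one concrete and widely quoted piece of that handbook is the set of dependence equations for the conditional human error probability (HEP) of a task given failure on the preceding task. The forms below are quoted from memory and should be verified against the Handbook before use:

```python
def conditional_hep(n, dependence):
    """n: basic HEP of the task; dependence: THERP dependence level."""
    return {
        "zero":     n,                    # tasks fully independent
        "low":      (1 + 19 * n) / 20,
        "moderate": (1 + 6 * n) / 7,
        "high":     (1 + n) / 2,
        "complete": 1.0,                  # failure on A implies failure on B
    }[dependence]

n = 0.01   # example basic HEP (illustrative value)
for level in ("zero", "low", "moderate", "high", "complete"):
    print(f"{level:9s} -> {conditional_hep(n, level):.4f}")
```

The stronger the assessed dependence between successive operator actions, the closer the conditional HEP is pulled toward 1.0, which is why dependence analysis dominates many post-accident HRA quantifications.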

  14. ESCRIME: testing bench for advanced operator workstations in future plants

    International Nuclear Information System (INIS)

    Poujol, A.; Papin, B.

    1994-01-01

    The problem of optimal task allocation between man and computer for the operation of nuclear power plants is of major concern in the design of future plants. As the increased level of automation modifies the tasks actually devoted to the operator in the control room, it is very important to anticipate these consequences at the plant design stage. The improvement of man-machine cooperation is expected to play a major role in minimizing the impact of human errors on plant safety. The CEA has launched a research program on the evolution of plant operation in order to optimize the efficiency of human/computer systems for better safety. The objective of this program is to evaluate different modalities of man-machine task sharing in a representative context. It relies strongly upon the development of a specific testing facility, the ESCRIME work bench, which is presented in this paper. It consists of an EDF 1300 MWe PWR plant simulator connected to an operator workstation. The plant simulator models, at a significant level of detail, the instrumentation and control of the plant and the main connected circuits. The operator interface is based on the generalized use of interactive graphic displays and is intended to be consistent with the tasks to be performed by the operator. The functional architecture of the workstation is modular, so that different cooperation mechanisms can be implemented within the same framework. It is based on a thorough analysis and structuring of plant control tasks, in normal as well as accident situations. The software architecture design follows the distributed artificial intelligence approach: cognitive agents cooperate in order to operate the process. The paper presents the basic principles and functional architecture of the test bed and describes the steps and present status of the program. (author)

  15. Feedwater heater performance evaluation using the heat exchanger workstation

    International Nuclear Information System (INIS)

    Ranganathan, K.M.; Singh, G.P.; Tsou, J.L.

    1995-01-01

    A Heat Exchanger Workstation (HEW) has been developed to monitor the condition of heat exchange equipment in power plants. HEW enables engineers to analyze thermal performance and failure events for power plant feedwater heaters. The software provides tools for heat balance calculation and performance analysis. It also contains an expert system that enables performance enhancement. The Operation and Maintenance (O&M) reference module on CD-ROM for HEW will be available by the end of 1995. Future developments of HEW would result in a Condenser Expert System (CONES) and a Balance of Plant Expert System (BOPES). HEW consists of five tightly integrated applications: a Database system for heat exchanger data storage, a Diagrammer system for creating plant heat exchanger schematics and data display, a Performance Analyst system for analyzing and predicting heat exchanger performance, a Performance Advisor expert system for expertise on improving heat exchanger performance, and a Water Calculator system for computing properties of steam and water. In this paper, an analysis of a feedwater heater which has been off-line is used to demonstrate how HEW can analyze the performance of the feedwater heater train and provide an economic justification for either replacing or repairing the feedwater heater.

  16. Reliability and risk analysis methods research plan

    International Nuclear Information System (INIS)

    1984-10-01

    This document presents a plan for reliability and risk analysis methods research to be performed mainly by the Reactor Risk Branch (RRB), Division of Risk Analysis and Operations (DRAO), Office of Nuclear Regulatory Research. It includes those activities of other DRAO branches which are very closely related to those of the RRB. Related or interfacing programs of other divisions, offices and organizations are merely indicated. The primary use of this document is envisioned as an NRC working document, covering about a 3-year period, to foster better coordination in reliability and risk analysis methods development between the offices of Nuclear Regulatory Research and Nuclear Reactor Regulation. It will also serve as an information source for contractors and others to more clearly understand the objectives, needs, programmatic activities and interfaces together with the overall logical structure of the program

  17. Representative Sampling for reliable data analysis

    DEFF Research Database (Denmark)

    Petersen, Lars; Esbensen, Kim Harry

    2005-01-01

    regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...

  18. A double-loop adaptive sampling approach for sensitivity-free dynamic reliability analysis

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2015-01-01

    Dynamic reliability measures the reliability of an engineered system considering time-variant operating conditions and component deterioration. Due to high computational costs, conducting dynamic reliability analysis at an early system design stage remains challenging. This paper presents a confidence-based meta-modeling approach, referred to as double-loop adaptive sampling (DLAS), for efficient sensitivity-free dynamic reliability analysis. The DLAS builds a Gaussian process (GP) model sequentially to approximate extreme system responses over time, so that Monte Carlo simulation (MCS) can be employed directly to estimate dynamic reliability. A generic confidence measure is developed to evaluate the accuracy of dynamic reliability estimation with the MCS approach based on the developed GP models. A double-loop adaptive sampling scheme is developed to efficiently update the GP model in a sequential manner, by considering system input variables and time concurrently in two sampling loops. The model updating process using the developed sampling scheme can be terminated once the user-defined confidence target is satisfied. The developed DLAS approach eliminates the computationally expensive sensitivity analysis process, thus substantially improving the efficiency of dynamic reliability analysis. Three case studies are used to demonstrate the efficacy of DLAS for dynamic reliability analysis. - Highlights: • Developed a novel adaptive sampling approach for dynamic reliability analysis. • Developed a new metric to quantify the accuracy of dynamic reliability estimation. • Developed a new sequential sampling scheme to efficiently update surrogate models. • Three case studies were used to demonstrate the efficacy of the new approach. • Case study results showed substantially enhanced efficiency with high accuracy.
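What "dynamic reliability" means here can be shown with a plain Monte Carlo sketch (no surrogate, no adaptive sampling): a capacity that deteriorates in time faces a fresh random load each year, and we estimate the probability of failure anywhere within the mission. All numbers and the degradation law are assumptions:

```python
import random
random.seed(42)

def mission_fails(years=20):
    r0 = random.gauss(10.0, 0.5)          # initial capacity (assumed)
    for t in range(years):
        r_t = r0 - 0.08 * t               # assumed linear deterioration
        s_t = random.gauss(6.0, 1.0)      # yearly extreme load (assumed)
        if r_t - s_t < 0.0:               # first-passage failure
            return True
    return False

n = 20_000
pf_dynamic = sum(mission_fails() for _ in range(n)) / n
print(f"20-year failure probability ~ {pf_dynamic:.3f}")
```

DLAS-type methods exist because, for an expensive simulator, one cannot afford the nested time loop inside a large MCS; the GP surrogate replaces the inner extreme-response evaluation.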

  19. Multi-objective optimization algorithms for mixed model assembly line balancing problem with parallel workstations

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2016-12-01

    Full Text Available This paper deals with the mixed model assembly line (MMAL) balancing problem of type-I. In MMALs, several products are made on an assembly line while the similarity of these products is very high. As a result, it is possible to assemble several types of products simultaneously without any additional setup times. The problem has some particular features, such as parallel workstations and precedence constraints, in dynamic periods in which each period also affects the next. The research intends to reduce the number of workstations and maximize the workload smoothness between workstations. Dynamic periods are used to determine all variables in different periods to achieve efficient solutions. A non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO) are used to solve the problem. The proposed model is validated with GAMS software for small-size problems and the performance of the foregoing algorithms is compared based on several comparison metrics. The NSGA-II outperforms MOPSO with respect to some of the metrics used in this paper, but on other metrics MOPSO is better than NSGA-II. Finally, conclusions and future research directions are provided.
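The type-I objective (minimize the number of workstations for a given cycle time) can be made concrete with a simple single-model heuristic, far simpler than the paper's NSGA-II/MOPSO: open a new workstation whenever no precedence-feasible task fits in the remaining cycle time. Task times and precedence below are invented:

```python
tasks = {"a": 4, "b": 3, "c": 5, "d": 2, "e": 4}     # assumed task times
precedence = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}, "e": {"d"}}
cycle_time = 8

stations, current, load, done = [], [], 0, set()
remaining = dict(tasks)
while remaining:
    # tasks whose predecessors are done and which fit in this station
    ready = [t for t in remaining
             if precedence.get(t, set()) <= done and load + tasks[t] <= cycle_time]
    if not ready:
        stations.append(current)          # close the station, open a new one
        current, load = [], 0
        continue
    t = max(ready, key=lambda x: tasks[x])   # largest-candidate rule
    current.append(t)
    load += tasks[t]
    done.add(t)
    del remaining[t]
stations.append(current)
print(stations)
```

A metaheuristic such as NSGA-II searches over many such assignments at once, trading off station count against workload smoothness instead of greedily fixing one solution.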

  20. Videoconferencing using workstations in the ATLAS collaboration

    International Nuclear Information System (INIS)

    Onions, C.; Blokzijl, K. Bos

    1994-01-01

    The ATLAS collaboration consists of about 1000 physicists from close to 100 institutes around the world, and this number is expected to grow over the coming years. The authors realized that they needed a way to allow people to participate in meetings held at CERN without having to travel, and hence they started a pilot project in July 1993 to look into this. Colleagues from Nikhef already had experience with international network meetings (e.g. RIPE) using standard UNIX workstations and public-domain software tools over the MBONE, hence they investigated this as a first priority.

  1. Visual observation of digitalised signals by workstations

    International Nuclear Information System (INIS)

    Navratil, J.; Akiyama, A.; Mimashi, T.

    1994-01-01

    The idea of having on-line information about the behavior of the betatron tune, as a first step toward future automatic control of the TRISTAN accelerator tune, appeared near the end of 1991. At the same time, other suggestions concerning a rejuvenation of the existing control system arose, and therefore the newly created project "System for Monitoring Betatron Tune" (SMBT) started with several goals: - to obtain new on-line information about the beam behavior during the acceleration time, - to test a possible way of extending and replacing the existing control system of TRISTAN, - to gain experience with workstation and X Window software

  2. Computer modeling and design of diagnostic workstations and radiology reading rooms

    Science.gov (United States)

    Ratib, Osman M.; Amato, Carlos L.; Balbona, Joseph A.; Boots, Kevin; Valentino, Daniel J.

    2000-05-01

    We used 3D modeling techniques to design and evaluate the ergonomics of diagnostic workstations and the radiology reading room in the planning phase of building a new hospital at UCLA. Given serious space limitations, the challenge was to provide a more optimal working environment for radiologists in a crowded and busy environment. Particular attention was given to flexibility, lighting conditions and noise reduction in rooms shared by multiple users performing diagnostic tasks as well as regular clinical conferences. Re-engineering workspace ergonomics relies on the integration of new technologies, custom-designed cabinets, indirect lighting, sound-absorbent partitioning and the geometric arrangement of workstations to allow better privacy while optimizing space occupation. Innovations included adjustable flat monitors, integration of videoconferencing and voice recognition, a control monitor and a retractable keyboard for optimal space utilization. An overhead compartment protecting the monitors from ambient light is also used as an accessory lightbox and rear-view projection screen for conferences.

  3. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Berzonskis, Arvydas; Sørensen, John Dalsgaard

    2016-01-01

    in the volume of the casted ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....

  4. Reliability analysis of cluster-based ad-hoc networks

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2008-01-01

    The mobile ad-hoc wireless network (MAWN) is a new and emerging network scheme that is being employed in a variety of applications. The MAWN varies from traditional networks because it is a self-forming and dynamic network. The MAWN is free of infrastructure and, as such, only the mobile nodes comprise the network. Pairs of nodes communicate either directly or through other nodes. To do so, each node acts, in turn, as a source, destination, and relay of messages. The virtue of a MAWN is the flexibility this provides; however, the challenge for reliability analyses is also brought about by this unique feature. The variability and volatility of the MAWN configuration makes typical reliability methods (e.g. reliability block diagram) inappropriate because no single structure or configuration represents all manifestations of a MAWN. For this reason, new methods are being developed to analyze the reliability of this new networking technology. New published methods adapt to this feature by treating the configuration probabilistically or by inclusion of embedded mobility models. This paper joins both methods together and expands upon these works by modifying the problem formulation to address the reliability analysis of a cluster-based MAWN. The cluster-based MAWN is deployed in applications with constraints on networking resources such as bandwidth and energy. This paper presents the problem's formulation, a discussion of applicable reliability metrics for the MAWN, and illustration of a Monte Carlo simulation method through the analysis of several example networks
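The Monte Carlo simulation approach mentioned above can be sketched as two-terminal reliability estimation: sample which wireless links are up in each trial, then test source-destination connectivity with a graph search. The topology and link probabilities below are invented for illustration:

```python
import random
random.seed(7)

# link: probability that the link is operational
links = {(0, 1): 0.9, (1, 2): 0.9, (0, 2): 0.8,
         (2, 3): 0.95, (1, 3): 0.7}

def connected(up, src=0, dst=3, n_nodes=4):
    # depth-first search over the sampled set of working links
    adj = {v: [] for v in range(n_nodes)}
    for (a, b) in up:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {src}, [src]
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

trials = 50_000
ok = sum(connected([e for e, p in links.items() if random.random() < p])
         for _ in range(trials))
reliability = ok / trials
print(f"two-terminal reliability ~ {reliability:.3f}")
```

A cluster-based MAWN analysis layers a mobility/configuration model on top of this, resampling the topology itself between trials rather than keeping it fixed.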

  5. A Review: Passive System Reliability Analysis – Accomplishments and Unresolved Issues

    Energy Technology Data Exchange (ETDEWEB)

    Nayak, Arun Kumar, E-mail: arunths@barc.gov.in [Reactor Engineering Division, Reactor Design and Development Group, Bhabha Atomic Research Centre, Mumbai (India); Chandrakar, Amit [Homi Bhabha National Institute, Mumbai (India); Vinod, Gopika [Reactor Safety Division, Reactor Design and Development Group, Bhabha Atomic Research Centre, Mumbai (India)

    2014-10-10

    Reliability assessment of passive safety systems is one of the important issues, since the safety of advanced nuclear reactors relies on several passive features. In this context, a few methodologies such as reliability evaluation of passive safety system (REPAS), reliability methods for passive safety functions (RMPS), and analysis of passive systems reliability (APSRA) have been developed in the past. These methodologies have been used to assess the reliability of various passive safety systems. While these methodologies have certain features in common, they differ in their treatment of certain issues; for example, the treatment of model uncertainties and the deviation of geometric and process parameters from their nominal values. This paper presents the state of the art on passive system reliability assessment methodologies, the accomplishments, and the remaining issues. In this review, three critical issues pertaining to passive systems performance and reliability have been identified. The first issue is the applicability of best estimate codes and model uncertainty. Best-estimate-code-based phenomenological simulations of natural convection passive systems can carry significant uncertainties, and these uncertainties must be incorporated in an appropriate manner in the performance and reliability analysis of such systems. The second issue is the treatment of dynamic failure characteristics of components of passive systems. The REPAS, RMPS, and APSRA methodologies do not consider dynamic failures of components or processes, which may have a strong influence on the failure of passive systems. The influence of dynamic failure characteristics of components on system failure probability is presented with the help of a dynamic reliability methodology based on Monte Carlo simulation. The analysis of a benchmark problem of a hold-up tank shows the error in failure probability estimation when the dynamism of components is not considered. It is thus suggested that dynamic reliability methodologies must be

  6. Analysis and assessment of water treatment plant reliability

    Directory of Open Access Journals (Sweden)

    Szpak Dawid

    2017-03-01

    Full Text Available The subject of this publication is the analysis and assessment of the reliability of a surface water treatment plant (WTP). In the study, the one-parameter method of reliability assessment was used. Based on the flow sheet obtained from the water company, the reliability scheme of the analysed WTP was prepared. On the basis of the daily WTP work reports, the availability index Kg was determined for the individual elements included in the WTP. Then, based on the developed reliability scheme showing the interrelationships between elements, the availability index Kg for the whole WTP was determined. The obtained value of the availability index Kg was compared with the criteria values.
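The one-parameter method can be sketched as follows: compute the availability index Kg of each element from its uptime data, then combine the indices through the reliability scheme (series stages, with one redundant pair in parallel). The stage names and figures are illustrative, not the plant's data:

```python
def kg(mtbf, mttr):
    # stationary availability of a single element from mean time between
    # failures and mean time to repair
    return mtbf / (mtbf + mttr)

k_intake   = kg(4000.0, 40.0)    # assumed MTBF/MTTR in hours
k_coag     = kg(2000.0, 60.0)
k_filter_a = kg(1500.0, 100.0)
k_filter_b = kg(1500.0, 100.0)
k_disinf   = kg(5000.0, 25.0)

# two filters in hot parallel redundancy: unavailabilities multiply
k_filters = 1.0 - (1.0 - k_filter_a) * (1.0 - k_filter_b)

# series structure: availabilities multiply
k_wtp = k_intake * k_coag * k_filters * k_disinf
print(f"Kg(WTP) = {k_wtp:.4f}")
```

The resulting plant-level Kg is then compared against the criteria values, exactly as the abstract describes.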

  7. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant. Human behavior has essentially a time-dependent nature. The details of thinking and decision making processes are important for detailed analysis of human reliability. They have, however, not been well considered by the conventional methods of human reliability analysis. The present paper describes the models for the time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account and two-operators models are also presented

  8. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    International Nuclear Information System (INIS)

    Bell, B.J.; Swain, A.D.

    1983-05-01

    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study

  9. Root cause analysis in support of reliability enhancement of engineering components

    International Nuclear Information System (INIS)

    Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.

    2014-01-01

    Reliability-based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimate of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is essential to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failures, with particular attention to identifying faults in component design, operation, surveillance, maintenance, training, procedures and policies that must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study complements the root cause analysis approach to event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

  10. DATMAN: A reliability data analysis program using Bayesian updating

    International Nuclear Information System (INIS)

    Becker, M.; Feltus, M.A.

    1996-01-01

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately
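The Bayesian updating that DATMAN automates can be illustrated with the standard conjugate pair for failure-rate data: a Gamma prior on the rate with Poisson-distributed failure counts. The prior parameters and the data below are invented for illustration; only the update rule itself is standard.

```python
def update_gamma_prior(alpha, beta, n_failures, exposure_hours):
    """Conjugate update: a Gamma(alpha, beta) prior on a failure rate,
    combined with Poisson count data (n failures in T hours), yields a
    Gamma(alpha + n, beta + T) posterior."""
    return alpha + n_failures, beta + exposure_hours

# Hypothetical component: prior mean 1e-3/h, then 3 failures in 5000 h.
a_post, b_post = update_gamma_prior(1.0, 1000.0, 3, 5000.0)
posterior_mean_rate = a_post / b_post  # (1 + 3) / (1000 + 5000)
```

As more failure data and operating hours accumulate, the same two-line update is simply applied again, which is exactly the workflow an RCM program needs.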

  11. Generalization of Posture Training to Computer Workstations in an Applied Setting

    Science.gov (United States)

    Sigurdsson, Sigurdur O.; Ring, Brandon M.; Needham, Mick; Boscoe, James H.; Silverman, Kenneth

    2011-01-01

    Improving employees' posture may decrease the risk of musculoskeletal disorders. The current paper is a systematic replication and extension of Sigurdsson and Austin (2008), who found that an intervention consisting of information, real-time feedback, and self-monitoring improved participant posture at mock workstations. In the current study,…

  12. Design and Development of an Integrated Workstation Automation Hub

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Andrew; Ghatikar, Girish; Sartor, Dale; Lanzisera, Steven

    2015-03-30

    Miscellaneous Electronic Loads (MELs) account for one third of all electricity consumption in U.S. commercial buildings, and drive significant energy use in India. Many MEL-specific plug-load devices are concentrated at workstations in offices. The use of intelligence and of integrated controls and communications at the workstation – an Office Automation Hub – offers the opportunity to improve both energy efficiency and occupant comfort, along with services for Smart Grid operations. Software and hardware solutions are available from a wide array of vendors for the different components, but an integrated system with interoperable communications is yet to be developed and deployed. In this study, we propose system- and component-level specifications for the Office Automation Hub, their functions, and a prioritized list for the design of a proof-of-concept system. Leveraging the strengths of both the U.S. and India technology sectors, this specification serves as a guide for researchers and industry in both countries to support the development, testing, and evaluation of a prototype product. Further evaluation of such integrated technologies for performance and cost is necessary to identify the potential to reduce energy consumption by MELs and to improve occupant comfort.

  13. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the
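The lumped-parameter idea behind such aggregate workstation models can be sketched with a classic queueing approximation. The formula below (a Kingman-style G/G/1 approximation) is a generic stand-in chosen for illustration, not necessarily the aggregate model of the paper, whose abstract is truncated above.

```python
def approx_cycle_time(ca2, ce2, utilization, mean_process_time):
    """Kingman-style G/G/1 approximation of mean cycle time at a
    workstation: expected queueing delay plus the process time itself.
    ca2 and ce2 are the squared coefficients of variation of the
    inter-arrival and effective process times."""
    queue_wait = ((ca2 + ce2) / 2.0) \
        * (utilization / (1.0 - utilization)) * mean_process_time
    return queue_wait + mean_process_time

# Sanity check: for ca2 = ce2 = 1 this reduces to the exact M/M/1 result
# mean_process_time / (1 - utilization).
ct = approx_cycle_time(1.0, 1.0, 0.5, 2.0)  # -> 4.0
```

A handful of such fitted parameters can stand in for a detailed simulation model, which is the attraction of the aggregate approach.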

  14. The development of a reliable amateur boxing performance analysis template.

    Science.gov (United States)

    Thomson, Edward; Lamb, Kevin; Nicholas, Ceri

    2013-01-01

    The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing, and to assess its reliability using analysts with varying experience of the sport and of performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and by calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%), regardless of whether exact agreement or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
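The agreement statistic used above is straightforward to make concrete: the share of paired observer counts whose difference falls within a tolerance band. The sketch below implements it; the sample punch counts are invented.

```python
def proportion_agreement(obs_a, obs_b, tolerance=0):
    """Percentage of paired counts whose absolute difference is within
    the tolerance (0 = exact agreement, 1 = the ±1 reference band)."""
    if len(obs_a) != len(obs_b):
        raise ValueError("observation lists must be the same length")
    hits = sum(1 for a, b in zip(obs_a, obs_b) if abs(a - b) <= tolerance)
    return 100.0 * hits / len(obs_a)

# Invented counts of one action type from two observers of the same bout.
analyst = [12, 8, 15, 4]
boxer   = [12, 9, 13, 4]
exact   = proportion_agreement(analyst, boxer)      # 50.0
within1 = proportion_agreement(analyst, boxer, 1)   # 75.0
```

Reporting both the exact and the ±1 figures, as the study does, separates genuine disagreement from off-by-one notation slips.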

  15. Instrument workstation for the EGSE of the Near Infrared Spectro-Photometer instrument (NISP) of the EUCLID mission

    Science.gov (United States)

    Trifoglio, M.; Gianotti, F.; Conforti, V.; Franceschi, E.; Stephen, J. B.; Bulgarelli, A.; Fioretti, V.; Maiorano, E.; Nicastro, L.; Valenziano, L.; Zoli, A.; Auricchio, N.; Balestra, A.; Bonino, D.; Bonoli, C.; Bortoletto, F.; Capobianco, V.; Chiarusi, T.; Corcione, L.; Debei, S.; De Rosa, A.; Dusini, S.; Fornari, F.; Giacomini, F.; Guizzo, G. P.; Ligori, S.; Margiotta, A.; Mauri, N.; Medinaceli, E.; Morgante, G.; Patrizii, L.; Sirignano, C.; Sirri, G.; Sortino, F.; Stanco, L.; Tenti, M.

    2016-07-01

    The NISP instrument on board the Euclid ESA mission will be developed and tested at different levels of integration using various test equipment which shall be designed and procured through a collaborative and coordinated effort. The NISP Instrument Workstation (NI-IWS) will be part of the EGSE configuration that will support the NISP AIV/AIT activities from the NISP Warm Electronics level up to the launch of Euclid. One workstation is required for the NISP EQM/AVM, and a second one for the NISP FM. Each workstation will follow the respective NISP model after delivery to ESA for Payload and Satellite AIV/AIT and launch. At these levels the NI-IWS shall be configured as part of the Payload EGSE, the System EGSE, and the Launch EGSE, respectively. After launch, the NI-IWS will also be re-used in the Euclid Ground Segment in order to support the Commissioning and Performance Verification (CPV) phase, and for troubleshooting purposes during the operational phase. The NI-IWS is mainly aimed at locally storing the NISP instrument data and metadata in a suitable format, at locally retrieving, processing and displaying the stored data for on-line instrument assessment, and at remotely retrieving the stored data for off-line analysis on other computers. We describe the design of the IWS software that will create a suitable interface to the external systems in each of the various configurations envisaged at the different levels, and provide the capabilities required to monitor and verify the instrument functionalities and performance throughout all phases of the NISP lifetime.

  16. EPRI root cause advisory workstation 'ERCAWS'

    International Nuclear Information System (INIS)

    Singh, A.; Chiu, C.; Hackman, R.B.

    1993-01-01

    EPRI and its contractor FPI International are developing Personal Computer (PC), Microsoft Windows based software to assist power plant engineers and maintenance personnel in diagnosing and correcting the root causes of power plant equipment failures. The EPRI Root Cause Advisory Workstation (ERCAWS) is easy to use and able to handle knowledge bases and diagnostic tools for an unlimited number of equipment types. Knowledge base data are drawn from power industry experience and root cause analyses from many sources: utilities, EPRI, the US government, FPI, and international sources. The approach used in the knowledge-base-handling portion of the software is case-study oriented, with the engineer selecting the equipment type and identifying symptoms using a combination of text, photographs, and animation displaying the dynamic physical phenomena involved. Root causes, means of confirmation, and corrective actions are then suggested in a simple, user-friendly format. The first knowledge base being released with ERCAWS is the Valve Diagnostic Advisor module, covering six common valve types and some motor-operator and air-operator items. More modules are under development, with the Heat Exchanger, Bolt, and Piping modules currently in beta testing. A wide variety of diagnostic tools are easily incorporated into ERCAWS and accessed through the main screen interface. ERCAWS is designed to fulfill the industry's need for user-friendly tools for performing root cause analysis of power plant equipment failures, and for training engineering, operations and maintenance personnel on how components can fail and how to reduce failure rates or prevent failures from occurring. In addition, ERCAWS serves as a vehicle for capturing lessons learned from industry-wide experience. (author)

  17. Reliability Worth Analysis of Distribution Systems Using Cascade Correlation Neural Networks

    DEFF Research Database (Denmark)

    Heidari, Alireza; Agelidis, Vassilios; Pou, Josep

    2018-01-01

    Reliability worth analysis is of great importance in the area of distribution network planning and operation. The precision of reliability worth can be affected greatly by the customer interruption cost model used, and the choice of cost model can change system and load point reliability indices. In this study, a cascade correlation neural network is adopted to further develop two cost models comprising a probabilistic distribution model and an average or aggregate model. A contingency-based analytical technique is adopted to conduct the reliability worth analysis. Furthermore, the possible effects...
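At each load point, a contingency-based worth calculation reduces to summing expected interruption costs over contingencies. The sketch below uses a flat aggregate cost rate for simplicity; a probabilistic model would replace it with a duration-dependent customer damage function. All numbers are illustrative.

```python
def expected_interruption_cost(load_kw, contingencies, cost_per_kwh=5.0):
    """Annual expected interruption cost (ECOST-style) at one load point.
    contingencies: iterable of (failures_per_year, outage_hours) pairs.
    A flat cost-per-kWh aggregate model is assumed here."""
    return sum(freq * load_kw * hours * cost_per_kwh
               for freq, hours in contingencies)

# Illustrative load point: 100 kW, one contingency of 0.2 occ/yr, 4 h each.
ecost = expected_interruption_cost(100.0, [(0.2, 4.0)])  # 400.0 per year
```

Swapping the flat rate for a learned cost function is precisely where a neural-network cost model would plug in.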

  18. Space Mission Human Reliability Analysis (HRA) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The purpose of this project is to extend current ground-based Human Reliability Analysis (HRA) techniques to a long-duration, space-based tool to more effectively...

  19. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it holds no locks on data and does not require callbacks to clients. It is fault tolerant, yet performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  20. Integrated model for line balancing with workstation inventory management

    OpenAIRE

    Dilip Roy; Debdip khan

    2010-01-01

    In this paper, we address the optimization of an integrated line balancing process with workstation inventory management, studying the interconnection between line balancing and its conversion process. Almost every moderate-to-large manufacturing industry depends on a long, integrated supply chain consisting of inbound logistics, a conversion process and outbound logistics. In this sense the approach addresses a very general problem of integrated line balancing....

  1. A new approach for reliability analysis with time-variant performance characteristics

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2013-01-01

    Reliability represents the safety level in industry practice and may vary due to time-variant operating conditions and component deterioration throughout a product's life-cycle. Thus, the capability to perform time-variant reliability analysis is of vital importance in practical engineering applications. This paper presents a new approach, referred to as nested extreme response surface (NERS), that can efficiently tackle the time-dependency issue in time-variant reliability analysis and enables such problems to be solved by integrating easily with advanced time-independent tools. The key of the NERS approach is to build a nested response surface of the time corresponding to the extreme value of the limit state function by employing a Kriging model. To obtain the data for the Kriging model, the efficient global optimization technique is integrated with NERS to extract the extreme time responses of the limit state function for any given system input. An adaptive response prediction and model maturation mechanism is developed based on mean square error (MSE) to concurrently improve the accuracy and computational efficiency of the proposed approach. With the nested response surface of time, the time-variant reliability analysis can be converted into a time-independent one, and existing advanced reliability analysis methods can be used. Three case studies are used to demonstrate the efficiency and accuracy of the NERS approach
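The conversion from a time-variant to a time-independent problem can be made concrete with a brute-force version of the extreme-response idea: for each random input, take the worst limit-state value over a time grid and count failures. NERS replaces the inner grid sweep with a Kriging surrogate plus global optimization; the limit state and distribution below are invented for illustration.

```python
import random

def extreme_response_pf(g, t_grid, sample_x, n_samples=20_000, seed=7):
    """Failure probability over a mission: a sample fails if the minimum
    of the limit state g(x, t) over the time grid drops below zero, i.e.
    the extreme response decides failure, not any single time instant."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n_samples):
        x = sample_x(rng)
        if min(g(x, t) for t in t_grid) < 0.0:
            fails += 1
    return fails / n_samples

# Invented example: degrading margin g = x - 0.1 t with x ~ N(3, 1).
# The extreme sits at t = 10, so pf = P(x < 1) = Phi(-2), about 0.023.
pf = extreme_response_pf(lambda x, t: x - 0.1 * t,
                         list(range(11)),
                         lambda rng: rng.gauss(3.0, 1.0))
```

The expensive part is exactly the inner `min` over time, which is why surrogate-based methods like NERS pay off.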

  2. [From data entry to data presentation at a clinical workstation--experiences with Anesthesia Information Management Systems (AIMS)].

    Science.gov (United States)

    Benson, M; Junger, A; Quinzio, L; Michel, A; Sciuk, G; Fuchs, C; Marquardt, K; Hempelmannn, G

    2000-09-01

    Anesthesia Information Management Systems (AIMS) are required to supply large amounts of data for various purposes such as performance recording, quality assurance, training, operating room management and research. It was our objective to establish an AIMS that enables every member of the department to run queries independently at his/her workstation and that, at the same time, presents data in a suitable manner in order to increase the transfer of information to the clinical workstation. Apple Macintosh clients (Apple Computer, Inc., Cupertino, California) and the file and database servers were installed into the already partially existing hospital network. The most important components installed on each computer are the anesthesia documentation software NarkoData (ProLogic GmbH, Erkrath), HIS client software and an HTML browser. More than 250 queries for easy evaluation were formulated with the software Voyant (Brossco Systems, Espoo, Finland); together with the documentation they form the evaluation module of the AIMS. Today, more than 20,000 anesthesia procedures are recorded each year at 112 decentralised workstations with the AIMS. In 1998, 90.8% of the 20,383 performed anesthetic procedures were recorded online and 9.2% were entered postoperatively into the system. With corresponding user access it is possible to retrieve all available patient data (diagnoses, laboratory results) at each anesthesiological workstation via the HIS at any time. The available information includes previous anesthesia records, statistics and all data available from the hospital's intranet. This additional information is a great advantage compared with previous working conditions. The implementation of an AIMS greatly enhanced not only the quantity but also the quality of documentation, and increased the flow of information at the anesthesia workstation. The circuit between data entry and the presentation and evaluation of data, statistics and results directly

  3. Methodology for reliability allocation based on fault tree analysis and dualistic contrast

    Institute of Scientific and Technical Information of China (English)

    TONG Lili; CAO Xuewu

    2008-01-01

    Reliability allocation is a difficult multi-objective optimization problem. This paper presents a methodology for reliability allocation that can be applied to determine the reliability characteristics of reactor systems or subsystems. The dualistic contrast, known as one of the most powerful tools for optimization problems, is applied to the reliability allocation model of a typical system in this article, and fault tree analysis, deemed to be one of the effective methods of reliability analysis, is also adopted. Thus a failure rate allocation model based on fault tree analysis and dualistic contrast is achieved. An application to the emergency diesel generator in a nuclear power plant is given to illustrate the proposed method.
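As a baseline for comparison, the simplest failure-rate allocation for a series system (where subsystem rates add) apportions the system target in proportion to complexity weights. This weighting scheme is a generic textbook method shown for orientation, not the dualistic-contrast model of the paper.

```python
def allocate_failure_rates(system_target_rate, weights):
    """Apportion a system failure-rate goal among series subsystems in
    proportion to their weights; the allocated rates sum to the target
    because series-system failure rates are additive."""
    total = float(sum(weights))
    return [system_target_rate * w / total for w in weights]

# Hypothetical: a 1e-3/h system goal split over three subsystems
# weighted by relative complexity 1 : 2 : 2.
rates = allocate_failure_rates(1e-3, [1, 2, 2])  # [2e-4, 4e-4, 4e-4]
```

Optimization-based methods improve on this baseline by trading allocated reliability against cost or feasibility constraints per subsystem.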

  4. Reliability analysis of protection system of advanced pressurized water reactor - APR 1400

    International Nuclear Information System (INIS)

    Varde, P. V.; Choi, J. G.; Lee, D. Y.; Han, J. B.

    2003-04-01

    A reliability analysis was carried out for the protection system of the Korean Advanced Pressurized Water Reactor, APR 1400. The main focus of this study was the reliability analysis of the digital protection system; however, to give an integrated statement of overall protection reliability, an attempt has been made to include the shutdown devices and other related aspects based on the information available to date. A sensitivity analysis has been carried out for the critical components/functions in the system. Other aspects, such as importance analysis and human reliability for the critical human actions, form part of this work. The framework provided by this study and the results obtained show that this analysis has the potential to be utilized as part of a risk-informed approach for future design/regulatory applications

  5. How users organize electronic files on their workstations in the office environment: a preliminary study of personal information organization behaviour

    Directory of Open Access Journals (Sweden)

    Christopher S.G. Khoo

    2007-01-01

    Full Text Available This is an ongoing study of how people organize their computer files and folders on the hard disks of their office workstations. A questionnaire was used to collect information on the subjects, their work responsibilities and the characteristics of their workstations. Data on file and folder names and file structure were extracted from the hard disk using the computer program STG FolderPrint Plus, DOS commands and screen capture. A semi-structured interview collected information on subjects' strategies for naming and organizing files and folders, and for locating and retrieving files. The data were analysed mainly through qualitative analysis and content analysis. The subjects organized their folders in a variety of structures, from broad and shallow to narrow and deep hierarchies. One to three levels of folders is common. The labels for first-level folders tended to be task-based or project-based. Most subjects located files by browsing the folder structure, with searching used as a last resort. The most common types of folder names were document type, organizational function or structure, and miscellaneous or temporary. The frequency of folders of different types appears to be related to the type of occupation.

  6. Reliability analysis of diverse safety logic systems of fast breeder reactor

    International Nuclear Information System (INIS)

    Ravi Kumar, Bh.; Apte, P.R.; Srivani, L.; Ilango Sambasivan, S.; Swaminathan, P.

    2006-01-01

    The safety logic for a Fast Breeder Reactor (FBR) is designed to initiate safety action against design basis events. Based on the outputs of various processing circuits, the safety logic system drives the control rods of the shutdown system; it is therefore classified as a safety-critical system, and a reliability analysis has to be performed. This paper discusses the reliability analysis of the diverse safety logic systems of FBRs. To this end, a literature survey on safety-critical systems, the system reliability approach, and the standards to be followed, such as IEC 61508, are discussed in detail. Programmable-logic-device-based systems are described using Hardware Description Languages (HDLs), so this paper also discusses verification and validation for HDLs. Finally, a case study on the reliability analysis of the safety logic is discussed. (author)

  7. Reliability analysis of safety systems of nuclear power plant and utility experience with reliability safeguarding of systems during specified normal operation

    International Nuclear Information System (INIS)

    Balfanz, H.P.

    1989-01-01

    The paper gives an outline of the methods applied for the reliability analysis of safety systems in nuclear power plants. The main tasks are to check the system design for the detection of weak points, and to find possibilities for optimizing the strategies for inspection, inspection intervals and maintenance periods. Reliability safeguarding measures include the determination and verification of the boundary conditions of the analysis with regard to the reliability and maintenance parameters used in the analysis, and the analysis of data feedback reflecting the plant response during operation. (orig.) [de

  8. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    DEFF Research Database (Denmark)

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system. A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component based robustness analysis is performed based on the reliability indices of the remaining elements after the removal of selected critical elements. The robustness is expressed and evaluated by a robustness index. Next, the robustness is assessed using system reliability indices where the probabilistic failure model is modelled by a series system of parallel systems.
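One common probability-based robustness index (in the style of Baker, Schubert and Faber) compares direct and indirect risks; a value near 1 means the consequences of an initiating damage stay proportionate. This is a generic formulation given for orientation, not necessarily the exact index used in the paper.

```python
def robustness_index(direct_risk, indirect_risk):
    """Risk-based robustness index: I = R_dir / (R_dir + R_ind).
    I -> 1 when failure consequences are purely direct (robust system);
    I -> 0 when indirect, disproportionate consequences dominate."""
    return direct_risk / (direct_risk + indirect_risk)

# Illustrative: indirect (follow-on) risk three times the direct risk.
i_rob = robustness_index(1.0, 3.0)  # 0.25
```

The index condenses a full system-reliability model into a single comparable number, which is what makes it useful for ranking alternative designs.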

  9. Reliability analysis of prestressed concrete containment structures

    International Nuclear Information System (INIS)

    Jiang, J.; Zhao, Y.; Sun, J.

    1993-01-01

    The reliability analysis of prestressed concrete containment structures subjected to combinations of static and dynamic loads, with consideration of the uncertainties of structural and load parameters, is presented. Limit state probabilities for given parameters are calculated using the procedure developed at BNL, while those with consideration of parameter uncertainties are calculated by a fast integration for time-variant structural reliability. The limit state surface of the prestressed concrete containment is constructed directly, incorporating the prestress. The sensitivities of the Cholesky decomposition matrix and the natural vibration characteristics are calculated by simplified procedures. (author)

  10. Issues about home computer workstations and primary school children in Hong Kong: a pilot study.

    Science.gov (United States)

    Py Szeto, Grace; Tsui, Macy Mei Sze; Sze, Winky Wing Yu; Chan, Irene Sin Ting; Chung, Cyrus Chak Fai; Lee, Felix Wai Kit

    2014-01-01

    All around the world there is a rising trend of computer use among young children, especially at home; yet computer furniture is usually not designed specifically for children's use. In Hong Kong this creates an even greater problem, as most people live in very small apartments in high-rise buildings. Most of the past research literature focuses on children's computer use in the school environment, not in the home setting. The present pilot study aimed to examine ergonomic issues in children's use of computers at home in Hong Kong, which has some unique home environmental issues. Fifteen children (six male, nine female) aged 8-11 years and their parents were recruited by convenience sampling. Participants were asked to provide information on their computer use habits and related musculoskeletal symptoms. Participants were photographed sitting at the computer workstation in their usual postures, and joint angles were measured. The participants used computers frequently, for less than two hours daily, and the majority shared their workstations with other family members. Computer furniture was designed more for adult use, and a mismatch between furniture and body size was found. Ergonomic issues included inappropriate positioning of the display screen, keyboard, and mouse, as well as a lack of forearm support and a suitable backrest. These led to awkward or constrained postures, while some postural problems may be habitual. Three participants reported neck and shoulder discomfort in the past 12 months and four reported computer-related discomfort. Inappropriate computer workstation settings may have adverse effects on children's postures. More research on workstation setup at home, where children may use their computers the most, is needed.

  11. Thermal load at workstations in the underground coal mining: Results of research carried out in 6 coal mines

    Directory of Open Access Journals (Sweden)

    Krzysztof Słota

    2016-08-01

    Full Text Available Background: Statistics show that almost half of Polish extraction in underground mines takes place at workstations where the temperature exceeds 28°C. The number of employees working in such conditions is gradually increasing; therefore, the problem of safety and health protection continues to grow. Material and Methods: In the present study we assessed the heat load of employees at different workstations in the mining industry, taking into account current thermal conditions and the cost of work. The evaluation of the energy cost of work was carried out in 6 coal mines. A total of 221 miners employed at different workstations were assessed. Individual groups of miners were characterized, and the thermal safety of the miners was assessed relying on a thermal discomfort index. Results: The results of this study indicate considerable differences in the durations of the analyzed work processes at individual workstations. The highest average energy cost was noted during work performed at the mining face. The lowest value was found among the auxiliary staff. The calculated discomfort index clearly indicated numerous situations in which the thermal load exceeded the admissible range safe for human health. It should be noted that the values of average work cost fall within the upper, albeit admissible, limits of thermal load. Conclusions: The results of the study indicate that in some cases work in mining is performed in conditions of thermal discomfort. Due to the high variability and complexity of work conditions it becomes necessary to verify the workers' load at different workstations, which largely depends on the environmental conditions and work organization, as well as on the performance of the workers themselves. Med Pr 2016;67(4):477–498

  12. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
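Once minimal cut sets and component fault probabilities are in hand, a first-order (rare-event) estimate of system unreliability is simply the sum of cut-set probabilities. The fault tree and probabilities below are invented, and this arithmetic upper bound is shown purely to illustrate the quantity a statistical model checker would estimate more precisely.

```python
def system_unreliability(cut_sets, fault_prob):
    """Rare-event upper bound: sum, over the minimal cut sets, of the
    product of the component fault probabilities in each set (components
    are assumed to fail independently)."""
    total = 0.0
    for cut_set in cut_sets:
        p = 1.0
        for component in cut_set:
            p *= fault_prob[component]
        total += p
    return total

# Invented fault tree: the system fails if A fails, or both B and C fail.
q = {"A": 0.01, "B": 0.1, "C": 0.1}
p_fail = system_unreliability([{"A"}, {"B", "C"}], q)  # 0.01 + 0.01 = 0.02
```

For small fault probabilities the bound is tight; inclusion-exclusion or simulation tightens it when cut sets overlap significantly.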

  13. Applying human factors to the design of control centre and workstation of a nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Isaac J.A. Luquetti dos; Carvalho, Paulo V.R.; Goncalves, Gabriel de L., E-mail: luquetti@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Souza, Tamara D.M.F.; Falcao, Mariana A. [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil). Dept. de Desenho Industrial

    2013-07-01

    Human factors is a body of scientific knowledge about human characteristics, covering biomedical, psychological and psychosocial considerations, including principles and applications in the areas of personnel selection, training, job performance aids and human performance evaluation. A control centre is a combination of control rooms, control suites and local control stations which are functionally related and all on the same site. A digital control room includes an arrangement of systems and equipment, such as computers, communication terminals and workstations, at which control and monitoring functions are conducted by operators. Inadequate integration between the control room and operators reduces safety, increases operational complexity, complicates operator training and increases the likelihood of human error. The objective of this paper is to present a specific approach for the conceptual and basic design of the control centre and workstation of a nuclear reactor used to produce radioisotopes. The approach is based on human factors standards and guidelines and the participation of a multidisciplinary team in the conceptual and basic phases of the design. Using the information gathered from standards and from the multidisciplinary team, an initial 3D sketch of the control centre and workstation is being developed. (author)

  14. Applying human factors to the design of control centre and workstation of a nuclear reactor

    International Nuclear Information System (INIS)

    Santos, Isaac J.A. Luquetti dos; Carvalho, Paulo V.R.; Goncalves, Gabriel de L.; Souza, Tamara D.M.F.; Falcao, Mariana A.

    2013-01-01

    Human factors is a body of scientific facts about human characteristics, covering biomedical, psychological and psychosocial considerations, including principles and applications in the areas of personnel selection, training, job performance aids and human performance evaluation. A control centre is a combination of control rooms, control suites and local control stations which are functionally related and all on the same site. A digital control room includes an arrangement of systems and equipment, such as computers, communication terminals and workstations, at which control and monitoring functions are conducted by operators. Inadequate integration between the control room and operators reduces safety, increases operational complexity, complicates operator training and increases the likelihood of human error. The objective of this paper is to present a specific approach for the conceptual and basic design of the control centre and workstation of a nuclear reactor used to produce radioisotopes. The approach is based on human factors standards and guidelines and on the participation of a multidisciplinary team in the conceptual and basic phases of the design. Using the information gathered from the standards and from the multidisciplinary team, an initial 3D sketch of the control centre and workstation is being developed. (author)

  15. Montecarlo Simulations for a Lep Experiment with Unix Workstation Clusters

    Science.gov (United States)

    Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.

    Modular systems of RISC CPU based computers have been implemented for large productions of Montecarlo simulated events for the DELPHI experiment at CERN. From a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.

  16. Fast Calibration of Industrial Mobile Robots to Workstations using QR Codes

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard; Damgaard, Jens Skov; Madsen, Ole

    2013-01-01

    is proposed. With this QR calibration, it is possible to calibrate an AIMM to a workstation in 3D in less than 1 second, which is significantly faster than existing methods. The accuracy of the calibration is ±4 mm. The method is modular in the sense that it directly supports integration and calibration...

  17. Computed radiography and the workstation in a study of the cervical spine. Technical and cost implications

    International Nuclear Information System (INIS)

    Garcia, J. M.; Lopez-Galiacho, N.; Martinez, M.

    1999-01-01

    To demonstrate the advantages of computed radiography and the workstation in assessing the images acquired in a study of the cervical spine, lateral projections of the cervical spine obtained using a computed radiography system in 63 ambulatory patients were studied in a workstation. Images of the tip of the odontoid process, C1-C2, basion-opisthion and C7 were visualized prior to and after their transmission and processing, and the overall improvement in their diagnostic quality was assessed. The rate of detection of the tip of the odontoid process, C1-C2, the foramen magnum and C7 increased by 17, 6, 11 and 14 percentage points, respectively. Image processing improved the diagnostic quality in over 75% of cases. Image processing in a workstation improved the visualization of the anatomical points being studied and the diagnostic quality of the images. These advantages, as well as the possibility of transferring the images to a picture archiving and communication system (PACS), are convincing reasons for using digital radiography. (Author) 7 refs

  18. Reliability analysis of wind embedded power generation system for ...

    African Journals Online (AJOL)

    This paper presents a method for Reliability Analysis of wind energy embedded in power generation system for Indian scenario. This is done by evaluating the reliability index, loss of load expectation, for the power generation system with and without integration of wind energy sources in the overall electric power system.
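
    The reliability index used here, loss of load expectation (LOLE), can be illustrated with a toy capacity-outage enumeration. A minimal sketch follows; the unit sizes, forced outage rates and daily peak load are invented, not the paper's Indian-scenario data.

```python
from itertools import product

# Hedged sketch of a loss-of-load-expectation (LOLE) calculation for a
# small hypothetical generation system.

units = [(50.0, 0.02), (50.0, 0.02), (60.0, 0.04)]  # (capacity MW, forced outage rate)

def lole(units, daily_peak_loads):
    """Expected number of days per period on which load exceeds available capacity."""
    expectation = 0.0
    for states in product([0, 1], repeat=len(units)):  # 1 = unit on outage
        prob = 1.0
        capacity = 0.0
        for (cap, q), out in zip(units, states):
            prob *= q if out else (1.0 - q)
            capacity += 0.0 if out else cap
        # days in the period on which this outage state cannot carry the load
        expectation += prob * sum(1 for load in daily_peak_loads if load > capacity)
    return expectation

peaks = [100.0] * 365  # hypothetical constant daily peak
print(lole(units, peaks))  # ≈ 0.718 days/year
```

    Adding wind capacity as further (high outage rate) units would lower the index, which is the kind of with/without comparison the paper performs.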

  19. Reliability analysis for thermal cutting method based non-explosive separation device

    International Nuclear Information System (INIS)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu

    2016-01-01

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device consider failure of the power supply, failure of the Ni-Cr wire to burn through and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils.

  20. Reliability analysis for thermal cutting method based non-explosive separation device

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu [Korea Aerospace University, Goyang (Korea, Republic of)

    2016-12-15

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device consider failure of the power supply, failure of the Ni-Cr wire to burn through and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils.
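
    The quoted figure of 0.999989 with five coils suggests a redundancy effect. As a purely illustrative sketch (this is NOT the paper's actual reliability model, and the per-coil reliability below is invented), parallel redundancy of independent coils behaves as follows:

```python
# Hedged illustration of parallel redundancy, not the paper's model:
# if the device fires when at least one of n independent coils works,
# system reliability is 1 - (1 - r)**n for per-coil reliability r.

def parallel_reliability(r, n):
    return 1.0 - (1.0 - r) ** n

# hypothetical per-coil reliability
print(parallel_reliability(0.9, 5))  # 1 - 0.1**5 ≈ 0.99999
```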

  1. Graphical user interface for a robotic workstation in a surgical environment.

    Science.gov (United States)

    Bielski, A; Lohmann, C P; Maier, M; Zapp, D; Nasseri, M A

    2016-08-01

    Surgery using a robotic system has proven to have significant potential but is still a highly challenging task for the surgeon. An eye surgery assistant has been developed to eliminate the problem of tremor caused by human motions endangering the outcome of ophthalmic surgery. In order to exploit the full potential of the robot and improve the workflow of the surgeon, the ability to change control parameters live in the system, as well as the ability to connect additional ancillary systems, is necessary. Additionally, the surgeon should always be able to get an overview of the status of all systems with a quick glance. Therefore a workstation has been built. The contribution of this paper is the design and implementation of an intuitive graphical user interface for this workstation. The interface has been designed with feedback from surgeons and technical staff in order to ensure its usability in a surgical environment. Furthermore, the system was designed with the intent of supporting additional systems with minimal additional effort.

  2. Functionalizing 2PP-fabricated microtools for optical manipulation on the BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Matsuoka, Tomoyo; Nishi, Masayuki; Sakakura, Masaaki

    Functionalization of structures fabricated by two-photon polymerization was achieved by coating them with sol-gel materials containing calcium indicators. The structures are expected potentially to work as nano-sensors on the BioPhotonics Workstation....

  3. Development of an EVA systems cost model. Volume 2: Shuttle orbiter crew and equipment translation concepts and EVA workstation concept development and integration

    Science.gov (United States)

    1975-01-01

    EVA crewman/equipment translational concepts are developed for a shuttle orbiter payload application. Also considered are EVA workstation systems to meet orbiter and payload requirements for integration of workstations into candidate orbiter payload worksites.

  4. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  5. Comparison of methods for dependency determination between human failure events within human reliability analysis

    International Nuclear Information System (INIS)

    Cepis, M.

    2007-01-01

    The Human Reliability Analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with focus on dependency comparison between Institute Jozef Stefan - Human Reliability Analysis (IJS-HRA) and Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance. (author)

  6. Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2008-01-01

    The human reliability analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features, which may decrease subjectivity of human reliability analysis. Human reliability methods are compared with focus on dependency comparison between Institute Jozef Stefan human reliability analysis (IJS-HRA) and standardized plant analysis risk human reliability analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance

  7. Embedding knowledge in a workstation

    Energy Technology Data Exchange (ETDEWEB)

    Barber, G

    1982-01-01

    This paper describes an approach to supporting work in the office. Using and extending ideas from the field of artificial intelligence (AI) it describes office work as a problem solving activity. A knowledge embedding language called OMEGA is used to embed knowledge of the organization into an office worker's workstation in order to support the office worker in his or her problem solving. A particular approach to reasoning about change and contradiction is discussed. This approach uses OMEGA's viewpoint mechanism. OMEGA's viewpoint mechanism is a general contradiction handling facility. Unlike other knowledge representation systems, when a contradiction is reached the reasons for the contradiction can be analyzed by the reduction mechanism without having to resort to a backtracking mechanism. The viewpoint mechanism is the heart of the problem solving support paradigm. This paradigm is an alternative to the classical view of problem solving in AI. Office workers are supported using the problem solving support paradigm. 16 references.

  8. Evaluation plan for a cardiological multi-media workstation (I4C project)

    NARCIS (Netherlands)

    Hofstede, J.W. van der; Quak, A.B.; Ginneken, A.M. van; Macfarlane, P.W.; Watson, J.; Hendriks, P.R.; Zeelenberg, C.

    1997-01-01

    The goal of the I4C project (Integration and Communication for the Continuity of Cardiac Care) is to build a multi-media workstation for cardiac care and to assess its impact in the clinical setting. This paper describes the technical evaluation plan for the prototype.

  9. Validity and reliability of acoustic analysis of respiratory sounds in infants

    Science.gov (United States)

    Elphick, H; Lancaster, G; Solis, A; Majumdar, A; Gupta, R; Smyth, R

    2004-01-01

    Objective: To investigate the validity and reliability of computerised acoustic analysis in the detection of abnormal respiratory noises in infants. Methods: Blinded, prospective comparison of acoustic analysis with stethoscope examination. Validity and reliability of acoustic analysis were assessed by calculating the degree of observer agreement using the κ statistic with 95% confidence intervals (CI). Results: 102 infants under 18 months were recruited. Convergent validity for agreement between stethoscope examination and acoustic analysis was poor for wheeze (κ = 0.07 (95% CI, –0.13 to 0.26)) and rattles (κ = 0.11 (–0.05 to 0.27)) and fair for crackles (κ = 0.36 (0.18 to 0.54)). Both the stethoscope and acoustic analysis distinguished well between sounds (discriminant validity). Agreement between observers for the presence of wheeze was poor for both stethoscope examination and acoustic analysis. Agreement for rattles was moderate for the stethoscope but poor for acoustic analysis. Agreement for crackles was moderate using both techniques. Within-observer reliability for all sounds using acoustic analysis was moderate to good. Conclusions: The stethoscope is unreliable for assessing respiratory sounds in infants. This has important implications for its use as a diagnostic tool for lung disorders in infants, and confirms that it cannot be used as a gold standard. Because of the unreliability of the stethoscope, the validity of acoustic analysis could not be demonstrated, although it could discriminate between sounds well and showed good within-observer reliability. For acoustic analysis, targeted training and the development of computerised pattern recognition systems may improve reliability so that it can be used in clinical practice. PMID:15499065
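
    The agreement measure used throughout this study is the kappa statistic. A minimal sketch of Cohen's kappa for a 2x2 present/absent agreement table follows; the counts are invented, not the paper's data.

```python
# Hedged sketch of the kappa statistic for inter-observer agreement;
# the 2x2 counts below are hypothetical.

def cohens_kappa(a, b, c, d):
    """Kappa for a 2x2 agreement table:
    a = both observers say 'present', d = both say 'absent',
    b, c = the two kinds of disagreement."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # chance agreement from the marginal proportions
    p_yes = ((a + b) / n) * ((a + c) / n)
    p_no = ((c + d) / n) * ((b + d) / n)
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)

print(round(cohens_kappa(40, 10, 10, 40), 3))  # 0.6
```

    Values near 0, as reported for wheeze above, mean agreement is barely better than chance even when raw percent agreement looks respectable.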

  10. Human reliability analysis of performing tasks in plants based on fuzzy integral

    International Nuclear Information System (INIS)

    Washio, Takashi; Kitamura, Yutaka; Takahashi, Hideaki

    1991-01-01

    The effective improvement of human working conditions in nuclear power plants might be a solution for enhancing operational safety. Human reliability analysis (HRA) gives a methodological basis for such improvement, based on the evaluation of human reliability under various working conditions. This study investigates some difficulties of human reliability analysis using conventional linear models and recent fuzzy integral models, and provides some solutions to those difficulties. The following practical features of the proposed methods are confirmed in comparison with the conventional methods: (1) applicability to various types of tasks; (2) capability of evaluating complicated dependencies among working condition factors; (3) a priori human reliability evaluation based on a systematic task analysis of human action processes; (4) a scheme for conversion to probability from indices representing human reliability. (author)
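
    Fuzzy-integral models of the kind described typically aggregate working-condition factors with a discrete Choquet integral, which can express interactions that a linear weighted sum cannot. A minimal sketch follows; the factor names and capacity (fuzzy measure) values are invented for illustration and are not from the paper.

```python
# Hedged sketch of a discrete Choquet (fuzzy) integral; the capacity
# values below are hypothetical.

def choquet(scores, capacity):
    """scores: factor -> value in [0,1]; capacity: frozenset -> weight,
    with capacity of the empty set taken as 0."""
    items = sorted(scores, key=scores.get, reverse=True)
    total, prev_mu = 0.0, 0.0
    subset = set()
    for factor in items:  # walk factors in decreasing score order
        subset.add(factor)
        mu = capacity[frozenset(subset)]
        total += scores[factor] * (mu - prev_mu)
        prev_mu = mu
    return total

scores = {"training": 0.9, "stress": 0.4}
capacity = {frozenset({"training"}): 0.6,
            frozenset({"training", "stress"}): 1.0}
print(choquet(scores, capacity))  # 0.9*0.6 + 0.4*0.4 ≈ 0.7
```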

  11. Design considerations for a neuroradiologic picture archival and image processing workstation

    International Nuclear Information System (INIS)

    Fishbein, D.S.

    1986-01-01

    The design and implementation of a small scale image archival and processing workstation for use in the study of digitized neuroradiologic images is described. The system is designed to be easily interfaced to existing equipment (presently PET, NMR and CT), function independent of a central file server, and provide for a versatile image processing environment. (Auth.)

  12. Effect of immediate feedback training on observer performance on a digital radiology workstation

    International Nuclear Information System (INIS)

    Mc Neill, K.M.; Maloney, K.; Elam, E.A.; Hillman, B.J.; Witzke, D.B.

    1990-01-01

    This paper reports on testing the hypothesis that training radiologists on a digital workstation would affect their efficiency and subjective acceptance of radiologic interpretation based on images shown on a cathode ray tube (CRT). Using a digital radiology workstation, six faculty radiologists and four senior residents read seven groups of six images each. In each group, three images were ranked as easy and three were ranked as difficult. All images were abnormal posteroanterior chest radiographs. On display of each image, the observer was asked which findings were present. After the observer listed his or her findings, the experimenter listed any findings not mentioned and pointed out any incorrect findings. The time to finding was recorded for each image, along with the number of corrections and missed findings. A postexperiment questionnaire was given to obtain subjective responses from the observers

  13. Robotic, MEMS-based Multi Utility Sample Preparation Instrument for ISS Biological Workstation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project will develop a multi-functional, automated sample preparation instrument for biological wet-lab workstations on the ISS. The instrument is based on a...

  14. Reliability Analysis and Optimal Design of Monolithic Vertical Wall Breakwaters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Burcharth, Hans F.; Christiani, E.

    1994-01-01

    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of the most important failure modes, sliding failure, failure of the foundation and overturning failure are described. Relevant design variables are identified...

  15. Reliability importance analysis of Markovian systems at steady state using perturbation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Phuc Do Van [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France); Barros, Anne [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Berenguer, Christophe [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)

    2008-11-15

    Sensitivity analysis has been primarily defined for static systems, i.e. systems described by combinatorial reliability models (fault or event trees). Several structural and probabilistic measures have been proposed to assess the components importance. For dynamic systems including inter-component and functional dependencies (cold spare, shared load, shared resources, etc.), and described by Markov models or, more generally, by discrete events dynamic systems models, the problem of sensitivity analysis remains widely open. In this paper, the perturbation method is used to estimate an importance factor, called multi-directional sensitivity measure, in the framework of Markovian systems. Some numerical examples are introduced to show why this method offers a promising tool for steady-state sensitivity analysis of Markov processes in reliability studies.

  16. Reliability importance analysis of Markovian systems at steady state using perturbation analysis

    International Nuclear Information System (INIS)

    Phuc Do Van; Barros, Anne; Berenguer, Christophe

    2008-01-01

    Sensitivity analysis has been primarily defined for static systems, i.e. systems described by combinatorial reliability models (fault or event trees). Several structural and probabilistic measures have been proposed to assess the components importance. For dynamic systems including inter-component and functional dependencies (cold spare, shared load, shared resources, etc.), and described by Markov models or, more generally, by discrete events dynamic systems models, the problem of sensitivity analysis remains widely open. In this paper, the perturbation method is used to estimate an importance factor, called multi-directional sensitivity measure, in the framework of Markovian systems. Some numerical examples are introduced to show why this method offers a promising tool for steady-state sensitivity analysis of Markov processes in reliability studies
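
    The perturbation idea described here can be sketched on the simplest Markovian reliability model, a two-state (up/down) component: compute the steady state from the generator matrix, then perturb a rate and observe the change. The failure and repair rates below are illustrative, and the finite-difference scheme is a simplification of the multi-directional sensitivity measure in the paper.

```python
import numpy as np

# Hedged sketch of steady-state sensitivity by perturbation for a
# two-state (up/down) Markov model; rates are hypothetical.

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 (least squares on the stacked system)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def availability(lam, mu):
    """Steady-state probability of the 'up' state for failure rate lam,
    repair rate mu; analytically mu / (lam + mu)."""
    Q = np.array([[-lam, lam], [mu, -mu]])
    return steady_state(Q)[0]

lam, mu, eps = 1e-3, 1e-1, 1e-8
# finite-difference perturbation of availability w.r.t. the failure rate
sens = (availability(lam + eps, mu) - availability(lam, mu)) / eps
print(availability(lam, mu))  # ~0.990099, i.e. mu / (lam + mu)
print(sens)                   # ~ -mu / (lam + mu)**2
```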

  17. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. A software system improves during the testing phase, while it normally does not change in the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, a different measure should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some existing literature. Using a different reliability concept will lead to different reliability values being obtained, and this will further lead to different reliability-based decisions being made. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated

  18. Ergonomic Evaluations of Microgravity Workstations

    Science.gov (United States)

    Whitmore, Mihriban; Berman, Andrea H.; Byerly, Diane

    1996-01-01

    Various gloveboxes (GBXs) have been used aboard the Shuttle and ISS. Though the overall technical specifications are similar, each GBX's crew interface is unique. JSC conducted a series of ergonomic evaluations of the various glovebox designs to identify human factors requirements for new designs to provide operator commonality across different designs. We conducted two 0-g evaluations aboard the Shuttle to evaluate the material sciences GBX and the General Purpose Workstation (GPWS), and a KC-135 evaluation to compare combinations of arm hole interfaces and foot restraints (flexible arm holes were better than rigid ports for repetitive fine manipulation tasks). Posture analysis revealed that the smallest and tallest subjects assumed similar postures at all four configurations, suggesting that problematic postures are not necessarily a function of the operator's height but a function of the task characteristics. There was concern that the subjects were using the restrictive nature of the GBX's cuffs as an upper-body restraint to achieve such high forces, which might lead to neck/shoulder discomfort. EMG data revealed more consistent muscle performance at the GBX; the variability in the EMG profiles observed at the GPWS was attributed to the subjects' attempts to provide more stabilization for themselves in the loose, flexible gauntlets. Tests revealed that the GBX should be designed for a 95th-percentile American male to accommodate a neutral working posture. In addition, the foot restraint with knee support appeared beneficial for GBX operations. Crew comments were to provide two foot restraint mechanical modes, loose and lock-down, to accommodate a wide range of tasks without egressing the restraint system. Thus far, we have developed preliminary design guidelines for GBXs and foot restraints.

  19. Networking issues---Lan and Wan needs---The impact of workstations

    International Nuclear Information System (INIS)

    Harvey, J.

    1990-01-01

    This review focuses on the use of networks in the LEP experiments at CERN. The role of the extended LAN at CERN is discussed in some detail, with particular emphasis on the impact the sudden growth in the use of workstations is having. The problem of network congestion is highlighted and possible evolution to FDDI mentioned. The status and use of the wide area connections are also reported

  20. Beyond reliability, multi-state failure analysis of satellite subsystems: A statistical approach

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

    Reliability is widely recognized as a critical design attribute for space systems. In recent articles, we conducted nonparametric analyses and Weibull fits of satellite and satellite subsystems reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we extend our investigation of failures of satellites and satellite subsystems beyond the binary concept of reliability to the analysis of their anomalies and multi-state failures. In reliability analysis, the system or subsystem under study is considered to be either in an operational or failed state; multi-state failure analysis introduces 'degraded states' or partial failures, and thus provides more insights through finer resolution into the degradation behavior of an item and its progression towards complete failure. The database used for the statistical analysis in the present work identifies five states for each satellite subsystem: three degraded states, one fully operational state, and one failed state (complete failure). Because our dataset is right-censored, we calculate the nonparametric probability of transitioning between states for each satellite subsystem with the Kaplan-Meier estimator, and we derive confidence intervals for each probability of transitioning between states. We then conduct parametric Weibull fits of these probabilities using the maximum likelihood estimation (MLE) approach. After validating the results, we compare the reliability versus multi-state failure analyses of three satellite subsystems: the thruster/fuel; the telemetry, tracking, and control (TTC); and the gyro/sensor/reaction wheel subsystems. The results are particularly revealing of the insights that can be gleaned from multi-state failure analysis and the deficiencies, or blind spots, of the traditional reliability analysis. In addition to the specific results provided here, which should prove particularly useful to the space industry, this work highlights the importance
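
    The Kaplan-Meier estimator used above handles right-censored lifetimes by shrinking the survival curve only at observed failure times. A minimal sketch follows; the lifetime sample is invented, not the satellite database.

```python
# Hedged sketch of the Kaplan-Meier estimator for right-censored data;
# the sample below is hypothetical. Assumes distinct event times.

def kaplan_meier(samples):
    """samples: list of (time, event) with event=True for failure,
    False for censoring. Returns [(time, survival)] at failure times."""
    at_risk = len(samples)
    survival = 1.0
    curve = []
    for time, event in sorted(samples):
        if event:
            # one failure among `at_risk` subjects still under observation
            survival *= (at_risk - 1) / at_risk
            curve.append((time, survival))
        at_risk -= 1  # censored subjects simply leave the risk set
    return curve

data = [(2.0, True), (3.0, False), (4.0, True), (6.0, False), (7.0, True)]
print(kaplan_meier(data))
```

    Note how the censored observations at t=3 and t=6 do not drop the curve but still reduce the risk set, so later failures carry more weight.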

  1. Using a Hybrid Cost-FMEA Analysis for Wind Turbine Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Nacef Tazi

    2017-02-01

    Failure mode and effects analysis (FMEA) has been proven to be an effective methodology to improve system design reliability. However, the standard approach reveals some weaknesses when applied to wind turbine systems. The conventional criticality assessment method has been criticized as having many limitations, such as the weighting of severity and detection factors. In this paper, we aim to overcome these drawbacks and develop a hybrid cost-FMEA by integrating cost factors to assess criticality; these costs vary from replacement costs to expected failure costs. Then, a quantitative comparative study is carried out to point out average failure rates, main causes of failure, expected failure costs and failure detection techniques. A special reliability analysis of the gearbox and rotor blades is presented.
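
    A cost-weighted criticality of this kind can be sketched as expected annual failure cost per failure mode. The components, failure rates and costs below are invented placeholders, not the paper's wind-turbine data.

```python
# Hedged sketch of a cost-weighted FMEA criticality ranking in the
# spirit of a hybrid cost-FMEA; all numbers are hypothetical.

# failure rate (failures/year), expected cost per failure (replacement + downtime)
modes = {
    "gearbox":      (0.10, 230_000.0),
    "rotor blades": (0.17, 150_000.0),
    "generator":    (0.12,  60_000.0),
}

def cost_criticality(modes):
    """Rank failure modes by expected annual failure cost = rate * cost."""
    ranked = sorted(((rate * cost, name) for name, (rate, cost) in modes.items()),
                    reverse=True)
    return [(name, crit) for crit, name in ranked]

for name, crit in cost_criticality(modes):
    print(f"{name}: {crit:,.0f} per year")
```

    Replacing the subjective severity/detection scores of a classic RPN with monetary expectations is the design choice the paper argues for.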

  2. Reliability analysis of the automatic control and power supply of reactor equipment

    International Nuclear Information System (INIS)

    Monori, Pal; Nagy, J.A.; Meszaros, Zoltan; Konkoly, Laszlo; Szabo, Antal; Nagy, Laszlo

    1988-01-01

    Based on reliability analysis, the shortcomings of nuclear facilities are discovered. Fault trees constructed for the automatic control technology and for the power supply serve as input data to the ORCHARD 2 computer code. In order to characterize the reliability of the system, availability, failure rates and time intervals between failures are calculated. The results of the reliability analysis of the feedwater system of the Paks Nuclear Power Plant showed that the system consists of elements of similar reliabilities. (V.N.) 8 figs.; 3 tabs

  3. Probabilistic safety analysis and human reliability analysis. Proceedings. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    An international meeting on Probabilistic Safety Assessment (PSA) and Human Reliability Analysis (HRA) was jointly organized by Electricite de France - Research and Development (EDF DER) and SRI International in co-ordination with the International Atomic Energy Agency. The meeting was held in Paris 21-23 November 1994. A group of international and French specialists in PSA and HRA participated at the meeting and discussed the state of the art and current trends in the following six topics: PSA Methodology; PSA Applications; From PSA to Dependability; Incident Analysis; Safety Indicators; Human Reliability. For each topic a background paper was prepared by EDF/DER and reviewed by the international group of specialists who attended the meeting. The results of this meeting provide a comprehensive overview of the most important questions related to the readiness of PSA for specific uses and areas where further research and development is required. Refs, figs, tabs

  4. Probabilistic safety analysis and human reliability analysis. Proceedings. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    An international meeting on Probabilistic Safety Assessment (PSA) and Human Reliability Analysis (HRA) was jointly organized by Electricite de France - Research and Development (EDF DER) and SRI International in co-ordination with the International Atomic Energy Agency. The meeting was held in Paris 21-23 November 1994. A group of international and French specialists in PSA and HRA participated at the meeting and discussed the state of the art and current trends in the following six topics: PSA Methodology; PSA Applications; From PSA to Dependability; Incident Analysis; Safety Indicators; Human Reliability. For each topic a background paper was prepared by EDF/DER and reviewed by the international group of specialists who attended the meeting. The results of this meeting provide a comprehensive overview of the most important questions related to the readiness of PSA for specific uses and areas where further research and development is required. Refs, figs, tabs.

  5. Structural reliability analysis applied to pipeline risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, M. [GL Industrial Services, Loughborough (United Kingdom); Mendes, Renato F.; Donato, Guilherme V.P. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Quantitative Risk Assessment (QRA) of pipelines requires two main components to be provided. These are models of the consequences that follow from some loss of containment incident, and models for the likelihood of such incidents occurring. This paper describes how PETROBRAS have used Structural Reliability Analysis for the second of these, to provide pipeline- and location-specific predictions of failure frequency for a number of pipeline assets. This paper presents an approach to estimating failure rates for liquid and gas pipelines, using Structural Reliability Analysis (SRA) to analyze the credible basic mechanisms of failure such as corrosion and mechanical damage. SRA is a probabilistic limit state method: for a given failure mechanism it quantifies the uncertainty in parameters to mathematical models of the load-resistance state of a structure and then evaluates the probability of load exceeding resistance. SRA can be used to benefit the pipeline risk management process by optimizing in-line inspection schedules, and as part of the design process for new construction in pipeline rights of way that already contain multiple lines. A case study is presented to show how the SRA approach has recently been used on PETROBRAS pipelines and the benefits obtained from it. (author)
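The SRA calculation described above, quantifying uncertainty in load and resistance and then evaluating the probability of load exceeding resistance, can be sketched with a simple Monte Carlo loop. All distributions and numbers below are illustrative assumptions, not values from the PETROBRAS study:

```python
import random

random.seed(42)

def failure_probability(n_samples=200_000):
    """Estimate P(load > resistance) by Monte Carlo sampling.

    The load and resistance distributions are illustrative assumptions,
    not values from the pipeline study.
    """
    failures = 0
    for _ in range(n_samples):
        load = random.gauss(100.0, 15.0)        # e.g. hoop stress, MPa
        resistance = random.gauss(160.0, 20.0)  # e.g. remaining strength, MPa
        if load > resistance:
            failures += 1
    return failures / n_samples

pf = failure_probability()
```

For these normal distributions the margin is N(60, 25), so the analytic answer is Phi(-2.4), roughly 8e-3, which the sampled estimate approaches as n_samples grows.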

  6. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Directory of Open Access Journals (Sweden)

    Jin Zhu

    2012-01-01

    Full Text Available This paper investigates reliability analysis of wireless sensor networks whose topology is switching among possible connections which are governed by a Markovian chain. We give the quantized relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying Lyapunov method, sufficient conditions of network reliability are proposed for such topology switching networks with constant or varying data acquisition rate. With the conditions satisfied, the quantity of data transported over wireless network node will not exceed node capacity such that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find its application in the fields of network design and topology control.
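A minimal simulation of the topology-switching idea: a node's buffered data evolves under a constant acquisition rate while its forwarding capacity switches with a Markov chain. The transition matrix, rates, and capacity are assumed for illustration, and the paper's Lyapunov conditions are replaced here by direct simulation:

```python
import random

random.seed(1)

# Transition matrix of the topology-switching Markov chain (assumed values):
# state 0 = full connectivity, state 1 = degraded connectivity.
P = [[0.9, 0.1],
     [0.3, 0.7]]
service_rate = {0: 5.0, 1: 2.0}   # data units the node can forward per step
acquisition_rate = 3.0            # constant data acquisition rate
capacity = 50.0                   # node buffer capacity

def simulate(steps=10_000):
    """Return True if the buffered data never exceeds node capacity."""
    state, buffered = 0, 0.0
    for _ in range(steps):
        buffered = max(0.0, buffered + acquisition_rate - service_rate[state])
        if buffered > capacity:
            return False
        state = 0 if random.random() < P[state][0] else 1
    return True

reliable = simulate()
```

With the stationary distribution (0.75, 0.25) the mean service rate is 4.25, above the acquisition rate of 3.0, so the buffer stays bounded and the network is reliable in the sense used above.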

  7. Analysis of sodium valve reliability data at CREDO

    International Nuclear Information System (INIS)

    Bott, T.F.; Haas, P.M.

    1979-01-01

The Centralized Reliability Data Organization (CREDO) has been established at Oak Ridge National Laboratory (ORNL) by the Department of Energy to provide a centralized source of data for reliability/maintainability analysis of advanced reactor systems. The current schedule calls for development of the data system at a moderate pace, with the first major distribution of data in late FY-1980. Continuous long-term collection of engineering, operating, and event data has been initiated at EBR-II and FFTF

  8. Interrater reliability of videotaped observational gait-analysis assessments.

    Science.gov (United States)

    Eastlack, M E; Arvidson, J; Snyder-Mackler, L; Danoff, J V; McGarvey, C L

    1991-06-01

    The purpose of this study was to determine the interrater reliability of videotaped observational gait-analysis (VOGA) assessments. Fifty-four licensed physical therapists with varying amounts of clinical experience served as raters. Three patients with rheumatoid arthritis who demonstrated an abnormal gait pattern served as subjects for the videotape. The raters analyzed each patient's most severely involved knee during the four subphases of stance for the kinematic variables of knee flexion and genu valgum. Raters were asked to determine whether these variables were inadequate, normal, or excessive. The temporospatial variables analyzed throughout the entire gait cycle were cadence, step length, stride length, stance time, and step width. Generalized kappa coefficients ranged from .11 to .52. Intraclass correlation coefficients (2,1) and (3,1) were slightly higher. Our results indicate that physical therapists' VOGA assessments are only slightly to moderately reliable and that improved interrater reliability of the assessments of physical therapists utilizing this technique is needed. Our data suggest that there is a need for greater standardization of gait-analysis training.
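Agreement statistics of the kind reported above (kappa coefficients) are straightforward to compute. As a simplified illustration, here is Cohen's kappa for two raters; the study itself used generalized kappa across 54 raters, and the ratings below are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of knee flexion (inadequate/normal/excessive)
# across stance subphases for several patients.
a = ["normal", "excessive", "inadequate", "normal", "normal", "excessive"]
b = ["normal", "excessive", "normal", "normal", "inadequate", "excessive"]
kappa = cohens_kappa(a, b)
```

Here observed agreement is 4/6 and chance agreement 14/36, giving kappa = 5/11, about 0.45, i.e. the "moderate" range the study describes.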

  9. The relationship between cost estimates reliability and BIM adoption: SEM analysis

    Science.gov (United States)

    Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.

    2018-02-01

This paper presents the usage of the Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption in improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through the assessment of their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259; indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also perceive that BIM adoption will accelerate their cost estimating tasks.

  10. Temporal digital subtraction radiography with a personal computer digital workstation

    International Nuclear Information System (INIS)

    Kircos, L.; Holt, W.; Khademi, J.

    1990-01-01

Techniques have been developed and implemented on a personal computer (PC)-based digital workstation to accomplish temporal digital subtraction radiography (TDSR). TDSR is useful in recording radiologic change over time. Thus, this technique is useful not only for monitoring chronic disease processes but also for monitoring the temporal course of interventional therapies. A PC-based digital workstation was developed on a PC386 platform with add-in hardware and software. Image acquisition, storage, and processing were accomplished using a 512 x 512 x 8- or 12-bit frame grabber. Software and hardware were developed to accomplish image orientation, registration, gray scale compensation, subtraction, and enhancement. Temporal radiographs of the jaws were made in a fixed and reproducible orientation between the x-ray source and image receptor, enabling TDSR. Temporal changes secondary to chronic periodontal disease, osseointegration of endosseous implants, and wound healing were demonstrated. Use of TDSR for chest imaging was also demonstrated, with identification of small, subtle focal masses that were not apparent with routine viewing. The large amount of radiologic information in images of the jaws and chest may obfuscate subtle changes that TDSR seems to identify. TDSR appears to be useful as a tool to record temporal and subtle changes in radiologic images

  11. Biomek Cell Workstation: A Variable System for Automated Cell Cultivation.

    Science.gov (United States)

    Lehmann, R; Severitt, J C; Roddelkopf, T; Junginger, S; Thurow, K

    2016-06-01

    Automated cell cultivation is an important tool for simplifying routine laboratory work. Automated methods are independent of skill levels and daily constitution of laboratory staff in combination with a constant quality and performance of the methods. The Biomek Cell Workstation was configured as a flexible and compatible system. The modified Biomek Cell Workstation enables the cultivation of adherent and suspension cells. Until now, no commercially available systems enabled the automated handling of both types of cells in one system. In particular, the automated cultivation of suspension cells in this form has not been published. The cell counts and viabilities were nonsignificantly decreased for cells cultivated in AutoFlasks in automated handling. The proliferation of manual and automated bioscreening by the WST-1 assay showed a nonsignificant lower proliferation of automatically disseminated cells associated with a mostly lower standard error. The disseminated suspension cell lines showed different pronounced proliferations in descending order, starting with Jurkat cells followed by SEM, Molt4, and RS4 cells having the lowest proliferation. In this respect, we successfully disseminated and screened suspension cells in an automated way. The automated cultivation and dissemination of a variety of suspension cells can replace the manual method. © 2015 Society for Laboratory Automation and Screening.

  12. Reliability analysis of self-actuated shutdown system

    International Nuclear Information System (INIS)

    Itooka, S.; Kumasaka, K.; Okabe, A.; Satoh, K.; Tsukui, Y.

    1991-01-01

An analytical study was performed for the reliability of a self-actuated shutdown system (SASS) under the unprotected loss of flow (ULOF) event in a typical loop-type liquid metal fast breeder reactor (LMFBR) by the use of the response surface Monte Carlo analysis method. Dominant parameters for the SASS, such as Curie point characteristics, subassembly outlet coolant temperature, electromagnetic surface condition, etc., were selected and their probability density functions (PDFs) were determined by the design study information and experimental data. To get the response surface function (RSF) for the maximum coolant temperature, transient analyses of ULOF were performed by utilizing the experimental design method in the determination of analytical cases. Then, the RSF was derived by the multi-variable regression analysis. The unreliability of the SASS was evaluated as a probability that the maximum coolant temperature exceeded an acceptable level, employing the Monte Carlo calculation using the above PDFs and RSF. In this study, sensitivities to the dominant parameters were compared. The dispersion of subassembly outlet coolant temperature near the SASS was found to be one of the most sensitive parameters. Fault tree analysis was performed using this value for the SASS in order to evaluate the shutdown system reliability. As a result of this study, the effectiveness of the SASS on the reliability improvement in the LMFBR shutdown system was analytically confirmed. This study has been performed as a part of joint research and development projects for DFBR under the sponsorship of the nine Japanese electric power companies, Electric Power Development Company and the Japan Atomic Power Company. (author)
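The final step, Monte Carlo evaluation of unreliability from a response surface function (RSF) and parameter PDFs, can be sketched as below. The RSF coefficients, distributions, and temperature limit are illustrative assumptions, not values from the SASS study:

```python
import random

random.seed(7)

# Assumed response surface function for peak coolant temperature (deg C),
# standing in for one derived by regression on transient analyses.
def rsf(curie_point, outlet_temp):
    return 400.0 + 0.8 * (curie_point - 630.0) + 1.2 * (outlet_temp - 550.0)

LIMIT = 440.0  # assumed acceptable peak coolant temperature

def unreliability(n=100_000):
    """P(peak temperature exceeds the limit) under the parameter PDFs."""
    exceed = 0
    for _ in range(n):
        curie = random.gauss(630.0, 10.0)   # Curie point characteristic
        outlet = random.gauss(560.0, 20.0)  # subassembly outlet coolant temp
        if rsf(curie, outlet) > LIMIT:
            exceed += 1
    return exceed / n

q = unreliability()
```

Because the RSF is linear here, the peak temperature is N(412, 25.3) and the analytic exceedance probability is about 0.13; a nonlinear RSF would make the Monte Carlo step genuinely necessary.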

  13. Reliability analysis framework for computer-assisted medical decision systems

    International Nuclear Information System (INIS)

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2007-01-01

    We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using the Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. 
The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional
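The neighborhood-based reliability measure can be sketched as a k-nearest-neighbor accuracy estimate over the known cases. The feature values and the CAD hit/miss record below are hypothetical:

```python
def local_reliability(query, known_cases, k=3):
    """Estimate case-specific CAD reliability as the accuracy of the k known
    cases nearest to the query in feature space.

    known_cases: list of (features, cad_correct) pairs, cad_correct True/False.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    nearest = sorted(known_cases, key=lambda c: dist(c[0], query))[:k]
    return sum(correct for _, correct in nearest) / k

# Hypothetical 2-D morphological features with the CAD's known hit/miss record.
cases = [((0.10, 0.20), True), ((0.20, 0.10), True), ((0.15, 0.25), True),
         ((0.90, 0.80), False), ((0.85, 0.90), False), ((0.80, 0.95), True)]

high = local_reliability((0.12, 0.18), cases)  # query in a well-handled region
low = local_reliability((0.88, 0.88), cases)   # query in a poorly handled region
```

A query falling in a region where the CAD has historically been accurate receives a high reliability value; a query in a poorly handled region receives a low one, which is exactly the alerting behavior proposed above.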

  14. Development of a data acquisition system using a RISC/UNIXTM workstation

    International Nuclear Information System (INIS)

    Takeuchi, Y.; Tanimori, T.; Yasu, Y.

    1993-01-01

    We have developed a compact data acquisition system on RISC/UNIX workstations. A SUN TM SPARCstation TM IPC was used, in which an extension bus 'SBus TM ' was linked to a VMEbus. The transfer rate achieved was better than 7 Mbyte/s between the VMEbus and the SUN. A device driver for CAMAC was developed in order to realize an interruptive feature in UNIX. In addition, list processing has been incorporated in order to keep the high priority of the data handling process in UNIX. The successful developments of both device driver and list processing have made it possible to realize the good real-time feature on the RISC/UNIX system. Based on this architecture, a portable and versatile data taking system has been developed, which consists of a graphical user interface, I/O handler, user analysis process, process manager and a CAMAC device driver. (orig.)

  15. A taxonomy for human reliability analysis

    International Nuclear Information System (INIS)

    Beattie, J.D.; Iwasa-Madge, K.M.

    1984-01-01

    A human interaction taxonomy (classification scheme) was developed to facilitate human reliability analysis in a probabilistic safety evaluation of a nuclear power plant, being performed at Ontario Hydro. A human interaction occurs, by definition, when operators or maintainers manipulate, or respond to indication from, a plant component or system. The taxonomy aids the fault tree analyst by acting as a heuristic device. It helps define the range and type of human errors to be identified in the construction of fault trees, while keeping the identification by different analysts consistent. It decreases the workload associated with preliminary quantification of the large number of identified interactions by including a category called 'simple interactions'. Fault tree analysts quantify these according to a procedure developed by a team of human reliability specialists. The interactions which do not fit into this category are called 'complex' and are quantified by the human reliability team. The taxonomy is currently being used in fault tree construction in a probabilistic safety evaluation. As far as can be determined at this early stage, the potential benefits of consistency and completeness in identifying human interactions and streamlining the initial quantification are being realized

  16. A survey on reliability and safety analysis techniques of robot systems in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eom, H S; Kim, J H; Lee, J C; Choi, Y R; Moon, S S

    2000-12-01

The reliability and safety analysis techniques were surveyed for the purpose of overall quality improvement of the reactor inspection system which is under development in our current project. The contents of this report are: 1. Reliability and safety analysis techniques survey - the reviewed reliability and safety analysis techniques are generally accepted techniques in many industries, including the nuclear industry, and we selected a few techniques which are suitable for our robot system: fault tree analysis, failure mode and effect analysis, reliability block diagram, Markov model, combinational method, and simulation method. 2. Survey on the characteristics of robot systems which distinguish them from other systems and which are important to the analysis. 3. Survey on the nuclear environmental factors which affect the reliability and safety analysis of robot systems. 4. Collection of case studies of robot reliability and safety analysis performed in foreign countries. The analysis results of this survey will be applied to the improvement of the reliability and safety of our robot system and will also be used for the formal qualification and certification of our reactor inspection system.

  17. A survey on reliability and safety analysis techniques of robot systems in nuclear power plants

    International Nuclear Information System (INIS)

    Eom, H.S.; Kim, J.H.; Lee, J.C.; Choi, Y.R.; Moon, S.S.

    2000-12-01

The reliability and safety analysis techniques were surveyed for the purpose of overall quality improvement of the reactor inspection system which is under development in our current project. The contents of this report are: 1. Reliability and safety analysis techniques survey - the reviewed reliability and safety analysis techniques are generally accepted techniques in many industries, including the nuclear industry, and we selected a few techniques which are suitable for our robot system: fault tree analysis, failure mode and effect analysis, reliability block diagram, Markov model, combinational method, and simulation method. 2. Survey on the characteristics of robot systems which distinguish them from other systems and which are important to the analysis. 3. Survey on the nuclear environmental factors which affect the reliability and safety analysis of robot systems. 4. Collection of case studies of robot reliability and safety analysis performed in foreign countries. The analysis results of this survey will be applied to the improvement of the reliability and safety of our robot system and will also be used for the formal qualification and certification of our reactor inspection system

  18. Reliability analysis of service water system under earthquake

    International Nuclear Information System (INIS)

    Yu Yu; Qian Xiaoming; Lu Xuefeng; Wang Shengfei; Niu Fenglei

    2013-01-01

Service water system is one of the important safety systems in a nuclear power plant, whose failure probability is usually obtained by system reliability analysis. The probability of equipment failure under an earthquake is a function of the peak acceleration of the earthquake motion, while the occurrence of an earthquake is random; thus the traditional fault tree method in current probabilistic safety assessment is not powerful enough to deal with such a conditional probability problem. An analysis framework for system reliability evaluation under seismic conditions is put forward in this paper, in which Monte Carlo simulation is used to deal with the conditional probability problem. The annual failure probability of the service water system was calculated, and a failure probability of 1.46x10^-4 per year was obtained. The analysis result is in accordance with the data on equipment seismic resistance capability, and the rationality of the model is validated. (authors)
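The conditional probability structure described above (random earthquake occurrence, equipment failure probability depending on peak ground acceleration) lends itself to Monte Carlo simulation. All rates and fragility parameters below are assumed for illustration:

```python
import math
import random

random.seed(3)

def annual_failure_probability(n=200_000):
    """Annual failure probability of a system whose component fragility
    depends on peak ground acceleration (PGA); all numbers are assumed.
    """
    eq_rate = 0.01  # annual probability of a damaging earthquake at the site
    failures = 0
    for _ in range(n):  # each sample is one simulated year
        if random.random() >= eq_rate:
            continue  # no earthquake this year
        pga = random.lognormvariate(math.log(0.2), 0.5)  # sampled PGA, in g
        # Lognormal fragility: median capacity 0.6 g, log-std 0.4;
        # Phi(x) is written via erfc for a stdlib-only normal CDF.
        p_fail = 0.5 * math.erfc(-math.log(pga / 0.6) / (0.4 * math.sqrt(2)))
        if random.random() < p_fail:
            failures += 1
    return failures / n

p_annual = annual_failure_probability()
```

With these assumptions the conditional failure probability given an earthquake is a few percent, giving an annual failure probability on the order of 10^-4, the same order as the value reported in the abstract.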

  19. Small nuclear power reactor emergency electric power supply system reliability comparative analysis

    International Nuclear Information System (INIS)

    Bonfietti, Gerson

    2003-01-01

This work presents an analysis of the reliability of the emergency power supply system of a small size nuclear power reactor. Three different configurations are investigated and their reliability analyzed. The fault tree method is used as the main tool of analysis. The work includes a bibliographic review of emergency diesel generator reliability and a discussion of the design requirements applicable to emergency electrical systems. The influence of common cause failures is considered using the beta factor model. Operator action is considered using human failure probabilities. A parametric analysis shows the strong dependence between reactor safety and the loss of the offsite electric power supply. It is also shown that common cause failures can be a major contributor to the system reliability. (author)
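The beta factor model itself is simple: a fraction beta of each train's failure probability is treated as common cause and defeats all redundant trains at once, while the rest fails trains independently. A sketch with illustrative numbers (not from this study):

```python
def redundant_train_unavailability(q_total, beta, n_trains=2):
    """Unavailability of n redundant diesel-generator trains under the
    beta-factor common cause model (illustrative numbers only).

    q_total: total failure probability of one train.
    beta: fraction of that probability attributed to common cause.
    """
    q_ccf = beta * q_total           # one event fails all trains at once
    q_indep = (1 - beta) * q_total   # independent part, per train
    return q_ccf + q_indep ** n_trains

q = redundant_train_unavailability(q_total=0.05, beta=0.1)
```

With q_total=0.05 and beta=0.1, the common cause term (0.005) dominates the independent term (0.045^2, about 0.002), which illustrates why common cause failures can dominate the reliability of redundant systems.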

  20. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

In order to rectify the problems that the component reliability model exhibits deviation, and that the evaluation result is low due to overlooking failure propagation in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component presents a positive correlation with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
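The adjacency-matrix-plus-PageRank assessment can be sketched in a few lines. The cascading graph below is hypothetical; high rank here marks the components most fed by cascades, and running the same function on the transposed matrix ranks the components that influence others instead:

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank over a cascading-failure digraph.

    adj[i][j] = 1 if a failure of component i propagates to component j.
    A stand-in sketch for the paper's failure influenced degree assessment.
    """
    n = len(adj)
    rank = [1.0 / n] * n
    out_deg = [sum(row) for row in adj]
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for i in range(n):
            if out_deg[i] == 0:  # dangling node: spread its rank evenly
                for j in range(n):
                    new[j] += damping * rank[i] / n
            else:
                for j in range(n):
                    if adj[i][j]:
                        new[j] += damping * rank[i] / out_deg[i]
        rank = new
    return rank

# Hypothetical 4-component machine centre: failures of component 0
# cascade widely, and everything eventually feeds component 3.
adj = [[0, 1, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
ranks = pagerank(adj)
```

Component 3, which receives cascades from the most paths, ends up with the highest rank, while component 0, which receives none, ends up with the lowest.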

  1. Optimizing the design and operation of reactor emergency systems using reliability analysis techniques

    International Nuclear Information System (INIS)

    Snaith, E.R.

    1975-01-01

    Following a reactor trip various reactor emergency systems, e.g. essential power supplies, emergency core cooling and boiler feed water arrangements are required to operate with a high degree of reliability. These systems must therefore be critically assessed to confirm their capability of operation and determine their reliability of performance. The use of probability analysis techniques enables the potential operating reliability of the systems to be calculated and this can then be compared with the overall reliability requirements. However, a system reliability analysis does much more than calculate an overall reliability value for the system. It establishes the reliability of all parts of the system and thus identifies the most sensitive areas of unreliability. This indicates the areas where any required improvements should be made and enables the overall systems' designs and modes of operation to be optimized, to meet the system and hence the overall reactor safety criteria. This paper gives specific examples of sensitive areas of unreliability that were identified as a result of a reliability analysis that was carried out on a reactor emergency core cooling system. Details are given of modifications to design and operation that were implemented with a resulting improvement in reliability of various reactor sub-systems. The report concludes that an initial calculation of system reliability should represent only the beginning of continuing process of system assessment. Data on equipment and system performance, particularly in those areas shown to be sensitive in their effect on the overall nuclear power plant reliability, should be collected and processed to give reliability data. These data should then be applied in further probabilistic analyses and the results correlated with the original analysis. 
This will demonstrate whether the required and the originally predicted system reliability is likely to be achieved, in the light of the actual history to date of

  2. Structural reliability analysis and seismic risk assessment

    International Nuclear Information System (INIS)

    Hwang, H.; Reich, M.; Shinozuka, M.

    1984-01-01

This paper presents a reliability analysis method for the safety evaluation of nuclear structures. By utilizing this method, it is possible to estimate the limit state probability in the lifetime of structures and to generate analytically the fragility curves for PRA studies. The earthquake ground acceleration, in this approach, is represented by a segment of stationary Gaussian process with a zero mean and a Kanai-Tajimi Spectrum. All possible seismic hazards at a site, represented by a hazard curve, are also taken into consideration. Furthermore, the limit state of a structure is analytically defined and the corresponding limit state surface is then established. Finally, the fragility curve is generated and the limit state probability is evaluated. In this paper, using a realistic reinforced concrete containment as an example, results of the reliability analysis of the containment subjected to dead load, live load and ground earthquake acceleration are presented and a fragility curve for PRA studies is also constructed
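Fragility curves of the kind generated for PRA studies are commonly parameterized as lognormal in peak ground acceleration. A sketch with an assumed median capacity and log-standard deviation (not values from this paper):

```python
import math

def fragility(pga, median=0.55, beta=0.35):
    """Lognormal fragility curve: P(limit state exceeded | PGA in g).

    Phi(x) is written via erfc so only the stdlib is needed;
    median and beta are assumed illustrative parameters.
    """
    return 0.5 * math.erfc(-math.log(pga / median) / (beta * math.sqrt(2)))

# Tabulate the curve at PGA = 0.1 g .. 1.0 g.
curve = [(a / 10.0, fragility(a / 10.0)) for a in range(1, 11)]
```

By construction the curve passes through 0.5 at the median capacity and rises monotonically from near 0 at low PGA toward 1 at high PGA.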

  3. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  4. Changing of the ELAN data acquisition to an integrated system with VME frontend acquisition and VAX work station analysis; Umruestung der ELAN-Datenerfassung auf ein integriertes System mit VME-Frontend-Erfassung und VAX-Workstation-Analyse

    Energy Technology Data Exchange (ETDEWEB)

    Foerster, W.

    1991-07-01

A new data acquisition system for the experiment ELAN at the electron stretcher accelerator ELSA had become necessary due to changes in the experimental setup. The data acquisition and analysis, which formerly were both performed by a single computer system, are now handled separately by a VMEbus computer and a VAX workstation. Based on the software components MECDAS (Mainz Experiment Control and Data Acquisition System) and GOOSY (GSI Online Offline System), a powerful tool for data acquisition and analysis has been adapted to the requirements of the ELAN experiment. The aim of this work was to provide a new data acquisition system for the ELAN experiment at the Physics Institute of the University of Bonn. It builds on the MECDAS system developed at the University of Mainz and the GOOSY software package from the Gesellschaft fuer Schwerionenforschung in Darmstadt. The data coming from the experiment are read out from a CAMAC system by MECDAS on a VME computer. New readout algorithms were embedded in MECDAS, which partly arose from experimental necessities and which also enable full 24-bit access to the CAMAC bus. The buffered data are forwarded to a VAX workstation, where they are stored and analyzed by GOOSY processes. (orig.)

  5. Reliability Analysis Study of Digital Reactor Protection System in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Guo, Xiao Ming; Liu, Tao; Tong, Jie Juan; Zhao, Jun

    2011-01-01

Digital I and C systems are generally believed to improve a plant's safety and reliability. The reliability analysis of digital I and C systems has become a research hotspot. The traditional fault tree method is one means of quantifying digital I and C system reliability. A review of the digital protection system evaluation for the advanced nuclear power plant AP1000 makes clear both the fault tree application and the analysis process for digital system reliability. A typical digital protection system for an advanced reactor has been developed, whose reliability evaluation is necessary for design demonstration. The construction of this typical digital protection system is introduced in the paper, and the process of applying FMEA and fault tree analysis to its reliability evaluation is described. Reliability data and bypass logic modeling are two points given special attention in the paper. Because time-sequence and feedback factors are not obviously present in the reactor protection system, the dynamic features of the digital system are not discussed

  6. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.

  7. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

    Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well-known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.

  8. The development of a Flight Test Engineer's Workstation for the Automated Flight Test Management System

    Science.gov (United States)

    Tartt, David M.; Hewett, Marle D.; Duke, Eugene L.; Cooper, James A.; Brumbaugh, Randal W.

    1989-01-01

    The Automated Flight Test Management System (ATMS) is being developed as part of the NASA Aircraft Automation Program. This program focuses on the application of interdisciplinary state-of-the-art technology in artificial intelligence, control theory, and systems methodology to problems of operating and flight testing high-performance aircraft. The development of a Flight Test Engineer's Workstation (FTEWS) is presented, with a detailed description of the system, technical details, and future planned developments. The goal of the FTEWS is to provide flight test engineers and project officers with an automated computer environment for planning, scheduling, and performing flight test programs. The FTEWS system is an outgrowth of the development of ATMS and is an implementation of a component of ATMS on SUN workstations.

  9. Reliability analysis of the solar array based on Fault Tree Analysis

    International Nuclear Information System (INIS)

    Wu Jianing; Yan Shaoze

    2011-01-01

    The solar array is an important device used in the spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launches. This paper analyzes the reliability of the mechanical system and certifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of the fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and the reliability growth planning.
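
    The quantification step described above (Boolean reduction of the top event, then computing its probability) can be sketched for independent basic events via inclusion-exclusion over minimal cut sets. The cut sets and probabilities below are invented for illustration; they are not the DFH-3 solar-array data.

```python
from itertools import combinations

def top_event_probability(cut_sets, p):
    """Exact P(top) by inclusion-exclusion over minimal cut sets.

    cut_sets : list of sets of basic-event names
    p        : dict mapping basic-event name -> failure probability
    Assumes independent basic events.
    """
    total = 0.0
    n = len(cut_sets)
    for k in range(1, n + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(cut_sets, k):
            events = set().union(*combo)  # AND of the chosen cut sets
            prob = 1.0
            for e in events:
                prob *= p[e]
            total += sign * prob
    return total

# Example: top = A OR (B AND C)
cut_sets = [{"A"}, {"B", "C"}]
p = {"A": 0.01, "B": 0.05, "C": 0.05}
print(top_event_probability(cut_sets, p))  # 0.01 + 0.0025 - 0.000025 = 0.012475
```

    Structure importance measures like SI can then be obtained by perturbing one basic-event probability at a time and observing the change in the top-event probability.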

  10. Reliability analysis of the solar array based on Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wu Jianing; Yan Shaoze, E-mail: yansz@mail.tsinghua.edu.cn [State Key Laboratory of Tribology, Department of Precision Instruments and Mechanology, Tsinghua University,Beijing 100084 (China)

    2011-07-19

    The solar array is an important device used in the spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launches. This paper analyzes the reliability of the mechanical system and certifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of the fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and the reliability growth planning.

  11. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    Science.gov (United States)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
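
    The reliability computation referred to above is typically a first-order reliability method (FORM) search for the most probable failure point in standard normal space. A minimal sketch of the Hasofer-Lind-Rackwitz-Fiessler iteration follows; the linear limit state is an invented stand-in, not the paper's fracture-mechanics function.

```python
import math

def g(u):
    # hypothetical limit state in standard normal space;
    # its exact reliability index is 3.0
    return 3.0 - (u[0] + u[1]) / math.sqrt(2.0)

def grad(gfun, u, h=1e-6):
    """Central-difference gradient of gfun at u."""
    out = []
    for i in range(len(u)):
        up, um = list(u), list(u)
        up[i] += h
        um[i] -= h
        out.append((gfun(up) - gfun(um)) / (2 * h))
    return out

def form_beta(gfun, ndim=2, iters=50):
    """HL-RF iteration: returns the reliability index beta = |u*|."""
    u = [0.0] * ndim
    for _ in range(iters):
        gv = gfun(u)
        gr = grad(gfun, u)
        norm2 = sum(c * c for c in gr)
        scale = (sum(c * ui for c, ui in zip(gr, u)) - gv) / norm2
        u = [scale * c for c in gr]
    return math.sqrt(sum(ui * ui for ui in u))

beta = form_beta(g)
pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # Pf = Phi(-beta)
print(beta, pf)
```

    The Kuhn-Tucker view in the title is the same computation: beta is the minimum distance from the origin to the limit-state surface, a constrained optimization whose stationarity conditions the HL-RF update approximates.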

  12. Test-retest reliability of trunk accelerometric gait analysis

    DEFF Research Database (Denmark)

    Henriksen, Marius; Lund, Hans; Moe-Nilssen, R

    2004-01-01

    The purpose of this study was to determine the test-retest reliability of a trunk accelerometric gait analysis in healthy subjects. Accelerations were measured during walking using a triaxial accelerometer mounted on the lumbar spine of the subjects. Six men and 14 women (mean age 35.2; range 18...... a definite potential in clinical gait analysis....

  13. 76 FR 21775 - Notice of Issuance of Final Determination Concerning Certain Office Workstations

    Science.gov (United States)

    2011-04-18

    ... Ethospace office workstations both feature ``frame-and-tile'' construction, which consists of a sturdy steel... respect to the frames, Herman Miller staff roll form rolled steel (coils) from a domestic source into....-sourced tiles, frames, connectors, finished ends, work surfaces, flipper door unit, shelf, task lights...

  14. Design of the HANARO operator workstation having the enhanced usability and data handling capability

    International Nuclear Information System (INIS)

    Kim, M. J.; Kim, Y. K.; Jung, H. S.; Choi, Y. S.; Woo, J. S.; Jeon, B. J.

    2003-01-01

    As a first step to the upgrade plan of the HANARO reactor control computer system, we furnished IBM workstation class PC to replace the existing operator workstation, the dedicated HMI console. Also designed is the new human-machine interface by using the commercial HMI development software that is operating on the MS-Windows. We expect that we would not have any more difficulties in preparing replacement parts and providing maintenance of hardware. In this paper, we introduce the features of new interface, which adopted the virtue of the existing design and enabled the safe and efficient reactor operation by correcting the demerits. Also described are the functionality of historian server that provides the simpler storage, retrieval and search operation and the design of trend display screen that replaces the existing chart recorder by using the dual monitor feature of PC graphic card

  15. Reliability analysis and initial requirements for FC systems and stacks

    Science.gov (United States)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
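
    The example configuration at the end of the abstract (5 series sets of 5 parallel stacks with Weibull-distributed failures) can be sketched with a small Monte Carlo model. Partial-failure states and failure predictability are ignored here, and the Weibull parameters are assumed values, so this only illustrates the series-parallel structure, not Wärtsilä's full dynamic fault tree model.

```python
import math
import random

SHAPE, SCALE = 2.0, 10.0  # assumed Weibull parameters (arbitrary time units)

def system_alive(t, rng):
    """True if every series group has at least one surviving stack at time t."""
    for _ in range(5):  # 5 groups in series
        lives = [rng.weibullvariate(SCALE, SHAPE) for _ in range(5)]
        if max(lives) <= t:  # whole parallel group of 5 stacks is dead
            return False
    return True

def reliability(t, n=50_000, seed=7):
    """Monte Carlo estimate of the system reliability R(t)."""
    rng = random.Random(seed)
    return sum(system_alive(t, rng) for _ in range(n)) / n

t = 8.0
r_mc = reliability(t)
# analytic check for this non-repairable structure:
r_stack = math.exp(-((t / SCALE) ** SHAPE))
r_exact = (1.0 - (1.0 - r_stack) ** 5) ** 5
print(r_mc, r_exact)
```

    The Monte Carlo form is the useful one in practice, since operating strategies and state-dependent failure rates (functional / partially failed / critically failed) can be added inside the simulation loop where no closed form exists.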

  16. Reliability analysis of maintenance operations for railway tracks

    International Nuclear Information System (INIS)

    Rhayma, N.; Bressolette, Ph.; Breul, P.; Fogli, M.; Saussine, G.

    2013-01-01

    Railway engineering is confronted with problems due to degradation of the railway network, which requires extensive and costly maintenance work. However, because of the lack of knowledge of the geometrical and mechanical parameters of the track, it is difficult to optimize maintenance management. In this context, this paper presents a new methodology for analyzing the behavior of railway tracks. It combines new diagnostic devices, which make it possible to collect a large amount of data and thus to derive statistics on the geometric and mechanical parameters, with a non-intrusive stochastic approach that can be coupled with any mechanical model. Numerical results show the possibilities of this methodology for reliability analysis of different maintenance operations. In the future this approach will give railway managers important information for optimizing maintenance operations using reliability analysis

  17. Distribution System Reliability Analysis for Smart Grid Applications

    Science.gov (United States)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address these reliability concerns, the power utilities and interested parties have spent extensive amounts of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most of the electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and to quantify their proper installation based on the performance of the distribution system, measured by changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of installing Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power distribution package developed by General Reliability Co.
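
    The indices named above follow the standard definitions (IEEE Std 1366 for SAIFI and SAIDI): SAIFI is total customer interruptions divided by customers served, SAIDI is total customer interruption duration divided by customers served, and EUE is the energy not served. A sketch with invented outage records, not data from the IEEE 34-node study:

```python
TOTAL_CUSTOMERS = 1000

# Illustrative outage log:
# (customers interrupted, outage duration in hours, unserved load in kW)
outages = [
    (200, 1.5, 120.0),
    (50,  4.0,  30.0),
    (600, 0.5, 400.0),
]

saifi = sum(n for n, d, l in outages) / TOTAL_CUSTOMERS      # interruptions per customer
saidi = sum(n * d for n, d, l in outages) / TOTAL_CUSTOMERS  # interruption hours per customer
caidi = saidi / saifi                                        # hours per interruption
eue = sum(l * d for n, d, l in outages)                      # kWh not served

print(saifi, saidi, caidi, eue)
```

    Placement studies like the one in the thesis recompute these indices for each candidate switch or DG location, since sectionalizing changes which customers each outage actually reaches.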

  18. Reliability analysis and utilization of PEMs in space application

    Science.gov (United States)

    Jiang, Xiujie; Wang, Zhihua; Sun, Huixian; Chen, Xiaomin; Zhao, Tianlin; Yu, Guanghua; Zhou, Changyi

    2009-11-01

    More and more plastic encapsulated microcircuits (PEMs) are used in space missions to achieve high performance. Since PEMs are designed for use in terrestrial operating conditions, the successful use of PEMs in the harsh space environment is closely tied to reliability issues, which must be considered first. However, there is no ready-made methodology for PEMs in space applications. This paper discusses the reliability of PEMs used in space. The reliability analysis can be divided into five categories: radiation test, radiation hardness, screening test, reliability calculation and reliability assessment. A case study is also presented to illuminate the details of the process, in which a PEM part is used in Double-Star Project, a joint space program between the European Space Agency (ESA) and China. The influence of environmental constraints, including radiation, humidity, temperature and mechanics, on the PEM part has been considered. Both Double-Star Project satellites are still running well in space.

  19. A computer graphics pilot project - Spacecraft mission support with an interactive graphics workstation

    Science.gov (United States)

    Hagedorn, John; Ehrner, Marie-Jacqueline; Reese, Jodi; Chang, Kan; Tseng, Irene

    1986-01-01

    The NASA Computer Graphics Pilot Project was undertaken to enhance the quality control, productivity and efficiency of mission support operations at the Goddard Operations Support Computing Facility. The Project evolved into a set of demonstration programs for graphics-intensive simulated control room operations, particularly in connection with the complex space missions that began in the 1980s. Complex missions mean more data, and graphic displays are a means to reduce the probability of operator errors. Workstations were selected with 1024 x 768 pixel color displays controlled by a custom VLSI chip coupled to an MC68010 chip running UNIX within a shell that permits operations through the medium of mouse-accessed pulldown window menus. The distributed workstations run off a host NAS 8040 computer. Applications of the system for tracking spacecraft orbits and monitoring Shuttle payload handling illustrate the system capabilities, noting the built-in capabilities of shifting the point of view and rotating and zooming in on three-dimensional views of spacecraft.

  20. Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.

    Science.gov (United States)

    Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana

    2017-11-01

    RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or body parts' occlusion. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach objects on workbenches. Collected data are then used to optimize workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automatize the layout design process. The procedure described can be used to automatically suggest new layouts when workers or processes of production change, to adapt layouts to specific workers based on their ways to do the tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
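
    The optimization loop described above can be sketched as a tiny mutation-only evolutionary search over object-to-slot permutations. The fitness function, reach frequencies and slot costs below are invented stand-ins for the paper's multi-criteria ergonomic score derived from RGB-D tracking data.

```python
import random

freq = [30, 5, 12, 20, 8, 25]            # assumed reaches per hour for each object
cost = [1.0, 1.2, 1.5, 2.0, 2.4, 3.0]    # assumed ergonomic cost of each slot

def fitness(perm):
    """Total weighted reach cost of a layout; lower is better.
    perm[slot] = object placed in that slot."""
    return sum(freq[obj] * cost[slot] for slot, obj in enumerate(perm))

def evolve(pop_size=40, gens=60, seed=3):
    """Elitist evolutionary search with swap mutation over permutations."""
    rng = random.Random(seed)
    pop = [rng.sample(range(6), 6) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]       # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(6), 2)     # swap mutation keeps a permutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

    In the paper the fitness evaluation is the expensive part, since it aggregates several ergonomic criteria computed from captured worker movements; the permutation encoding and elitist loop carry over unchanged.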

  1. Reliability analysis for new technology-based transmitters

    Energy Technology Data Exchange (ETDEWEB)

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Charpentier, Dominique [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France)

    2011-02-15

    The reliability analysis of new technology-based transmitters has to deal with specific issues: various interactions between both material elements and functions, undefined behaviours under faulty conditions, several transmitted data, and little reliability feedback. To handle these particularities, a '3-step' model is proposed, based on goal tree-success tree (GTST) approaches to represent both the functional and material aspects, and includes the faults and failures as a third part for supporting reliability analyses. The behavioural aspects are provided by relationship matrices, also denoted master logic diagrams (MLD), with stochastic values which represent direct relationships between system elements. Relationship analyses are then proposed to assess the effect of any fault or failure on any material element or function. Taking these relationships into account, the probabilities of malfunction and failure modes are evaluated according to time. Furthermore, uncertainty analyses tend to show that even if the input data and system behaviour are not well known, these previous results can be obtained in a relatively precise way. An illustration is provided by a case study on an infrared gas transmitter. These properties make the proposed model and corresponding reliability analyses especially suitable for intelligent transmitters (or 'smart sensors').

  2. Reliability on intra-laboratory and inter-laboratory data of hair mineral analysis comparing with blood analysis.

    Science.gov (United States)

    Namkoong, Sun; Hong, Seung Phil; Kim, Myung Hwa; Park, Byung Cheol

    2013-02-01

    Nowadays, institutions utilize hair mineral analysis although its clinical value remains controversial. Arguments about the reliability of hair mineral analysis persist, and there have been evaluations of commercial laboratories performing hair mineral analysis. The objective of this study was to assess the reliability of intra-laboratory and inter-laboratory data at three commercial laboratories conducting hair mineral analysis, compared to serum mineral analysis. Two divided hair samples taken from near the scalp were submitted for analysis at the same time, to all laboratories, from one healthy volunteer. Each laboratory sent a report consisting of quantitative results and their interpretation of health implications. Differences among intra-laboratory and inter-laboratory data were analyzed using SPSS version 12.0 (SPSS Inc., USA). All the laboratories used identical methods for quantitative analysis, and they generated consistent numerical results according to Friedman analysis of variance. However, the normal reference ranges of each laboratory varied; as such, each laboratory interpreted the patient's health differently. On intra-laboratory data, Wilcoxon analysis suggested that the laboratories generated relatively coherent data, but laboratory B failed to do so for one element, so its reliability was doubtful. In comparison with the blood test, laboratory C generated identical results, but laboratories A and B did not. Hair mineral analysis has limitations in the reliability of inter- and intra-laboratory analysis compared with blood analysis. As such, clinicians should be cautious when applying hair mineral analysis as an ancillary tool, and each laboratory included in this study requires continuous refinement to establish standardized normal reference ranges.

  3. Construct validity and reliability of a checklist for volleyball serve analysis

    Directory of Open Access Journals (Sweden)

    Cicero Luciano Alves Costa

    2018-03-01

    Full Text Available This study aims to investigate the construct validity and reliability of a checklist for qualitative analysis of the overhand serve in volleyball. Fifty-five male subjects aged 13-17 years participated in the study. The overhand serve was analyzed using the checklist proposed by Meira Junior (2003), which analyzes the pattern of serve movement in four phases: (I) initial position, (II) ball lifting, (III) ball attacking, and (IV) finalization. Construct validity was analyzed using confirmatory factor analysis and reliability through Cronbach's alpha coefficient. The construct validity was supported by confirmatory factor analysis, with the RMSEA (0.037 [90% confidence interval = 0.020-0.040]), CFI (0.970) and TLI (0.950) results indicating good fit of the model. In relation to reliability, Cronbach's alpha coefficient was 0.661, a value considered acceptable. Among the items on the checklist, ball lifting and ball attacking showed the highest factor loadings, 0.69 and 0.99, respectively. In summary, the checklist of Meira Junior (2003) for the qualitative analysis of the overhand serve can be considered a valid and reliable instrument for use in research in the field of Sports Sciences.

  4. Reliability analysis of nuclear containment without metallic liners against jet aircraft crash

    Energy Technology Data Exchange (ETDEWEB)

    Siddiqui, N.A.; Iqbal, M.A.; Abbas, H. E-mail: abbas_husain@hotmail.com; Paul, D.K

    2003-09-01

    The present study presents a methodology for detailed reliability analysis of nuclear containment without metallic liners against aircraft crash. For this purpose, a nonlinear limit state function has been derived using violation of tolerable crack width as failure criterion. This criterion has been considered as failure criterion because radioactive radiations may come out if size of crack becomes more than the tolerable crack width. The derived limit state uses the response of containment that has been obtained from a detailed dynamic analysis of nuclear containment under an impact of a large size Boeing jet aircraft. Using this response in conjunction with limit state function, the reliabilities and probabilities of failures are obtained at a number of vulnerable locations employing an efficient first-order reliability method (FORM). These values of reliability and probability of failure at various vulnerable locations are then used for the estimation of conditional and annual reliabilities of nuclear containment as a function of its location from the airport. To study the influence of the various random variables on containment reliability the sensitivity analysis has been performed. Some parametric studies have also been included to obtain the results of field and academic interest.

  5. [Reliability theory based on quality risk network analysis for Chinese medicine injection].

    Science.gov (United States)

    Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui

    2014-08-01

    A new risk analysis method based upon reliability theory was introduced in this paper for the quality risk management of Chinese medicine injection manufacturing plants. The risk events including both cause and effect ones were derived in the framework as nodes with a Bayesian network analysis approach. It thus transforms the risk analysis results from failure mode and effect analysis (FMEA) into a Bayesian network platform. With its structure and parameters determined, the network can be used to evaluate the system reliability quantitatively with probabilistic analytical appraoches. Using network analysis tools such as GeNie and AgenaRisk, we are able to find the nodes that are most critical to influence the system reliability. The importance of each node to the system can be quantitatively evaluated by calculating the effect of the node on the overall risk, and minimization plan can be determined accordingly to reduce their influences and improve the system reliability. Using the Shengmai injection manufacturing plant of SZYY Ltd as a user case, we analyzed the quality risk with both static FMEA analysis and dynamic Bayesian Network analysis. The potential risk factors for the quality of Shengmai injection manufacturing were identified with the network analysis platform. Quality assurance actions were further defined to reduce the risk and improve the product quality.
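
    The Bayesian-network step above (computing system-level risk from node probabilities) can be sketched by exact enumeration on a toy network. The structure and conditional probabilities here are invented for illustration; the Shengmai injection network in the paper is far larger and was built with tools such as GeNIe and AgenaRisk.

```python
from itertools import product

# Two independent cause nodes and one effect node ("quality failure")
# with a full conditional probability table.
p_contam = 0.02  # assumed P(raw-material contamination)
p_temp = 0.05    # assumed P(sterilization temperature deviation)

# P(quality failure | contamination, temperature deviation)
p_fail = {
    (True, True): 0.95,
    (True, False): 0.60,
    (False, True): 0.30,
    (False, False): 0.01,
}

def marginal_failure():
    """P(failure) by enumerating all parent states."""
    total = 0.0
    for c, t in product([True, False], repeat=2):
        pc = p_contam if c else 1 - p_contam
        pt = p_temp if t else 1 - p_temp
        total += pc * pt * p_fail[(c, t)]
    return total

print(marginal_failure())
```

    Node importance, as described in the abstract, can be probed the same way: lower one cause probability (simulating a mitigation action), re-enumerate, and compare the change in the marginal failure probability.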

  6. Application of Fault Tree Analysis for Estimating Temperature Alarm Circuit Reliability

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.; El-Shanshoury, G.I.

    2011-01-01

    Fault Tree Analysis (FTA) is one of the most widely used methods in system reliability analysis. It is a graphical technique that provides a systematic description of the combinations of possible occurrences in a system which can result in an undesirable outcome. The presented paper deals with the application of the FTA method in analyzing a temperature alarm circuit. The critical failure of this circuit is failing to alarm when the temperature exceeds a certain limit. To establish that the circuit is safe, a detailed analysis of the faults causing circuit failure is performed by constructing a fault tree diagram (qualitative analysis). Circuit quantitative reliability parameters such as the Failure Rate (FR) and Mean Time Between Failures (MTBF) are also calculated using the Relex 2009 computer program. Benefits of FTA include assessing system reliability or safety during operation, improving understanding of the system, and identifying root causes of equipment failures.
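
    For a series system like an alarm circuit under the usual constant-failure-rate assumption, part failure rates add, MTBF is the reciprocal of the total rate, and mission reliability is exponential in time. A sketch with assumed part failure rates, not the Relex 2009 results:

```python
import math

# Assumed part failure rates in failures per 10^6 hours; the circuit
# fails to alarm if any of these parts fails (series reliability model).
parts = {
    "thermistor": 2.0,
    "comparator": 0.8,
    "relay": 5.0,
    "buzzer": 1.2,
}

lam = sum(parts.values()) * 1e-6       # total failure rate (per hour)
mtbf = 1.0 / lam                       # mean time between failures (hours)
r_1yr = math.exp(-lam * 8760.0)        # reliability over one year of operation

print(lam, mtbf, r_1yr)
```

    The fault tree adds value beyond this arithmetic when the logic is not purely series, e.g. redundant sensing channels appear as AND gates whose cut-set probabilities multiply instead of adding.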

  7. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...

  8. The effectiveness of domain balancing strategies on workstation clusters demonstrated by viscous flow problems

    NARCIS (Netherlands)

    Streng, Martin; Streng, M.; ten Cate, Eric; ten Cate, Eric (H.H.); Geurts, Bernardus J.; Kuerten, Johannes G.M.

    1998-01-01

    We consider several aspects of efficient numerical simulation of viscous compressible flow on both homogeneous and heterogeneous workstation-clusters. We consider dedicated systems, as well as clusters operating in a multi-user environment. For dedicated homogeneous clusters, we show that with

  9. Micro machining workstation for a diode pumped Nd:YAG high brightness laser system

    NARCIS (Netherlands)

    Kleijhorst, R.A.; Offerhaus, Herman L.; Bant, P.

    1998-01-01

    A Nd:YAG micro-machining workstation that allows cutting on a scale of a few microns has been developed and operated. The system incorporates a telescope viewing system that allows control during the work and a software interface to translate AutoCad files. Some examples of the performance are

  10. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Science.gov (United States)

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
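
    For the simplest single-facet (persons x trials) crossed design, G theory estimates variance components from ANOVA mean squares and forms a G coefficient for relative decisions. The sketch below covers only that textbook case, with an invented toy dataset; the ERA Toolbox itself handles richer designs with multiple error sources (groups, event types, unbalanced data).

```python
def g_coefficient(scores):
    """Relative G coefficient for a complete persons x trials design.

    scores[p][t] = score of person p on trial t.
    """
    n_p, n_t = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n_p * n_t)
    p_means = [sum(row) / n_t for row in scores]
    t_means = [sum(scores[p][t] for p in range(n_p)) / n_p for t in range(n_t)]

    # ANOVA sums of squares and mean squares
    ss_p = n_t * sum((m - grand) ** 2 for m in p_means)
    ss_t = n_p * sum((m - grand) ** 2 for m in t_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ms_p = ss_p / (n_p - 1)
    ms_res = (ss_tot - ss_p - ss_t) / ((n_p - 1) * (n_t - 1))

    # expected-mean-squares solutions for the variance components
    var_p = max((ms_p - ms_res) / n_t, 0.0)  # person (universe-score) variance
    var_res = ms_res                         # residual (p x t interaction + error)
    return var_p / (var_p + var_res / n_t)   # E(rho^2) for the mean of n_t trials

print(g_coefficient([[1.0, 2.0], [2.0, 4.0]]))
```

    Averaging over more trials shrinks the error term var_res / n_t, which is exactly how the toolbox quantifies the effect of the number of retained trials on ERP score reliability.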

  11. Validation of COG10 and ENDFB6R7 on the Auk Workstation for General Application to Highly Enriched Uranium Systems

    Energy Technology Data Exchange (ETDEWEB)

    Percher, Catherine G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-08-08

    The COG 10 code package on the Auk workstation is now validated with the ENDFB6R7 neutron cross-section library for general application to highly enriched uranium (HEU) systems, by comparison of the calculated k-effective with the expected k-effective of several relevant experimental benchmarks. This validation supplements the installation and verification of COG 10 on the Auk workstation.

  12. THE RELIABILITY ANALYSIS OF EXISTING REINFORCED CONCRETE PILES IN PERMAFROST REGIONS

    Directory of Open Access Journals (Sweden)

    Vladimir S. Utkin

    2017-06-01

    Full Text Available The article describes the general problem of safe operation of buildings and structures given the dynamics of permafrost in Russia and other countries. Global warming will lead to large-scale disasters such as failures of buildings and structures, the main reason being a reduction in the bearing capacity and reliability of foundations. To prevent such accidents and reduce their negative consequences, it is necessary to organize observations (monitoring) of the process of decreasing foundation bearing capacity and to develop preventive measures and operational methods for pile reliability analysis. The main load-bearing elements of the foundation are reinforced concrete piles and frozen ground. Reinforced concrete piles tend to lose bearing capacity and reliability both in the upper (aerial) part and in the part embedded in the soil. The article discusses the reliability analysis of the upper part of existing reinforced concrete piles in permafrost regions, where piles degrade in the contact zone of seasonal thawing and freezing soil. The evaluation of the probability of failure is important in itself, and also for the reliability of the foundation as a whole, consisting of piles and frozen soil. The authors offer methods for the reliability analysis of the upper part of reinforced concrete piles in the contact zone with seasonally thawed soil under different numbers of random variables (fuzzy variables) in the design mathematical model of a limit state based on the strength criterion.

  13. Performance assessment of advanced engineering workstations for fuel management applications

    International Nuclear Information System (INIS)

    Turinsky, P.J.

    1989-07-01

    The purpose of this project was to assess the performance of advanced engineering workstations [AEWs] with regard to applications in in-core fuel management for LWRs. The attributes of most interest, which define an AEW, are parallel computational hardware and graphics capabilities. The AEWs employed were super-microcomputers manufactured by MASSCOMP, Inc. These computers utilize a 32-bit architecture, a graphics co-processor, multiple CPUs [up to six] attached to common memory, and multiple vector accelerators. 7 refs., 33 figs., 4 tabs

  14. Reliability analysis of the service water system of Angra 1 reactor

    International Nuclear Information System (INIS)

    Tayt-Sohn, L.C.; Oliveira, L.F.S. de.

    1984-01-01

    A reliability analysis of the service water system is performed, aiming at its use in evaluating the unreliability of the Component Cooling System (SRC) in large loss-of-coolant accidents in nuclear power plants. (E.G.) [pt

  15. Reliability analysis of the service water system of Angra 1 reactor

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.

    1983-01-01

    A reliability analysis of the service water system is performed, aiming at its use in evaluating the unreliability of the component cooling system (SRC) in large loss-of-coolant accidents in nuclear power plants. (E.G.) [pt

  16. Performance of the coupled thermalhydraulics/neutron kinetics code R/P/C on workstation clusters and multiprocessor systems

    International Nuclear Information System (INIS)

    Hammer, C.; Paffrath, M.; Boeer, R.; Finnemann, H.; Jackson, C.J.

    1996-01-01

    The light water reactor core simulation code PANBOX has been coupled with the transient analysis code RELAP5 for the purpose of performing plant safety analyses with a three-dimensional (3-D) neutron kinetics model. The system has been parallelized to improve the computational efficiency. The paper describes the features of this system with emphasis on performance aspects. Performance results are given for different types of parallelization, i.e., using an automatic parallelizing compiler, using the portable PVM platform on a workstation cluster, using PVM on a shared-memory multiprocessor, and using machine-dependent interfaces. (author)

  17. Effects of dynamic workstation Oxidesk on acceptance, physical activity, mental fitness and work performance

    NARCIS (Netherlands)

    Groenesteijn, L.; Commissaris, D.A.C.M.; Berg-Zwetsloot, M. van den; Hiemstra-Van Mastrigt, S.

    2016-01-01

    BACKGROUND: Working in an office environment is characterised by physical inactivity and sedentary behaviour. This behaviour contributes to several health risks in the long run. Dynamic workstations, which allow people to combine desk activities with physical activity, may contribute to prevention of these health risks.

  18. A study in the reliability analysis method for nuclear power plant structures (I)

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Byung Hwan; Choi, Seong Cheol; Shin, Ho Sang; Yang, In Hwan; Kim, Yi Sung; Yu, Young; Kim, Se Hun [Seoul, Nationl Univ., Seoul (Korea, Republic of)

    1999-03-15

    Nuclear power plant structures may be exposed to aggressive environmental effects that cause their strength and stiffness to decrease over their service life. Although the physics of these damage mechanisms is reasonably well understood and quantitative evaluation of their effects on time-dependent structural behavior is possible in some instances, such evaluations are generally very difficult and remain novel. The assessment of existing steel containments in nuclear power plants for continued service must provide quantitative evidence that they are able to withstand future extreme loads during a service period with an acceptable level of reliability. Rational methodologies for this reliability assessment can be developed from mechanistic models of structural deterioration, using time-dependent structural reliability analysis to take loading and strength uncertainties into account. The final goal of this study is to develop an analysis method for the reliability of containment structures. The cause and mechanism of corrosion are first clarified, and the reliability assessment method is established. By introducing the equivalent normal distribution, a reliability analysis procedure that can determine failure probabilities has been established. The influence of design variables on reliability and the relation between reliability and service life will be examined in the second-year research.

  19. Reliability Analysis of Free Jet Scour Below Dams

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2012-12-01

    Full Text Available Current formulas for calculating scour depth below a free overfall are mostly deterministic in nature and do not adequately consider the uncertainties of the various scouring parameters. A reliability-based assessment of scour, taking into account the uncertainties of the parameters and coefficients involved, should therefore be performed. This paper studies the reliability of a dam foundation under the threat of scour. A model for calculating the reliability of scour and estimating the probability of failure of the dam foundation subjected to scour is presented. The Maximum Entropy Method is applied to construct the probability density function (PDF) of the performance function subject to the moment constraints. Monte Carlo simulation (MCS) is applied for uncertainty analysis. An example is considered, the reliability of its scour is computed, and the influence of the various random variables on the probability of failure is analyzed.
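
    The MCS step of such an assessment can be sketched in a few lines. The limit state, distributions, and parameter values below are hypothetical placeholders, not the ones used in the paper: the performance function is taken as allowable scour depth (capacity) minus computed scour depth (demand), both lognormal.

```python
import math
import random

def monte_carlo_failure_probability(g, sample, n=200_000, seed=42):
    """Estimate Pf = P(g(X) < 0) by crude Monte Carlo simulation."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g(sample(rng)) < 0)
    return failures / n

# Hypothetical limit state: allowable scour depth (capacity) minus
# computed scour depth (demand), both lognormally distributed.
def sample(rng):
    capacity = rng.lognormvariate(math.log(10.0), 0.1)  # metres
    demand = rng.lognormvariate(math.log(8.0), 0.2)     # metres
    return capacity, demand

def g(x):
    capacity, demand = x
    return capacity - demand

pf = monte_carlo_failure_probability(g, sample)
print(f"estimated probability of failure: {pf:.4f}")
```

For independent lognormal capacity and demand the estimate can be checked against the closed-form normal solution for ln C − ln D, which is one way to verify the sampler before moving to the paper's more general performance functions.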

  20. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we established a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  1. Design and Analysis of Transport Protocols for Reliable High-Speed Communications

    NARCIS (Netherlands)

    Oláh, A.

    1997-01-01

    The design and analysis of transport protocols for reliable communications constitutes the topic of this dissertation. These transport protocols guarantee the sequenced and complete delivery of user data over networks which may lose, duplicate and reorder packets. Reliable transport services are

  2. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including: quality and reliability; reliability data; the importance of reliability engineering; reliability measures; the Poisson process (goodness-of-fit tests and the Poisson arrival model); reliability estimation (e.g., for the exponential distribution); reliability of systems; availability; preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance, and periodic inspection); analysis of common cause failures; and models of repair effects.
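
    Two of the basic quantities listed above, reliability under a constant failure rate and steady-state availability, reduce to one-line formulas. The sketch below uses hypothetical failure-rate and repair-time numbers purely for illustration.

```python
import math

def reliability_exponential(failure_rate, t):
    """R(t) = exp(-lambda * t) for a constant failure rate lambda."""
    return math.exp(-failure_rate * t)

def steady_state_availability(mtbf, mttr):
    """A = MTBF / (MTBF + MTTR) for a repairable component."""
    return mtbf / (mtbf + mttr)

# Hypothetical component: failure rate 1e-4 per hour,
# MTBF 10,000 h, mean time to repair 24 h.
print(f"R(1000 h) = {reliability_exponential(1e-4, 1000):.4f}")
print(f"A         = {steady_state_availability(10_000, 24):.4f}")
```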

  3. Effects of dynamic workstation Oxidesk on acceptance, physical activity, mental fitness and work performance.

    Science.gov (United States)

    Groenesteijn, L; Commissaris, D A C M; Van den Berg-Zwetsloot, M; Hiemstra-Van Mastrigt, S

    2016-07-19

    Working in an office environment is characterised by physical inactivity and sedentary behaviour. This behaviour contributes to several health risks in the long run. Dynamic workstations, which allow people to combine desk activities with physical activity, may contribute to prevention of these health risks. A dynamic workstation, called Oxidesk, was evaluated to determine its possible contribution to healthy behaviour and its impact on perceived work performance. A field test was conducted with 22 office workers employed at a health insurance company in the Netherlands. The Oxidesk was well accepted, was perceived positively for fitness, and the participants maintained their work performance. Physical activity was lower than the activity level required by the Dutch guidelines for sufficient physical activity. Although there was only a slight increase in physical activity, the Oxidesk may be helpful in reducing the health risks involved and seems applicable for introduction to office environments.

  4. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High-performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple-monitor display capability and large, fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region-of-interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  5. A design study investigating augmented reality and photograph annotation in a digitalized grossing workstation

    Directory of Open Access Journals (Sweden)

    Joyce A Chow

    2017-01-01

    Full Text Available Context: Within digital pathology, digitalization of the grossing procedure has been relatively underexplored in comparison to digitalization of pathology slides. Aims: Our investigation focuses on the interaction design of an augmented reality gross pathology workstation and refining the interface so that information and visualizations are easily recorded and displayed in a thoughtful view. Settings and Design: The work in this project occurred in two phases: the first phase focused on implementation of an augmented reality grossing workstation prototype while the second phase focused on the implementation of an incremental prototype in parallel with a deeper design study. Subjects and Methods: Our research institute focused on an experimental and “designerly” approach to create a digital gross pathology prototype as opposed to focusing on developing a system for immediate clinical deployment. Statistical Analysis Used: Evaluation has not been limited to user tests and interviews, but rather key insights were uncovered through design methods such as “rapid ethnography” and “conversation with materials”. Results: We developed an augmented reality enhanced digital grossing station prototype to assist pathology technicians in capturing data during examination. The prototype uses a magnetically tracked scalpel to annotate planned cuts and dimensions onto photographs taken of the work surface. This article focuses on the use of qualitative design methods to evaluate and refine the prototype. Our aims were to build on the strengths of the prototype's technology, improve the ergonomics of the digital/physical workstation by considering numerous alternative design directions, and to consider the effects of digitalization on personnel and the pathology diagnostics information flow from a wider perspective. A proposed interface design allows the pathology technician to place images in relation to its orientation, annotate directly on the

  6. Reliability Analysis of Operation for Cableways by FTA (Fault Tree Analysis Method

    Directory of Open Access Journals (Sweden)

    Sergej Težak

    2010-05-01

    Full Text Available This paper examines the reliability of the operation of cableway systems in Slovenia, which has a major impact on the quality of service in mountain tourism, mainly in wintertime. Different types of cableway installations in Slovenia were captured in a sample, and a fault tree analysis (FTA) was made on the basis of the obtained data. The paper presents the results of the analysis. With these results it is possible to determine the probability of faults for different types of cableways, which types of faults have the greatest impact on the termination of operation, which components of cableways fail most often, and what the impact of cableway age is on the occurrence of faults. Finally, an attempt was made to determine whether the occurrence of faults on an individual cableway installation also affects traffic on that cableway due to reduced quality of service. KEYWORDS: cableways, aerial ropeways, chairlifts, ski-tows, quality, faults, fault tree analysis, reliability, service quality, winter tourism, mountain tourist centre
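
    The core FTA computation, once the tree is built, is the propagation of basic-event probabilities through AND/OR gates. The sketch below uses a hypothetical cableway stoppage tree with invented probabilities and assumes independent basic events; it is not derived from the paper's data.

```python
def or_gate(probs):
    """Probability that at least one input event occurs (independence assumed)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """Probability that all input events occur (independence assumed)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical stoppage tree: operation stops if the drive fails OR
# (grid power is lost AND the backup engine fails to start).
p_drive, p_grid, p_backup = 0.02, 0.05, 0.10
p_top = or_gate([p_drive, and_gate([p_grid, p_backup])])
print(f"top event probability: {p_top:.4f}")
```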

  7. Reliability Analysis Of Fire System On The Industry Facility By Use Fameca Method

    International Nuclear Information System (INIS)

    Sony T, D.T.; Situmorang, Johnny; Ismu W, Puradwi; Demon H; Mulyanto, Dwijo; Kusmono, Slamet; Santa, Sigit Asmara

    2000-01-01

    FAMECA is one of the analysis methods for determining system reliability in an industrial facility. The analysis follows a procedure of identifying component functions and determining failure modes, severity levels, and the effects of failure. The reliability value is determined from a combination of three factors: severity level, component failure value, and component criticality. A reliability analysis has been performed for the fire system of the facility by the FAMECA method. The critical components identified include the pump, air release valve, check valve, manual test valve, isolation valve, control system, etc.

  8. The quantitative failure of human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C.T.

    1995-07-01

    This philosophical treatise argues the merits of Human Reliability Analysis (HRA) in the context of the nuclear power industry. Actually, the author attacks historic and current HRA as having failed in informing policy makers who make decisions based on risk that humans contribute to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, which advocates a systems analysis process that employs cogent heuristics when using opinion, and tempers itself with a rational debate over the weight given subjective and empirical probabilities.
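
    The Bayesian, fact-based updating the author advocates is conveniently illustrated with a conjugate Beta-Binomial model, in which an opinion-based prior human error probability (HEP) is tempered by observed evidence. The prior parameters and observation counts below are invented for illustration, not taken from the treatise.

```python
def beta_update(alpha, beta, errors, demands):
    """Conjugate Beta-Binomial update: prior Beta(alpha, beta) on the
    HEP, tempered by `errors` observed in `demands` task demands."""
    return alpha + errors, beta + (demands - errors)

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Hypothetical opinion-based prior with mean HEP 0.01.
alpha0, beta0 = 1.0, 99.0
# Hypothetical evidence: 2 errors observed in 300 demands.
alpha1, beta1 = beta_update(alpha0, beta0, errors=2, demands=300)
print(f"prior mean HEP:     {beta_mean(alpha0, beta0):.4f}")
print(f"posterior mean HEP: {beta_mean(alpha1, beta1):.4f}")
```

The posterior mean sits between the expert prior and the raw observed rate, which is exactly the tempering of subjective opinion by empirical data that the author argues for.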

  9. Reliability analysis of HVDC grid combined with power flow simulations

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yongtao; Langeland, Tore; Solvik, Johan [DNV AS, Hoevik (Norway); Stewart, Emma [DNV KEMA, Camino Ramon, CA (United States)

    2012-07-01

    Based on a DC grid power flow solver and the proposed GEIR, we carried out a reliability analysis for a HVDC grid test system proposed by CIGRE working group B4-58, where the failure statistics were collected from a literature survey. The proposed methodology is used to evaluate the impact of converter configuration on the overall reliability performance of the HVDC grid, where the symmetrical monopole configuration is compared with the bipole configuration with metallic return wire. The results quantify the improvement in reliability obtained by using the latter alternative. (orig.)

  10. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis covering data obtained by computer simulation of the neutron transport process, using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been accomplished. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size, in order to obtain reliable results while limiting computation time. (author). 5 refs, 8 figs
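
    The sample-size trade-off studied here follows from the 1/sqrt(N) convergence of Monte Carlo tallies. The sketch below uses a hypothetical shield transmission probability to show the relative standard error of a Bernoulli tally shrinking as the sample size grows.

```python
import random

def estimate_with_error(p_true, n, seed=0):
    """Bernoulli Monte Carlo tally: returns the estimate and its
    relative standard error, which shrinks as 1/sqrt(n)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.random() < p_true)
    p_hat = hits / n
    rel_err = (p_hat * (1.0 - p_hat) / n) ** 0.5 / p_hat
    return p_hat, rel_err

# Hypothetical transmission probability through a shield.
p_true = 0.05
rel_errs = {}
for n in (1_000, 10_000, 100_000):
    p_hat, rel_err = estimate_with_error(p_true, n)
    rel_errs[n] = rel_err
    print(f"n={n:>7}: estimate={p_hat:.4f}, relative std. error={rel_err:.3f}")
```

Each hundredfold increase in sample size buys roughly a tenfold reduction in relative error, which is the quantitative basis for choosing a sample size that balances reliability of the result against computation time.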

  11. An exact method for solving logical loops in reliability analysis

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi

    2009-01-01

    This paper presents an exact method for solving logical loops in reliability analysis. Systems that include logical loops are usually described by simultaneous Boolean equations. First, a basic rule for solving simultaneous Boolean equations is presented. Next, the analysis procedures for a three-component system with external supports are shown. Third, more detailed discussions are given on the establishment of the logical loop relation. Finally, two typical structures which include more than one logical loop are taken up; their analysis results and corresponding GO-FLOW charts are given. The proposed analytical method is applicable to loop structures that can be described by simultaneous Boolean equations, and it is very useful in evaluating the reliability of complex engineering systems.
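
    The paper's exact method is not reproduced here, but the underlying problem can be illustrated: a logical loop is a set of simultaneous Boolean equations, and a conservative solution is the least fixed point reached by iterating from all-False. The two-component system with external supports below is a hypothetical example, not one of the paper's cases.

```python
def solve_boolean_loop(equations, n_vars):
    """Solve simultaneous Boolean equations by least-fixed-point
    iteration: start with every variable False and re-apply the
    equations until the state stops changing."""
    state = [False] * n_vars
    while True:
        new_state = [eq(state) for eq in equations]
        if new_state == state:
            return state
        state = new_state

# Hypothetical two-component loop with external supports:
# A operates if its supply EA is up or B backs it up, and vice versa.
EA, EB = True, False
equations = [
    lambda s: EA or s[1],  # A = EA OR B
    lambda s: EB or s[0],  # B = EB OR A
]
A, B = solve_boolean_loop(equations, 2)
print(f"A operating: {A}, B operating: {B}")
```

Starting from all-False yields the least fixed point, which rejects the spurious solution in which A and B sustain each other with no external support at all: with EA = EB = False the iteration correctly returns both components failed.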

  12. Damage tolerance reliability analysis of automotive spot-welded joints

    International Nuclear Information System (INIS)

    Mahadevan, Sankaran; Ni Kan

    2003-01-01

    This paper develops a damage tolerance reliability analysis methodology for automotive spot-welded joints under multi-axial and variable amplitude loading history. The total fatigue life of a spot weld is divided into two parts, crack initiation and crack propagation. The multi-axial loading history is obtained from transient response finite element analysis of a vehicle model. A three-dimensional finite element model of a simplified joint with four spot welds is developed for static stress/strain analysis. A probabilistic Miner's rule is combined with a randomized strain-life curve family and the stress/strain analysis result to develop a strain-based probabilistic fatigue crack initiation life prediction for spot welds. Afterwards, the fatigue crack inside the base material sheet is modeled as a surface crack. Then a probabilistic crack growth model is combined with the stress analysis result to develop a probabilistic fatigue crack growth life prediction for spot welds. Both methods are implemented with MSC/NASTRAN and MSC/FATIGUE software, and are useful for reliability assessment of automotive spot-welded joints against fatigue and fracture
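
    The probabilistic Miner's rule step can be sketched as follows. The Basquin-type S-N curve, its lognormal scatter, and the load blocks below are hypothetical stand-ins for the paper's randomized strain-life curve family and vehicle load history.

```python
import math
import random

def miner_damage(load_blocks, fatigue_life, rng):
    """Miner's rule: damage D = sum(n_i / N_i) over stress blocks,
    with N_i drawn from a randomized S-N curve."""
    return sum(n / fatigue_life(s, rng) for s, n in load_blocks)

# Hypothetical Basquin-type S-N curve N = C * S**-m, m = 3, with
# lognormal scatter on the coefficient C (median 1e12, log-std 0.5).
def fatigue_life(s, rng):
    c = rng.lognormvariate(math.log(1e12), 0.5)
    return c * s ** -3.0

load_blocks = [(200.0, 60_000), (150.0, 100_000)]  # (stress MPa, cycles)
rng = random.Random(7)
trials = 20_000
failures = sum(1 for _ in range(trials)
               if miner_damage(load_blocks, fatigue_life, rng) >= 1.0)
pf = failures / trials
print(f"estimated probability of crack initiation: {pf:.3f}")
```

With the scatter removed the deterministic damage for this load history is 0.8175, i.e. below the failure threshold of 1; the scatter in the S-N curve is what produces a non-trivial probability of crack initiation.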

  13. Guided Learning at Workstations about Drug Prevention with Low Achievers in Science Education

    Science.gov (United States)

    Thomas, Heyne; Bogner, Franz X.

    2012-01-01

    Our study focussed on the cognitive achievement potential of low achieving eighth graders, dealing with drug prevention (cannabis). The learning process was guided by a teacher, leading this target group towards a modified learning at workstations which is seen as an appropriate approach for low achievers. We compared this specific open teaching…

  14. Applying reliability analysis to design electric power systems for More-electric aircraft

    Science.gov (United States)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
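
    The path-set analysis mentioned above evaluates system reliability by inclusion-exclusion over minimal path sets. The sketch below applies it to a hypothetical two-path generator-to-bus topology with invented component reliabilities, assuming independent failures; it is not the thesis's actual MEA topology.

```python
from itertools import combinations

def reliability_from_path_sets(path_sets, p):
    """System reliability by inclusion-exclusion over minimal path
    sets, assuming independent component failures."""
    total = 0.0
    for k in range(1, len(path_sets) + 1):
        for combo in combinations(path_sets, k):
            union = set().union(*combo)
            term = 1.0
            for comp in union:
                term *= p[comp]
            total += (-1) ** (k + 1) * term
    return total

# Hypothetical topology: generators G1 and G2 redundantly feed bus B1,
# so the minimal path sets are {G1, B1} and {G2, B1}.
p = {"G1": 0.95, "G2": 0.95, "B1": 0.99}
paths = [{"G1", "B1"}, {"G2", "B1"}]
r = reliability_from_path_sets(paths, p)
print(f"system reliability: {r:.6f}")
```

A constraint-driven topology generator of the kind described would call such an evaluation repeatedly, discarding candidate topologies whose computed reliability falls below the set level.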

  15. Evaluation of total workstation CT interpretation quality: a single-screen pilot study

    Science.gov (United States)

    Beard, David V.; Perry, John R.; Muller, Keith E.; Misra, Ram B.; Brown, P.; Hemminger, Bradley M.; Johnston, Richard E.; Mauro, J. Matthew; Jaques, P. F.; Schiebler, M.

    1991-07-01

    An interpretation report, generated with an electronic viewbox, is affected by two factors: image quality, which encompasses what can be seen on the display, and computer-human interaction (CHI), which accounts for the cognitive-load effect of locating, moving, and manipulating images with the workstation controls. While a number of subject experiments have considered image quality, only recently has the effect of CHI on total interpretation quality been measured. This paper presents the results of a pilot study conducted to evaluate the total interpretation quality of the FilmPlane2.2 radiology workstation for patient folders containing single forty-slice CT studies. First, radiologists interpreted cases and dictated reports using FilmPlane2.2; requisition forms were provided. Film interpretation was provided by the original clinical report and interpretation forms generated from a previous experiment. Second, an evaluator developed a list of findings for each case, based on those listed in all the reports for that case, and then evaluated each report for its response on each finding. Third, the reports were compared to determine how well they agreed with one another. Interpretation speed and observation data were also gathered.

  16. Integrating UNIX workstation into existing online data acquisition systems for Fermilab experiments

    International Nuclear Information System (INIS)

    Oleynik, G.

    1991-03-01

    With the availability of cost-effective computing power from multiple vendors of UNIX workstations, experiments at Fermilab are adding such computers to their VMS-based online data acquisition systems. In anticipation of this trend, we have extended the software products available in our widely used VAXONLINE and PANDA data acquisition software systems to provide support for integrating these workstations into existing distributed online systems. The software packages we are providing pave the way for the smooth migration of applications from the current Data Acquisition Host and Monitoring computers running the VMS operating system to UNIX-based computers of various flavors. We report on software for Online Event Distribution from VAXONLINE and PANDA, integration of Message Reporting Facilities, and a framework under UNIX for experiments to monitor and view the raw event data produced at any level in their DA system. We have developed software that allows host UNIX computers to communicate with intelligent front-end embedded read-out controllers and processor boards running the pSOS operating system. Both RS-232 and Ethernet control paths are supported. This enables calibration and hardware-monitoring applications to be migrated to these platforms. 6 refs., 5 figs

  17. Children and computer use in the home: workstations, behaviors and parental attitudes.

    Science.gov (United States)

    Kimmerly, Lisa; Odell, Dan

    2009-01-01

    This study examines the home computer use of 26 children (aged 6-18) in ten upper middle class families using direct observation, typing tests, questionnaires and semi-structured interviews. The goals of the study were to gather information on how children use computers in the home and to understand how both parents and children perceive this computer use. Large variations were seen in computing skills, behaviors, and opinions, as well as equipment and workstation setups. Typing speed averaged over 40 words per minute for children over 13 years old, and less than 10 words per minute for children younger than 10. The results show that for this sample, Repetitive Stress Injury (RSI) concerns ranked very low among parents, whereas security and privacy concerns ranked much higher. Meanwhile, children's behaviors and workstations were observed to place children in awkward working postures. Photos showing common postures are presented. The greatest opportunity to improve children's work postures appears to be in providing properly-sized work surfaces and chairs, as well as education. Possible explanations for the difference between parental perception of computing risks and the physical reality of children's observed ergonomics are discussed and ideas for further research are proposed.

  18. Reliability of three-dimensional gait analysis in cervical spondylotic myelopathy.

    LENUS (Irish Health Repository)

    McDermott, Ailish

    2010-10-01

    Gait impairment is one of the primary symptoms of cervical spondylotic myelopathy (CSM). Detailed assessment is possible using three-dimensional gait analysis (3DGA), however the reliability of 3DGA for this population has not been established. The aim of this study was to evaluate the test-retest reliability of temporal-spatial, kinematic and kinetic parameters in a CSM population.

  19. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very...

  20. Hybrid Structural Reliability Analysis under Multisource Uncertainties Based on Universal Grey Numbers

    Directory of Open Access Journals (Sweden)

    Xingfa Yang

    2018-01-01

    Full Text Available Nondeterministic parameters of certain distributions are employed to model structural uncertainties, which are usually assumed to be stochastic factors. However, model parameters may not be precisely represented due to some factors in engineering practice, such as lack of sufficient data, data with fuzziness, and unknown-but-bounded conditions. To this end, interval and fuzzy parameters are implemented, and an efficient approach to structural reliability analysis with random-interval-fuzzy hybrid parameters is proposed in this study. Fuzzy parameters are first converted to equivalent random ones based on the equal entropy principle. The 3σ criterion is then employed to transform the equivalent random and the original random parameters into interval variables. In doing this, the hybrid reliability problem is transformed into one with only interval variables, in other words, a nonprobabilistic reliability analysis problem. Nevertheless, the problem of interval extension exists in interval arithmetic, especially for nonlinear systems. Therefore, universal grey mathematics, which can tackle the issue of interval extension, is employed to solve the nonprobabilistic reliability analysis problem. The results show that the proposed method obtains more conservative results for the hybrid structural reliability.

  1. Inclusion of task dependence in human reliability analysis

    International Nuclear Information System (INIS)

    Su, Xiaoyan; Mahadevan, Sankaran; Xu, Peida; Deng, Yong

    2014-01-01

    Dependence assessment among human errors in human reliability analysis (HRA) is an important issue, which includes the evaluation of the dependence among human tasks and the effect of the dependence on the final human error probability (HEP). This paper presents a computational model to handle dependence in human reliability analysis. The aim of the study is to automatically provide conclusions on the overall degree of dependence and calculate the conditional human error probability (CHEP) once the judgments of the input factors are given. The dependence influencing factors are first identified by the experts, and the priorities of these factors are also taken into consideration. Anchors and qualitative labels are provided as guidance for the HRA analyst's judgment of the input factors. The overall degree of dependence between human failure events is calculated based on the input values and the weights of the input factors. Finally, the CHEP is obtained according to a computing formula derived from the technique for human error rate prediction (THERP) method. The proposed method is able to quantify the subjective judgment from the experts and improve the transparency of the HEP evaluation process. Two examples are illustrated to show the effectiveness and the flexibility of the proposed method. - Highlights: • We propose a computational model to handle dependence in human reliability analysis. • The priorities of the dependence influencing factors are taken into consideration. • The overall dependence degree is determined by input judgments and the weights of factors. • The CHEP is obtained according to a computing formula derived from THERP
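
    The computing formula derived from THERP maps a nominal HEP and an assessed dependence level to a conditional HEP. The sketch below implements the standard THERP dependence equations (zero through complete dependence); the nominal HEP value is an illustrative assumption, and the paper's contribution lies in how the dependence level itself is derived from weighted input factors.

```python
def therp_chep(hep, dependence):
    """Conditional human error probability per the THERP dependence
    model: zero, low, moderate, high, or complete dependence."""
    formulas = {
        "zero": lambda p: p,
        "low": lambda p: (1 + 19 * p) / 20,
        "moderate": lambda p: (1 + 6 * p) / 7,
        "high": lambda p: (1 + p) / 2,
        "complete": lambda p: 1.0,
    }
    return formulas[dependence](hep)

hep = 0.003  # illustrative nominal HEP of the dependent task
for level in ("zero", "low", "moderate", "high", "complete"):
    print(f"{level:>9}: CHEP = {therp_chep(hep, level):.4f}")
```

Even low dependence raises a small nominal HEP by more than an order of magnitude, which is why the dependence assessment dominates the final HEP in many HRA studies.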

  2. Infusing Reliability Techniques into Software Safety Analysis

    Science.gov (United States)

    Shi, Ying

    2015-01-01

    Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.

  3. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Directory of Open Access Journals (Sweden)

    Xin-Jia Meng

    2015-01-01

    Full Text Available Multidisciplinary reliability analysis is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a most probable failure point (MPP) search problem of inverse reliability, and then the search for the MPP of multidisciplinary inverse reliability is performed within the framework of CLA-CO. The method improves the MPP search through two elements: one is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses to replace the consistency equality constraint in the system optimization. With these two elements, the proposed method realizes parallel analysis of each discipline and achieves higher computational efficiency. Additionally, there is no difficulty in applying the proposed method to problems with nonnormally distributed variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.

  4. Progress of data processing system in JT-60 utilizing the UNIX-based workstations

    International Nuclear Information System (INIS)

    Sakata, Shinya; Kiyono, Kimihiro; Oshima, Takayuki; Sato, Minoru; Ozeki, Takahisa

    2007-07-01

    The JT-60 data processing system (DPS) has a three-level hierarchy. At the top level of the hierarchy is the JT-60 inter-shot processor (MSP-ISP), a mainframe computer which provides communication with the JT-60 supervisory control system and supervises the internal communication inside the DPS. The middle level of the hierarchy has minicomputers, and the bottom level has the individual diagnostic subsystems, which consist of CAMAC and VME modules. To meet the demand for advanced diagnostics, the DPS has progressed in stages from a three-level hierarchy system, which depended on the processing power of the MSP-ISP, to a two-level hierarchy system, which is a decentralized data processing system (New-DPS) utilizing UNIX-based workstations and network technology. This replacement has been accomplished, and the New-DPS started operation in October 2005. In this report, we describe the development and improvement of the New-DPS, whose functions were decentralized from the MSP-ISP to the UNIX-based workstations. (author)

  5. Internationalization of healthcare applications: a generic approach for PACS workstations.

    Science.gov (United States)

    Hussein, R; Engelmann, U; Schroeter, A; Meinzer, H P

    2004-01-01

    Along with the revolution of information technology and the increasing use of computers worldwide, software providers recognize the emerging need for internationalized, or global, software applications. The importance of internationalization comes from its benefits, such as addressing a broader audience, making software applications more accessible, easier to use, and more flexible to support, and providing users with more consistent information. In addition, some governmental agencies, e.g., in Spain, accept only fully localized software. Although the healthcare communication standards, namely Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7), support wide areas of internationalization, most implementers are still hesitant to support complex languages. This paper describes a generic internationalization approach for Picture Archiving and Communication System (PACS) workstations. The Unicode standard is used to internationalize the application user interface. An encoding converter was developed to encode and decode the data between the rendering module (in Unicode encoding) and the DICOM data (in ISO 8859 encoding). An integration gateway was required to integrate the internationalized PACS components with the different PACS installations. As a pragmatic example, the described approach was applied to the CHILI PACS workstation. The approach has enabled the application to handle the different internationalization aspects transparently, such as supporting complex languages, switching between languages at runtime, and supporting multilingual clinical reports. In healthcare enterprises, internationalized applications play an essential role in supporting a seamless flow of information between heterogeneous multivendor information systems.

  6. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  7. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  8. Reliability test and failure analysis of high power LED packages

    International Nuclear Information System (INIS)

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng

    2011-01-01

    A new type of application specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that test methods and failure criteria are quite different, and rapid reliability assessment standards are urgently needed for the LED industry. An 85 °C/85% RH test at 700 mA is used for 1000 h to compare our LED modules with those of three other vendors: our modules show no visible degradation in optical performance, while the modules of two other vendors show significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT, and optical microscopy are used on the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for the failure analysis and for designing for reliability of the LED packaging. One example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing. (semiconductor devices)

  9. Reliability analysis based on the losses from failures.

    Science.gov (United States)

    Todinov, M T

    2006-04-01

    Conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. 
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
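    The linear-combination result stated in the abstract can be made concrete with a small sketch; the mode probabilities and per-mode losses below are illustrative numbers, not values from the paper:

```python
# Expected loss given failure as a linear combination of the losses of the
# mutually exclusive failure modes, weighted by the conditional
# probabilities with which each mode initiates failure.

def expected_loss_given_failure(mode_probs, mode_losses):
    """mode_probs: conditional P(mode k initiated failure | failure);
    for mutually exclusive, exhaustive modes these must sum to 1."""
    assert abs(sum(mode_probs) - 1.0) < 1e-9
    return sum(p * c for p, c in zip(mode_probs, mode_losses))

# The classical risk estimate is then P(failure) * E[loss | failure]:
p_failure = 0.02                     # hypothetical failure probability
e_loss = expected_loss_given_failure([0.7, 0.2, 0.1], [1e4, 5e4, 2e5])
risk = p_failure * e_loss
print(e_loss, risk)  # -> 37000.0 740.0
```

    As the abstract notes, this risk figure is only the average of the potential losses; it says nothing about their variability.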

  10. Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

    Science.gov (United States)

    Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick

    2009-01-01

    The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decision makers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes the nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience, specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques. The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts.

  11. Analysis of NPP protection structure reliability under impact of a falling aircraft

    International Nuclear Information System (INIS)

    Shul'man, G.S.

    1996-01-01

    A methodology for evaluating the reliability of NPP protection structures under the impact of a falling aircraft is considered. The methodology is based on probabilistic analysis of all potential events. The problem is solved in three stages: determination of loads on structural units, calculation of the local reliability of protection structures under the assigned loads, and estimation of the structure reliability. The proposed methodology may be applied at the NPP design stage and for determining the reliability of existing structures.

  12. Reliability Analysis of Retaining Walls Subjected to Blast Loading by Finite Element Approach

    Science.gov (United States)

    GuhaRay, Anasua; Mondal, Stuti; Mohiuddin, Hisham Hasan

    2018-02-01

    Conventional design methods adopt factors of safety based on practice and experience, which are deterministic in nature. The limit state method, though not completely deterministic, does not take into account the variability of design parameters that are inherently random, such as cohesion and angle of internal friction for soil. Reliability analysis provides a means to bring these variations into the analysis and hence results in a more realistic design. Several studies have been carried out on the reliability of reinforced concrete walls and masonry walls under explosions. Also, reliability analysis of retaining structures against various kinds of failure has been done. However, very few research works are available on the reliability analysis of retaining walls subjected to blast loading. Thus, the present paper considers the effect of the variation of geotechnical parameters when a retaining wall is subjected to blast loading. It is found, however, that the variation of the geotechnical random variables does not have a significant effect on the stability of retaining walls subjected to blast loading.

  13. Sled Tests Using the Hybrid III Rail Safety ATD and Workstation Tables for Passenger Trains

    Science.gov (United States)

    2017-08-01

    The Hybrid III Rail Safety (H3-RS) anthropomorphic test device (ATD) is a crash test dummy developed in the United Kingdom to evaluate abdomen and lower thorax injuries that occur when passengers impact workstation tables during train accidents. The ...

  14. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book on reliability engineering covers the definition and importance of reliability; the development of reliability engineering; the failure rate and the failure probability density function and their types; CFR and the exponential distribution; IFR, the normal distribution, and the Weibull distribution; maintainability and availability; reliability testing and reliability estimation for the exponential, normal, and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.
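    The CFR (exponential) and Weibull cases covered by the book admit closed-form reliability functions; a minimal sketch, with illustrative parameters rather than examples from the book:

```python
import math

# Reliability (survival) functions for two standard life distributions:
# constant failure rate (exponential) and Weibull.

def reliability_exponential(t, lam):
    """R(t) = exp(-lambda * t) for a constant failure rate lambda (CFR)."""
    return math.exp(-lam * t)

def reliability_weibull(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta); shape beta > 1 gives an increasing
    failure rate (IFR), and beta = 1 reduces to the exponential case."""
    return math.exp(-((t / eta) ** beta))

# With beta = 1 and eta = 1/lambda the two functions coincide:
print(reliability_exponential(5.0, 0.1), reliability_weibull(5.0, 1.0, 10.0))
```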

  15. Decision theory, the context for risk and reliability analysis

    International Nuclear Information System (INIS)

    Kaplan, S.

    1985-01-01

    According to this model of the decision process, then, the optimum decision is the option having the largest expected utility. This is the fundamental model of a decision situation. It is necessary to remark that in order for the model to represent a real-life decision situation, it must include all the options present in that situation, including, for example, the option of not deciding--which is itself a decision, although usually not the optimum one. Similarly, it should include the option of delaying the decision while further information is gathered. Both of these options have probabilities, outcomes, impacts, and utilities like any option and should be included explicitly in the decision diagram. The reason for doing a quantitative risk or reliability analysis is always that, somewhere underlying it, there is a decision to be made. The decision analysis therefore always forms the context for the risk or reliability analysis, and this context shapes the form and language of that analysis. Therefore, this section gives a brief review of the well-known decision theory diagram.

  16. Inclusion of fatigue effects in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, Candice D. [Vanderbilt University, Nashville, TN (United States); Mahadevan, Sankaran, E-mail: sankaran.mahadevan@vanderbilt.edu [Vanderbilt University, Nashville, TN (United States)

    2011-11-15

    The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). Thus the objectives of this paper are to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of inclusion of fatigue in HRA methods, and (4) the future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. - Highlights: • We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods; current methods do not explicitly include sleep deprivation effects. • We discuss the difficulties in defining and measuring fatigue. • We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.

  17. Modeling of seismic hazards for dynamic reliability analysis

    International Nuclear Information System (INIS)

    Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.

    1993-01-01

    This paper investigates appropriate indices for seismic hazard curves (SHCs) in seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra belong to the energy class, while power and response spectra and PGAs belong to the power class. These relationships are also investigated by using ground motion records. Through these investigations, we have shown the efficiency of employing the total energy as an index of SHCs, which can be determined in both the time and frequency domains and has less variance than the other indices. In addition, we have proposed a procedure for DRA based on total energy. (author)
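    The claim that total energy can be determined in either the time or the frequency domain rests on Parseval's relation; a small numerical check on a synthetic signal (standing in for a ground-motion record, which the paper uses) illustrates the point:

```python
import numpy as np

# Total energy of a sampled signal computed two ways: directly in the
# time domain, and from its discrete Fourier spectrum via Parseval's
# relation (sum |X_k|^2 = N * sum x_n^2 for numpy's FFT convention).
rng = np.random.default_rng(0)
dt = 0.01                       # sampling interval [s]
a = rng.normal(size=2048)       # synthetic record, illustrative only

energy_time = np.sum(a**2) * dt
A = np.fft.fft(a)
energy_freq = np.sum(np.abs(A)**2) / len(a) * dt

assert np.isclose(energy_time, energy_freq)
```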

  18. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
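    The basic technique the book teaches can be illustrated with a crude Monte Carlo estimate of a failure probability for the textbook limit state g = R − S (resistance minus load), chosen here because the exact answer is available for comparison; all parameter values are illustrative:

```python
import math
import numpy as np

# Crude Monte Carlo estimation of P(R - S < 0) for normal R and S.
rng = np.random.default_rng(42)
n = 200_000
mu_r, sd_r = 5.0, 0.8           # resistance mean and std (illustrative)
mu_s, sd_s = 3.0, 0.6           # load mean and std (illustrative)

r = rng.normal(mu_r, sd_r, n)
s = rng.normal(mu_s, sd_s, n)
pf_mc = np.mean(r - s < 0)      # fraction of sampled failures

# Exact reference: beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2) = 2 here,
# so P_f = Phi(-beta) ~ 0.0228.
beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2))
print(pf_mc, pf_exact)
```

    The standard error of the estimate shrinks as 1/sqrt(n), which is why variance-reduction techniques of the kind the book covers matter for small failure probabilities.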

  19. Summary of component reliability data for probabilistic safety analysis of Korean standard nuclear power plant

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.

    2004-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for the PSA of Korean nuclear power plants. We have performed a study to develop a component reliability database and software for component reliability analysis. Based on this system, we collected component operation data and failure/repair data during plant operation up to 1998 and 2000 for YGN 3,4 and UCN 3,4, respectively. Recently, we upgraded the database by collecting additional data up to 2002 for Korean standard nuclear power plants and performed component reliability analysis and Bayesian analysis again. In this paper, we provide a summary of the component reliability data for the probabilistic safety analysis of the Korean standard nuclear power plant and describe the plant-specific characteristics compared to generic data.

  20. Reliability analysis of a two span floor designed according

    African Journals Online (AJOL)

    user

    deterministic approach, considering both ultimate and serviceability limit states. Reliability analysis of the floor ... loading, strength and stiffness parameters, dimensions .... to show that there is a direct relation between the failure probability (Pf) ...

  1. Reliability analysis and updating of deteriorating systems with subset simulation

    DEFF Research Database (Denmark)

    Schneider, Ronald; Thöns, Sebastian; Straub, Daniel

    2017-01-01

    An efficient approach to reliability analysis of deteriorating structural systems is presented, which considers stochastic dependence among element deterioration. Information on a deteriorating structure obtained through inspection or monitoring is included in the reliability assessment through Bayesian updating. Subset simulation is an efficient and robust sampling-based algorithm suitable for such analyses. The approach is demonstrated in two case studies considering a steel frame structure and a Daniels system subjected to high-cycle fatigue.

  2. Use of COMCAN III in system design and reliability analysis

    International Nuclear Information System (INIS)

    Rasmuson, D.M.; Shepherd, J.C.; Marshall, N.H.; Fitch, L.R.

    1982-03-01

    This manual describes the COMCAN III computer program and its use. COMCAN III is a tool that can be used by the reliability analyst performing a probabilistic risk assessment or by the designer of a system desiring improved performance and efficiency. COMCAN III can be used to determine minimal cut sets of a fault tree, to calculate system reliability characteristics, and to perform qualitative common cause failure analysis.
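    The minimal-cut-set determination that COMCAN III performs can be sketched with the classic MOCUS-style top-down expansion; the small fault tree below is a made-up example, not one taken from the COMCAN manual:

```python
# MOCUS-style top-down expansion: AND gates merge children into one cut
# set, OR gates split a cut set into one per child; supersets are then
# discarded to leave the minimal cut sets.

def minimal_cut_sets(gates, top):
    """gates: {gate_name: ("AND"|"OR", [children])}; names not present
    as keys are basic events."""
    cut_sets = [[top]]
    done = []
    while cut_sets:
        cs = cut_sets.pop()
        gate = next((g for g in cs if g in gates), None)
        if gate is None:                     # only basic events remain
            done.append(frozenset(cs))
            continue
        kind, children = gates[gate]
        rest = [g for g in cs if g != gate]
        if kind == "AND":                    # all children in one set
            cut_sets.append(rest + children)
        else:                                # OR: one new set per child
            cut_sets.extend(rest + [c] for c in children)
    # keep only minimal sets (drop any proper supersets)
    return {cs for cs in done if not any(other < cs for other in done)}

gates = {"TOP": ("OR", ["G1", "C"]),
         "G1":  ("AND", ["A", "B"])}
print(sorted(sorted(cs) for cs in minimal_cut_sets(gates, "TOP")))
# -> [['A', 'B'], ['C']]
```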

  3. Design of a Workstation for People with Upper-Limb Disabilities Using a Brain Computer Interface

    Directory of Open Access Journals (Sweden)

    John E. Muñoz-Cardona

    2013-11-01

    Full Text Available This paper presents the design of a workstation for the workplace inclusion of people with upper-limb disabilities. The system uses a novel brain-computer interface to mediate the user-computer interaction. The objective is to elucidate the functional, technological, ergonomic, and procedural aspects of operating the station, with the aim of removing the barriers that keep people with disabilities from accessing ICT tools and work. We found that ease of access, ergonomics, adaptability, and portability are the most important design criteria for the workstation. A prototype implementation in a workplace environment has a TIR (internal rate of return) estimated at 43%. Finally, we list a typology of services that could be most appropriate for the process of labor inclusion: telemarketing, telesales, telephone surveys, order taking, social assistance in disasters, general information and inquiries, reservations at tourist sites, technical support, emergency, online support, and after-sales services.

  4. Structural systems reliability analysis

    International Nuclear Information System (INIS)

    Frangopol, D.

    1975-01-01

    For an exact evaluation of the reliability of a structure, it appears necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long period of activity. In cases where such studies are missing, the statistical properties formulated here give upper and lower bounds on the reliability. (orig./HP) [de

  5. Reliability analysis of containment isolation systems

    International Nuclear Information System (INIS)

    Pelto, P.J.; Ames, K.R.; Gallucci, R.H.

    1985-06-01

    This report summarizes the results of the Reliability Analysis of Containment Isolation System Project. Work was performed in five basic areas: design review, operating experience review, related research review, generic analysis and plant specific analysis. Licensee Event Reports (LERs) and Integrated Leak Rate Test (ILRT) reports provided the major sources of containment performance information used in this study. Data extracted from LERs were assembled into a computer data base. Qualitative and quantitative information developed for containment performance under normal operating conditions and design basis accidents indicate that there is room for improvement. A rough estimate of overall containment unavailability for relatively small leaks which violate plant technical specifications is 0.3. An estimate of containment unavailability due to large leakage events is in the range of 0.001 to 0.01. These estimates are dependent on several assumptions (particularly on event duration times) which are documented in the report.

  6. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.
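    The kind of objective-function sensitivity discussed above can be checked numerically on a toy problem via the envelope theorem, by which the derivative of the optimal value with respect to a data parameter equals the partial derivative of the objective evaluated at the optimum; the problem below is illustrative only, not one of the paper's case studies:

```python
# Toy problem: minimize f(x; a) = (x - 1)^2 + a*x over x, where a is a
# data parameter. The envelope theorem says dV/da = df/da at x*(a),
# i.e. simply x*(a) here.

def optimum(a):
    return 1 - a / 2                 # x*(a) from df/dx = 0

def value(a):
    x = optimum(a)
    return (x - 1) ** 2 + a * x      # V(a) = f(x*(a); a)

a, h = 0.6, 1e-6
dV_da = (value(a + h) - value(a - h)) / (2 * h)   # finite difference
envelope = optimum(a)                              # envelope-theorem value
assert abs(dV_da - envelope) < 1e-6
print(dV_da, envelope)
```

    The same principle underlies linear-programming sensitivities, where the dual variables play the role of the envelope derivative with respect to the right-hand-side data.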

  7. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  8. Development and Reliability Analysis of HTR-PM Reactor Protection System

    International Nuclear Information System (INIS)

    Li Duo; Guo Chao; Xiong Huasheng

    2014-01-01

    The High Temperature Gas-Cooled Reactor-Pebble bed Module (HTR-PM) digital Reactor Protection System (RPS) is a dedicated system designed and developed according to the HTR-PM NPP protection specifications. To decrease the probability of spurious trips and increase system reliability, the HTR-PM RPS has such features as a framework of four redundant channels, two diverse sub-systems in each channel, and two levels of two-out-of-four logic voters. Reliability analysis of the HTR-PM RPS is based on a fault tree model. The fault tree is built from the HTR-PM RPS Failure Modes and Effects Analysis (FMEA), with particular attention to the sub-tree of the redundant-channel two-out-of-four logic and to the fault tree when one channel is bypassed. The qualitative analysis of the fault tree, such as RPS weaknesses according to minimal cut sets, is summarized in the paper. (author)
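    A two-out-of-four voting arrangement of the kind described above follows the standard k-out-of-n binomial form when channels are assumed independent; a minimal sketch, with an illustrative per-channel probability rather than an HTR-PM figure:

```python
from math import comb

# Probability that at least k of n independent, identical channels
# succeed -- the success probability of a k-out-of-n voter.

def k_out_of_n(k, n, p):
    """P(at least k of n channels succeed), each with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.99                      # per-channel success probability (illustrative)
print(k_out_of_n(2, 4, p))    # probability the 2-out-of-4 voter succeeds
```

    Real voting logic also trades off failure-to-trip against spurious trips, which is one reason the paper analyzes the bypassed-channel configuration separately.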

  9. Method of reliability allocation based on fault tree analysis and fuzzy math in nuclear power plants

    International Nuclear Information System (INIS)

    Chen Zhaobing; Deng Jian; Cao Xuewu

    2005-01-01

    Reliability allocation is a difficult multi-objective optimization problem. It can be applied not only to determine the reliability characteristics of reactor systems, subsystems, and main components but also to improve the design, operation, and maintenance of nuclear plants. Fuzzy mathematics, one of the powerful tools for fuzzy optimization, and fault tree analysis, one of the effective methods of reliability analysis, are applied to the reliability allocation model in this paper to address, respectively, the fuzzy character of some factors and the choice of subsystems. We thus develop a failure rate allocation model on the basis of fault tree analysis and fuzzy mathematics. For the reliability constraint factors, we choose the six most important ones according to the practical needs of the allocation. Selecting subsystems through top-level fault tree analysis avoids allocating reliability to equipment and components that do not require it. In the allocation process, some factors can be calculated or measured quantitatively, while others can only be assessed qualitatively by expert rating. We therefore adopt fuzzy decision-making and pairwise comparison to realize the reliability allocation with the help of fault tree analysis. Finally, the example of the emergency diesel generator's reliability allocation is used to illustrate the model and to show that it is simple and applicable. (authors)
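    For contrast with the fuzzy model above, the simplest form of failure-rate allocation is a weighted apportionment in the spirit of the classic ARINC method, sketched below with made-up weights; the paper's model adds fuzzy weighting of six constraint factors and fault-tree-based subsystem selection on top of this basic idea:

```python
# Weighted failure-rate apportionment for subsystems in series: split a
# system failure-rate target in proportion to subsystem weights, so the
# allocated rates sum back to the target. Weights here are illustrative.

def allocate_failure_rates(lambda_system, weights):
    """Return per-subsystem failure-rate targets (series logic)."""
    total = sum(weights)
    return [lambda_system * w / total for w in weights]

# e.g. allocate a 1e-4 /h system target among three subsystems whose
# weights reflect how hard each is to improve:
rates = allocate_failure_rates(1e-4, [3, 2, 1])
print(rates)
```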

  10. Reliability engineering analysis of ATLAS data reprocessing campaigns

    International Nuclear Information System (INIS)

    Vaniachine, A; Golubkov, D; Karpenko, D

    2014-01-01

    During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, these benefits come at a cost and introduce delays that make performance prediction difficult. Reliability Engineering provides a framework for fundamental understanding of Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking. The throughput doubled in 2011 vs. 2010 reprocessing, then quadrupled in 2012 vs. 2011 reprocessing. We present the Reliability Engineering analysis of ATLAS data reprocessing campaigns, providing the foundation needed to scale the Big Data processing technologies beyond the petascale.

  11. LIF: A new Kriging based learning function and its application to structural reliability analysis

    International Nuclear Information System (INIS)

    Sun, Zhili; Wang, Jian; Li, Rui; Tong, Cao

    2017-01-01

    The main task of structural reliability analysis is to estimate the failure probability of a studied structure, taking the randomness of input variables into account. To model structural behavior realistically, numerical models have become more and more complicated and time-consuming, which increases the difficulty of reliability analysis. Therefore, sequential strategies of design of experiment (DoE) have been proposed. In this research, a new learning function, named the least improvement function (LIF), is proposed to update the DoE of Kriging-based reliability analysis methods. LIF quantifies how much the accuracy of the estimated failure probability will improve if a given point is added to the DoE. It takes into account both the statistical information provided by the Kriging model and the joint probability density function of the input variables, which is the most important difference from existing learning functions. The maximum point of LIF is approximately determined with Markov chain Monte Carlo (MCMC) simulation. A new reliability analysis method is developed based on the Kriging model, in which LIF, MCMC and Monte Carlo (MC) simulation are employed. Three examples are analyzed. Results show that LIF and the new method proposed in this research are very efficient when dealing with nonlinear performance functions, small failure probabilities, complicated limit states and high-dimensional engineering problems. - Highlights: • Least improvement function (LIF) is proposed for structural reliability analysis. • LIF takes both Kriging based statistical information and joint PDF into account. • A reliability analysis method is constructed based on Kriging, MCS and LIF.
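
    As a baseline for what such adaptive Kriging methods aim to accelerate, the brute-force Monte Carlo estimate of a failure probability can be sketched as below. The limit state and input distributions are toy assumptions, not from the paper.

```python
import random

def mc_failure_probability(g, sample, n=100_000, seed=1):
    """Crude Monte Carlo estimate of the failure probability P[g(X) <= 0]
    for a performance function g and a sampler of the random inputs X."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g(sample(rng)) <= 0.0)
    return failures / n

# Toy linear limit state g(x) = 3 - x1 - x2 with independent standard
# normal inputs (assumed); exact Pf = Phi(-3/sqrt(2)), roughly 1.7e-2.
pf = mc_failure_probability(lambda x: 3.0 - x[0] - x[1],
                            lambda rng: (rng.gauss(0, 1), rng.gauss(0, 1)))
```

    Every sample here costs one evaluation of g; when g is a finite element model, that cost is what motivates replacing most evaluations with a Kriging surrogate.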

  12. Using reliability analysis to support decision making in phased mission systems

    OpenAIRE

    Zhang, Yang; Prescott, Darren

    2017-01-01

    Due to the environments in which they will operate, future autonomous systems must be capable of reconfiguring quickly and safely following faults or environmental changes. Past research has shown how, by considering autonomous systems to perform phased missions, reliability analysis can support decision making by allowing comparison of the probability of success of different missions following reconfiguration. Binary Decision Diagrams (BDDs) offer fast, accurate reliability analysis that cou...

  13. Human Reliability Analysis in Support of Risk Assessment for Positive Train Control

    Science.gov (United States)

    2003-06-01

    This report describes an approach to evaluating the reliability of human actions that are modeled in a probabilistic risk assessment : (PRA) of train control operations. This approach to human reliability analysis (HRA) has been applied in the case o...

  14. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    Institute of Scientific and Technical Information of China (English)

    桂劲松; 刘红; 康海贵

    2004-01-01

    As water depth increases, the structural safety and reliability of a system become more and more important and challenging. Therefore, structural reliability methods must be applied in ocean engineering design, such as offshore platform design. If the performance function is known in structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is generally used, because it is conceptually straightforward and simple to program. However, the traditional response surface method fits a quadratic polynomial response surface, whose accuracy is limited because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed for situations where the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a response surface of the fuzzy neural network for the whole area is constructed first, and then the structural reliability is calculated by a genetic algorithm. Since all the sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Worked examples and comparative analysis show that the proposed method is much better than the traditional response surface method with quadratic polynomials: the amount of finite element analysis is greatly reduced, the accuracy of calculation is improved, and the true limit state surface is fitted very well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
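
    For contrast with the proposed method, the traditional quadratic response surface step can be sketched as a least-squares fit to sampled limit-state evaluations. The true limit state below is a toy assumption; the paper replaces this polynomial surrogate with a fuzzy neural network trained over the whole area.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit the classic response surface a + sum(b_i x_i) + sum(c_i x_i^2)
    (no cross terms) to evaluated limit-state samples by least squares."""
    d = X.shape[1]
    A = np.hstack([np.ones((len(X), 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: coef[0] + x @ coef[1:1 + d] + (x**2) @ coef[1 + d:]

# Toy "expensive" limit state (assumed): g(x) = 2 - x1^2 - 0.5*x2
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 2.0 - X[:, 0]**2 - 0.5 * X[:, 1]
g_hat = fit_quadratic_rs(X, y)
# The cheap surrogate g_hat can now stand in for the finite element
# model during Monte Carlo sampling or design-point searches.
```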

  15. A versatile nondestructive evaluation imaging workstation

    Science.gov (United States)

    Chern, E. James; Butler, David W.

    1994-01-01

    Ultrasonic C-scan and eddy current imaging systems are pointwise evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen, point by point, in order to acquire data and generate images. Since ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two can be combined on the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansion. Advantages associated with the combined system are: (1) eliminated duplication of computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation for dispensing, machining, packaging, sorting, and other industrial applications.

  16. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software

    Directory of Open Access Journals (Sweden)

    Crandall Ian

    2009-07-01

    Abstract Background Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. Methods The model incorporates two general principles: (1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and (2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. Results The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. Conclusion The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and

  17. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software.

    Science.gov (United States)

    Suhanic, West; Crandall, Ian; Pennefather, Peter

    2009-07-17

    Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. The workstation

  18. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    Science.gov (United States)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness is validated. Several loosely defined issues in manufacturing system reliability analysis, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy, are clarified. This framework can help with reliability optimisation and rational allocation of maintenance resources in job shop manufacturing systems.

  19. Reliability analysis of production ships with emphasis on load combination and ultimate strength

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaozhi

    1995-05-01

    This thesis deals with ultimate strength and reliability analysis of offshore production ships, accounting for stochastic load combinations, using a typical North Sea production ship for reference. A review of methods for structural reliability analysis is presented. Probabilistic models are established for the still water and vertical wave bending moments. Linear stress analysis of a midships transverse frame is carried out, and four different finite element models are assessed. Upon verification of the general finite element code ABAQUS with a typical ship transverse girder example, for which test results are available, ultimate strength analysis of the reference transverse frame is made to obtain the ultimate load factors associated with the pressure loads specified in Det norske Veritas Classification rules for ships and rules for production vessels. Reliability analysis is performed to develop appropriate design criteria for the transverse structure. It is found that the transverse frame failure mode does not seem to contribute to the system collapse. Ultimate strength analysis of the longitudinally stiffened panels is performed, accounting for combined biaxial and lateral loading. Reliability-based design of the longitudinally stiffened bottom and deck panels is accomplished for the collapse mode under combined biaxial and lateral loads. 107 refs., 76 figs., 37 tabs.

  20. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the lifetime of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions which enter this problem are considered. The rare event situation is briefly mentioned, together with aspects of proof testing and normal and upset loading conditions. (orig.)

  1. Reliability Analysis of a Two Dissimilar Unit Cold Standby System ...

    African Journals Online (AJOL)

    (2009), using linear first-order differential equations, evaluated the reliability and availability characteristics of a two-dissimilar-unit cold standby system with three modes, for which no cost-benefit analysis was considered. El-said (1994) contributed a stochastic analysis of a two-dissimilar-unit standby redundant system.
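
    The kind of model referred to above can be sketched as a small birth-death Markov chain. The structure and rates below are assumptions for illustration (single repair facility, standby unit that cannot fail while idle), not the cited authors' exact three-mode model.

```python
# States of a two-dissimilar-unit cold standby system (assumed model):
#   0 - unit 1 operating, unit 2 in cold standby
#   1 - unit 1 in repair, unit 2 operating
#   2 - both units failed (system down)
lam1, lam2 = 2e-3, 3e-3  # failure rates per hour (hypothetical)
mu1, mu2 = 5e-2, 5e-2    # repair rates per hour (hypothetical)

# Detailed balance on the 0 <-> 1 <-> 2 chain gives the stationary
# probabilities up to a normalising constant:
w0 = 1.0
w1 = w0 * lam1 / mu1     # pi0 * lam1 = pi1 * mu1
w2 = w1 * lam2 / mu2     # pi1 * lam2 = pi2 * mu2
total = w0 + w1 + w2
pi = [w / total for w in (w0, w1, w2)]

steady_state_availability = pi[0] + pi[1]  # up whenever one unit operates
```

    With these rates the standby arrangement keeps the long-run availability above 99.7%, far better than either unit alone would achieve.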

  2. Application of Reliability Analysis for Optimal Design of Monolithic Vertical Wall Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Sørensen, John Dalsgaard; Christiani, E.

    1995-01-01

    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of some of the most important failure modes are described. The failures are sliding and slip surface failure of a rubble mound and a clay foundation. Relevant design...

  3. Reliability Analysis of the CERN Radiation Monitoring Electronic System CROME

    CERN Document Server

    AUTHOR|(CDS)2126870

    For the new in-house developed CERN Radiation Monitoring Electronic System (CROME), a reliability analysis is necessary to ensure compliance with the statutory requirements regarding the Safety Integrity Level. The Safety Integrity Level required by the IEC 60532 standard is SIL 2 (for the Safety Integrated Functions: Measurement, Alarm Triggering and Interlock Triggering). The first step of the reliability analysis was a system and functional analysis, which served as the basis for the implementation of the CROME system in the software "Isograph". In the "Prediction" module of Isograph, the failure rates of all components were calculated. Failure rates for passive components were calculated according to Military Standard 217, and failure rates for active components were obtained from lifetime tests by the manufacturers. The FMEA was carried out together with the board designers and implemented in the "FMECA" module of Isograph. The FMEA served as the basis for the fault tree analysis and the detection of weak points...

  4. Condition-based fault tree analysis (CBFTA): A new method for improved fault tree analysis (FTA), reliability and safety calculations

    International Nuclear Information System (INIS)

    Shalev, Dan M.; Tiran, Joseph

    2007-01-01

    Condition-based maintenance methods have changed systems reliability in general and that of individual systems in particular. Yet this change has not affected system reliability analysis. System fault tree analysis (FTA) is performed during the design phase, using component failure rates derived from available sources such as handbooks. Condition-based fault tree analysis (CBFTA) starts with the known FTA. Condition monitoring (CM) methods applied to systems (e.g. vibration analysis, oil analysis, electric current analysis, bearing CM, electric motor CM, and so forth) are used to determine updated failure rate values of sensitive components. The CBFTA method accepts updated failure rates and applies them to the FTA. CBFTA periodically recalculates the top event (TE) failure rate (λ_TE), thus determining the probability of system failure and the probability of successful system operation, i.e. the system's reliability. FTA is a tool for enhancing system reliability during the design stages, but it has disadvantages, chiefly that it does not relate to a specific system undergoing maintenance. CBFTA is a tool for updating the reliability values of a specific system and for calculating the residual life according to the system's monitored conditions. Using CBFTA, the original FTA becomes a practical tool for use during the system's field life phase, not just during the design phase. This paper describes the CBFTA method and its advantages are demonstrated by an example
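
    The periodic recalculation at the heart of CBFTA can be sketched over minimal cut sets with the usual rare-event approximation. The cut sets, unavailability figures, and the condition-monitoring update below are hypothetical.

```python
from math import prod

def top_event_prob(cut_sets, q):
    """Rare-event approximation of the top-event probability: the sum,
    over minimal cut sets, of the product of member unavailabilities."""
    return sum(prod(q[c] for c in cs) for cs in cut_sets)

# Hypothetical tree: TE occurs if the pump fails OR both valves fail.
cut_sets = [("pump",), ("valveA", "valveB")]
q = {"pump": 1e-3, "valveA": 1e-2, "valveB": 1e-2}
p_before = top_event_prob(cut_sets, q)  # 1e-3 + 1e-4 = 1.1e-3

# Vibration analysis reveals pump degradation; condition monitoring
# raises its unavailability, and the top event is simply re-evaluated:
q["pump"] = 5e-3
p_after = top_event_prob(cut_sets, q)   # 5e-3 + 1e-4 = 5.1e-3
```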

  5. Problems Related to Use of Some Terms in System Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Nadezda Hanusova

    2004-01-01

    The paper deals with problems of using dependability terms, defined in the current standard STN IEC 50 (191): International electrotechnical dictionary, chapter 191: Dependability and quality of service (1993), in technical systems dependability analysis. The goal of the paper is to relate the terms introduced in the mentioned standard and used in technical systems dependability analysis to the rules and practices of system analysis in systems theory. The description of the part of the system life cycle related to reliability is used as a starting point. This part of the system life cycle is described by a state diagram, and the reliability-relevant terms are assigned to it.

  6. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimates of component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  7. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving safety or reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed over the last 20 years and have to be continuously refined to meet growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for calculating the reliability of structures or components; a new computer program has been developed, based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of these programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied, and procedures for the implementation of importance sampling are suggested. (author)
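
    The importance sampling idea mentioned above can be illustrated on a scalar rare event; the proposal distribution and the numbers below are illustrative, not taken from the report.

```python
import math
import random

def is_tail_prob(t, n=50_000, seed=2):
    """Importance-sampling estimate of P[X > t] for X ~ N(0, 1), drawing
    from the shifted proposal N(t, 1) and reweighting each sample by the
    density ratio phi(x) / phi(x - t) = exp(t*t/2 - t*x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)
        if x > t:
            total += math.exp(t * t / 2.0 - t * x)
    return total / n

# P[X > 4] is about 3.2e-5: crude Monte Carlo would need millions of
# samples to see any hits, while the shifted proposal lands half of its
# samples in the tail and the weights correct the resulting bias.
estimate = is_tail_prob(4.0)
```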

  8. Subset simulation for structural reliability sensitivity analysis

    International Nuclear Information System (INIS)

    Song Shufang; Lu Zhenzhou; Qiao Hongwei

    2009-01-01

    Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. Building on the reliability analysis of subset simulation (Subsim), the RS of the failure probability with respect to a distribution parameter of a basic variable is transformed into a set of RSs of conditional failure probabilities with respect to that parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae for the RS estimator, its variance and its coefficient of variation are derived in detail. The illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes
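
    The underlying subset simulation estimator (without the sensitivity extension developed in the paper) can be sketched as follows; the limit state, sample sizes, and random-walk proposal are illustrative assumptions.

```python
import math
import random

def subset_simulation(g, dim, n=1000, p0=0.1, seed=3):
    """Subset simulation estimate of P[g(X) <= 0] for X ~ N(0, I): the
    rare event is reached through nested levels {g <= b}, each of
    conditional probability ~p0, sampled with Metropolis random walks."""
    rng = random.Random(seed)
    xs = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    for _ in range(20):                    # safety cap on levels
        xs.sort(key=g)
        nf = int(n * p0)
        b = g(xs[nf - 1])                  # intermediate threshold: p0-quantile
        if b <= 0.0:                       # failure domain reached
            return prob * sum(g(x) <= 0.0 for x in xs) / n
        prob *= p0
        seeds, xs = xs[:nf], []
        for s in seeds:                    # grow chains conditional on g <= b
            x = s
            for _ in range(int(1 / p0)):
                cand = [xi + rng.gauss(0, 1) for xi in x]
                # Accept with the N(0, I) density ratio, staying inside g <= b
                ratio = math.exp(0.5 * (sum(v * v for v in x)
                                        - sum(v * v for v in cand)))
                if rng.random() < ratio and g(cand) <= b:
                    x = cand
                xs.append(x)
    return prob

# Toy limit state (assumed): failure when x1 >= 3; exact Pf ~ 1.35e-3.
pf = subset_simulation(lambda x: 3.0 - x[0], dim=2)
```

    The conditional samples produced at each level are exactly the ones the RS algorithms above reuse to estimate parameter sensitivities.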

  9. Active workstation allows office workers to work efficiently while sitting and exercising moderately.

    Science.gov (United States)

    Koren, Katja; Pišot, Rado; Šimunič, Boštjan

    2016-05-01

    To determine the effects of a moderate-intensity active workstation on time and error during simulated office work, we analysed simultaneous work and exercise for non-sedentary office workers. We monitored oxygen uptake, heart rate, sweat stain area, self-perceived effort, typing test time with typing error count, and cognitive performance during 30 min of exercise with no cycling or cycling at 40 and 80 W. Compared with baseline, we found increased physiological responses at 40 and 80 W, which correspond to moderate physical activity (PA). Typing time significantly increased by 7.3% (p = 0.002) in the 40 W condition (C40W) and by 8.9% (p = 0.011) in the 80 W condition (C80W). Typing error count and cognitive performance were unchanged. Although moderate-intensity exercise performed on a cycling workstation during simulated office tasks increases task execution time, the effect size is moderate and the error rate does not increase. Participants confirmed that such a working design is suitable for achieving the minimum standards for daily PA during work hours. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  10. Fifty Years of THERP and Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    In 1962, at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of NUREG/CR-1278 in 1983. THERP is now 50 years old and remains the best known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, its longevity is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain's pioneering work.

  11. Reliability of Computerized Neurocognitive Tests for Concussion Assessment: A Meta-Analysis.

    Science.gov (United States)

    Farnsworth, James L; Dargo, Lucas; Ragan, Brian G; Kang, Minsoo

    2017-09-01

    Although widely used, computerized neurocognitive tests (CNTs) have been criticized because of low reliability and poor sensitivity. A systematic review was published summarizing the reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) scores; however, it was limited to a single CNT. An expansion of the previous review to include additional CNTs, together with a meta-analysis, is needed. Therefore, our purpose was to analyze reliability data for CNTs using meta-analysis and examine moderating factors that may influence reliability. A systematic literature search (key terms: reliability, computerized neurocognitive test, concussion) of electronic databases (MEDLINE, PubMed, Google Scholar, and SPORTDiscus) was conducted to identify relevant studies. Studies were included if they met all of the following criteria: used a test-retest design, involved at least 1 CNT, provided sufficient statistical data to allow for effect-size calculation, and were published in English. Two independent reviewers investigated each article to assess inclusion criteria. Eighteen studies involving 2674 participants were retained. Intraclass correlation coefficients were extracted to calculate effect sizes and determine overall reliability. The Fisher Z transformation adjusted for sampling error associated with averaging correlations. Moderator analyses were conducted to evaluate the effects of the length of the test-retest interval, intraclass correlation coefficient model selection, participant demographics, and study design on reliability. Heterogeneity was evaluated using the Cochran Q statistic. The proportion of acceptable outcomes was greatest for the Axon Sports CogState Test (75%) and lowest for the ImPACT (25%). Moderator analyses indicated that the type of intraclass correlation coefficient model used significantly influenced effect-size estimates, accounting for 17% of the variation in reliability. The Axon Sports CogState Test, which

  12. Inclusion of fatigue effects in human reliability analysis

    International Nuclear Information System (INIS)

    Griffith, Candice D.; Mahadevan, Sankaran

    2011-01-01

    The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). The objectives of this paper are thus to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of the inclusion of fatigue in HRA methods, and (4) future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. - Highlights: • We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods; current methods do not explicitly include sleep deprivation effects. • We discuss the difficulties in defining and measuring fatigue. • We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.

  13. Reliability analysis of the reactor protection system with fault diagnosis

    International Nuclear Information System (INIS)

    Lee, D.Y.; Han, J.B.; Lyou, J.

    2004-01-01

    The main function of a reactor protection system (RPS) is to maintain the reactor core integrity and the reactor coolant system pressure boundary. The RPS uses a 2-out-of-m redundant architecture to assure reliable operation. The system reliability of the RPS is a very important factor in the probabilistic safety assessment (PSA) evaluation in the nuclear field. Evaluating the system failure rate of a k-out-of-m redundant system with deterministic methods is not easy. In this paper, a reliability analysis method using the binomial process is suggested for calculating the failure rate of an RPS with a fault diagnosis function. The suggested method is compared with the result of a Markov process to verify its validity, and applied to several kinds of RPS architectures for a comparative evaluation of reliability. (orig.)

  14. Reliability Analysis for Adhesive Bonded Composite Stepped Lap Joints Loaded in Fatigue

    DEFF Research Database (Denmark)

    Kimiaeifar, Amin; Sørensen, John Dalsgaard; Lund, Erik

    2012-01-01

    This paper describes a probabilistic approach to calculate the reliability of adhesive bonded composite stepped lap joints loaded in fatigue using three-dimensional finite element analysis (FEA). A method for progressive damage modelling is used to assess fatigue damage accumulation and residual... -1, where partial safety factors are introduced together with characteristic values. Asymptotic sampling is used to estimate the reliability with support points generated by randomized Sobol sequences. The predicted reliability level is compared with the implicitly required target reliability level defined... by the wind turbine standard IEC 61400-1. Finally, an approach for the assessment of the reliability of adhesive bonded composite stepped lap joints loaded in fatigue is presented. The introduced methodology can be applied in the same way to calculate the reliability level of wind turbine blade components...

  15. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    Science.gov (United States)

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes

    2017-10-01

    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.

  16. Event analysis using a massively parallel processor

    International Nuclear Information System (INIS)

    Bale, A.; Gerelle, E.; Messersmith, J.; Warren, R.; Hoek, J.

    1990-01-01

    This paper describes a system for performing histogramming of n-tuple data at interactive rates using a commercial SIMD processor array connected to a workstation running the well-known Physics Analysis Workstation software (PAW). Results indicate that an order of magnitude performance improvement over current RISC technology is easily achievable.

  17. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    2018-03-05

    Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords-compressor system, reliability, reliability block diagram, RBD .... the same structure has been kept with the three subsystems: air flow, oil flow and .... and Safety in Engineering Design", Springer, 2009. [3] P. O'Connor ...

  18. Qualitative analysis in reliability and safety studies

    International Nuclear Information System (INIS)

    Worrell, R.B.; Burdick, G.R.

    1976-01-01

    The qualitative evaluation of system logic models is described as it pertains to assessing the reliability and safety characteristics of nuclear systems. Qualitative analysis of system logic models, i.e., models couched in an event (Boolean) algebra, is defined, and the advantages inherent in qualitative analysis are explained. Certain qualitative procedures that were developed as a part of fault-tree analysis are presented for illustration. Five fault-tree analysis computer programs that contain a qualitative procedure for determining minimal cut sets are surveyed. For each program the minimal cut-set algorithm and limitations on its use are described. The recently developed common-cause analysis for studying the effect of common causes of failure on system behavior is explained. This qualitative procedure does not require altering the fault tree, but does use minimal cut sets from the fault tree as part of its input. The method is applied using two different computer programs. 25 refs
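The minimal cut-set determination that the surveyed programs implement can be sketched with a toy top-down expansion over a gate table, with an absorption step to keep only minimal sets; the fault tree and event names here are hypothetical, not taken from any of the five programs:

```python
from itertools import product

# A tiny fault tree: gates map to ('AND'|'OR', [children]); leaves are
# basic events. This example tree is invented for illustration.
tree = {
    'TOP': ('OR', ['G1', 'E3']),
    'G1':  ('AND', ['E1', 'G2']),
    'G2':  ('OR', ['E2', 'E3']),
}

def cut_sets(node):
    """Return the minimal cut sets of `node` as a list of frozensets."""
    if node not in tree:                      # basic event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if gate == 'OR':                          # union of the children's cut sets
        sets = [s for cs in child_sets for s in cs]
    else:                                     # AND: cross-product of cut sets
        sets = [frozenset().union(*combo) for combo in product(*child_sets)]
    # absorption: drop any cut set that strictly contains another one
    return [s for s in sets if not any(t < s for t in sets)]

print(sorted(sorted(cs) for cs in cut_sets('TOP')))
```

Here {E1, E3} is absorbed by the single-event cut set {E3}, leaving {E3} and {E1, E2} as the minimal cut sets.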

  19. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need not only to capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution to modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis.
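A minimal flavour of the discrete-time idea, stripped of the BN machinery: each component node takes the discrete interval in which it fails (or 'ok' if it survives the mission), and the system node combines them logically. The interval count, the per-interval failure probability, and the AND-gate (parallel) system are all illustrative assumptions:

```python
from itertools import product

# Time discretized into 3 intervals; a component node's value is the
# interval in which it fails, or 'ok' if it survives all of them.
intervals = [1, 2, 3, 'ok']
p_fail = 0.1                    # assumed per-interval failure probability

def p_node(val):
    """Marginal probability of one component node taking value `val`."""
    if val == 'ok':
        return (1 - p_fail) ** 3
    return (1 - p_fail) ** (val - 1) * p_fail   # geometric failure time

# P(system failed by the end of interval 3) for a two-component
# parallel system (system node is an AND of the component failures).
p_sys = sum(
    p_node(a) * p_node(b)
    for a, b in product(intervals, repeat=2)
    if a != 'ok' and b != 'ok'
)
print(round(p_sys, 6))
```

With independent components this reduces to the product of the per-component failure probabilities; the BN formalism earns its keep when the conditional probability tables encode dependencies between components.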

  20. On Bayesian System Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen Ringi, M

    1995-05-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model captures the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs.

  1. On Bayesian System Reliability Analysis

    International Nuclear Information System (INIS)

    Soerensen Ringi, M.

    1995-01-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model captures the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs
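The Bayesian updating of a state of knowledge described in the thesis can be illustrated, in its simplest conjugate form, by a Gamma-Poisson update of a constant failure rate; the prior and the observed data below are invented for illustration, not taken from the thesis:

```python
# Conjugate Gamma-Poisson update for a constant failure rate lambda.
# The Gamma(a, b) prior encodes the analyst's state of knowledge;
# observing n failures in T component-hours yields Gamma(a + n, b + T).
a, b = 2.0, 1000.0          # assumed prior: mean rate a/b = 2e-3 per hour
n, T = 3, 5000.0            # assumed evidence: 3 failures in 5000 hours

a_post, b_post = a + n, b + T
print(a_post / b_post)      # posterior mean failure rate
```

The posterior mean (5/6000 per hour here) sits between the prior mean and the raw observed rate n/T, which is exactly the "reliability changes as the state of knowledge changes" behaviour the abstract describes.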

  2. Increasing physical activity in office workers – the Inphact Treadmill study; a study protocol for a 13-month randomized controlled trial of treadmill workstations

    OpenAIRE

    Bergman, Frida; Boraxbekk, Carl-Johan; Wennberg, Patrik; Sörlin, Ann; Olsson, Tommy

    2015-01-01

    Background: Sedentary behaviour is an independent risk factor for mortality and morbidity, especially for type 2 diabetes. Since office work is related to long periods that are largely sedentary, it is of major importance to find ways for office workers to engage in light intensity physical activity (LPA). The Inphact Treadmill study aims to investigate the effects of installing treadmill workstations in offices compared to conventional workstations. Methods/Design: A two-arm, 13-month, randomi...

  3. Simulation Approach to Mission Risk and Reliability Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  4. Mechanical system reliability analysis using a combination of graph theory and Boolean function

    International Nuclear Information System (INIS)

    Tang, J.

    2001-01-01

    A new method based on graph theory and Boolean functions for assessing the reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. Using graph theory, the formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. Using the Boolean function to examine the failure interactions of two particular elements of the system, followed by demonstrations of how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and the Boolean function provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis
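A brute-force version of the graph-theoretic part (evaluating the Boolean structure function of a small bridge network over all component states) might look like this; the network topology and component reliabilities are illustrative assumptions, not taken from the paper:

```python
from itertools import product
from math import prod

# Bridge network between source 's' and sink 't'; each edge is a
# component. Topology and reliabilities are invented for illustration.
edges = {'a': ('s', '1'), 'b': ('s', '2'), 'c': ('1', '2'),
         'd': ('1', 't'), 'e': ('2', 't')}
rel = {'a': 0.9, 'b': 0.9, 'c': 0.8, 'd': 0.9, 'e': 0.9}

def connected(up):
    """True if 't' is reachable from 's' over the working edges (DFS)."""
    adj = {}
    for name in up:
        u, v = edges[name]
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    stack, seen = ['s'], {'s'}
    while stack:
        node = stack.pop()
        if node == 't':
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Total probability over all 2^5 component state vectors.
names = list(edges)
R = 0.0
for states in product([True, False], repeat=len(names)):
    if connected({n for n, s in zip(names, states) if s}):
        R += prod(rel[n] if s else 1 - rel[n] for n, s in zip(names, states))

print(round(R, 5))
```

Full enumeration is exponential in the number of components; the point of the graph/Boolean algorithm in the paper is precisely to avoid this blow-up for large systems.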

  5. Stochastic reliability analysis using Fokker Planck equations

    International Nuclear Information System (INIS)

    Hari Prasad, M.; Rami Reddy, G.; Srividya, A.; Verma, A.K.

    2011-01-01

    The Fokker-Planck equation describes the time evolution of the probability density function of the velocity of a particle, and can be generalized to other observables as well. It is also known as the Kolmogorov forward equation (diffusion). Hence, for any process which evolves with time, the probability density function as a function of time can be represented with a Fokker-Planck equation. In stochastic reliability analysis one is more interested in finding the reliability or failure probability of components or structures as a function of time than in instantaneous failure probabilities. In this analysis the variables are represented with random processes instead of random variables. A random process can be either stationary or non-stationary. If the random process is stationary, the failure probability does not change with time, whereas for non-stationary processes the failure probability changes with time. In the present paper Fokker-Planck equations have been used to find the probability density function of non-stationary random processes. A flow chart is provided which describes the step-by-step process for carrying out stochastic reliability analysis using Fokker-Planck equations. As a first step one has to identify the failure function as a function of random processes. Then one has to solve the Fokker-Planck equation for each random process. In this paper the Fokker-Planck equation has been solved by using the finite difference method. As a result one gets the probability density values of the random process in the sample space as well as the time space. Then, at each time step, an appropriate probability distribution has to be identified based on the available probability density values. To check the goodness of fit of the data, the Kolmogorov-Smirnov test has been performed. In this way one can find the distribution of the random process at each time step. Once one has the probability distribution
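The finite-difference solution step described above can be sketched as follows, for an assumed Ornstein-Uhlenbeck drift; the grid sizes, coefficients, and initial condition are illustrative choices satisfying the explicit-scheme stability condition dt <= dx^2/(2D), not values from the paper:

```python
# Explicit finite-difference solution of the Fokker-Planck equation
#   dp/dt = -d(mu(x) p)/dx + D d2p/dx2
# for an assumed Ornstein-Uhlenbeck drift mu(x) = -x and diffusion D = 0.5.

nx, dx, dt, steps = 101, 0.1, 0.004, 500
D = 0.5
xs = [(i - nx // 2) * dx for i in range(nx)]   # grid on [-5, 5]

# initial condition: probability mass concentrated near x = 1.0
p = [0.0] * nx
p[xs.index(min(xs, key=lambda x: abs(x - 1.0)))] = 1.0 / dx

def step(p):
    """One explicit time step with central differences; boundaries held at 0."""
    q = [0.0] * nx
    for i in range(1, nx - 1):
        drift = (-xs[i+1] * p[i+1] + xs[i-1] * p[i-1]) / (2 * dx)
        diff = (p[i+1] - 2 * p[i] + p[i-1]) / dx**2
        q[i] = p[i] + dt * (-drift + D * diff)
    return q

for _ in range(steps):
    p = step(p)

mass = sum(p) * dx                               # should stay close to 1
mean = sum(x * pi for x, pi in zip(xs, p)) * dx  # decays roughly as exp(-t)
print(round(mass, 3), round(mean, 3))
```

The density values `p` at each time step are exactly the "probability density values in the sample space as well as the time space" to which a distribution would then be fitted and checked with the Kolmogorov-Smirnov test.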

  6. High-Reliable PLC RTOS Development and RPS Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H. [Enersys Co., Daejeon (Korea, Republic of)

    2008-04-15

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q and this work supports the development of POSAFE-Q with the development of a high-reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of the safety critical software is an essential work to make the digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V, the reliability and safety of a system can be improved and the development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  7. High-Reliable PLC RTOS Development and RPS Structure Analysis

    International Nuclear Information System (INIS)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H.

    2008-04-01

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q and this work supports the development of POSAFE-Q with the development of a high-reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of the safety critical software is an essential work to make the digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V, the reliability and safety of a system can be improved and the development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  8. PSA applications and piping reliability analysis: where do we stand?

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    1997-01-01

    This paper reviews a recently proposed framework for piping reliability analysis. The framework was developed to promote critical interpretations of operational data on pipe failures, and to support application-specific parameter estimation

  9. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-01-01

    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failures, in order to increase the design life and to eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle, low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criteria for the bending state would be higher compared to the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated with the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model has the ability to generate data close to the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.
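The two-state idea (plus an absorbing failure state) can be sketched by propagating a state probability vector through a transition matrix; the transition probabilities below are invented for illustration, not fitted to crankshaft data:

```python
# A minimal discrete-time Markov sketch: two operating states
# (bending-dominated, torsion-dominated) and an absorbing failure
# state. All transition probabilities are assumed values.
P = {
    'bending': {'bending': 0.90, 'torsion': 0.06, 'failed': 0.04},
    'torsion': {'bending': 0.08, 'torsion': 0.90, 'failed': 0.02},
    'failed':  {'failed': 1.0},
}

def reliability(n_cycles, start='bending'):
    """P(not yet failed after n_cycles), by propagating the state vector."""
    dist = {s: 0.0 for s in P}
    dist[start] = 1.0
    for _ in range(n_cycles):
        nxt = {s: 0.0 for s in P}
        for s, ps in dist.items():
            for t, pr in P[s].items():
                nxt[t] += ps * pr
        dist = nxt
    return 1.0 - dist['failed']

print(round(reliability(10), 4))
```

The bending state is given the larger failure probability, reflecting the abstract's observation that bending stress is the more severe mode; the resulting R(n) curve is what would then be compared against a fitted Weibull model.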

  10. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Ionescu-Bujor, M.

    2008-01-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)

  11. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, D-76021 Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)

  12. Risk and reliability analysis theory and applications : in honor of Prof. Armen Der Kiureghian

    CERN Document Server

    2017-01-01

    This book presents a unique collection of contributions from some of the foremost scholars in the field of risk and reliability analysis. Combining the most advanced analysis techniques with practical applications, it is one of the most comprehensive and up-to-date books available on risk-based engineering. All the fundamental concepts needed to conduct risk and reliability assessments are covered in detail, providing readers with a sound understanding of the field and making the book a powerful tool for students and researchers alike. This book was prepared in honor of Professor Armen Der Kiureghian, one of the fathers of modern risk and reliability analysis.

  13. Summary of the preparation of methodology for digital system reliability analysis for PSA purposes

    International Nuclear Information System (INIS)

    Hustak, S.; Babic, P.

    2001-12-01

    The report is structured as follows: Specific features of and requirements for the digital part of NPP Instrumentation and Control (I and C) systems (Computer-controlled digital technologies and systems of the NPP I and C system; Specific types of digital technology failures and preventive provisions; Reliability requirements for the digital parts of I and C systems; Safety requirements for the digital parts of I and C systems; Defence-in-depth). Qualitative analyses of NPP I and C system reliability and safety (Introductory system analysis; Qualitative requirements for and proof of NPP I and C system reliability and safety). Quantitative reliability analyses of the digital parts of I and C systems (Selection of a suitable quantitative measure of digital system reliability; Selected qualitative and quantitative findings regarding digital system reliability; Use of relations among the occurrences of the various types of failure). Mathematical section in support of the calculation of the various types of indices (Boolean reliability models, Markovian reliability models). Example of digital system analysis (Description of a selected protective function and the relevant digital part of the I and C system; Functional chain examined, its components and fault tree). (P.A.)

  14. Data collection on the unit control room simulator as a method of operator reliability analysis

    International Nuclear Information System (INIS)

    Holy, J.

    1998-01-01

    The report consists of the following chapters: (1) Probabilistic assessment of nuclear power plant operation safety and human factor reliability analysis; (2) Simulators and simulations as human reliability analysis tools; (3) DOE project for using the collection and analysis of data from the unit control room simulator in human factor reliability analysis at the Paks nuclear power plant; (4) General requirements for the organization of the simulator data collection project; (5) Full-scale simulator at the Nuclear Power Plants Research Institute in Trnava, Slovakia, used as a training means for operators of the Dukovany NPP; (6) Assessment of the feasibility of quantification of important human actions modelled within a PSA study by employing simulator data analysis; (7) Assessment of the feasibility of using the various exercise topics for the quantification of the PSA model; (8) Assessment of the feasibility of employing the simulator in the analysis of the individual factors affecting the operator's activity; and (9) Examples of application of statistical methods in the analysis of the human reliability factor. (P.A.)

  15. The design and use of reliability data base with analysis tool

    Energy Technology Data Exchange (ETDEWEB)

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.

    1996-06-01

    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user a greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possible dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for grazing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs.

  16. The design and use of reliability data base with analysis tool

    International Nuclear Information System (INIS)

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.

    1996-06-01

    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user a greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possible dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for grazing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs
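One of the stationarity questions raised above is classically answered with the Laplace trend test for a failure point process; the sketch below uses made-up failure times (not data from the report):

```python
from math import sqrt

# Laplace trend test for stationarity of a failure point process
# observed on (0, T]. Under the null hypothesis of a homogeneous
# Poisson process, U is approximately standard normal; U > 0 suggests
# failures concentrated late (deterioration), U < 0 improvement.
times = [50, 180, 260, 340, 410, 470]   # assumed cumulative failure times
T = 500.0
n = len(times)

U = (sum(times) / n - T / 2) / (T * sqrt(1 / (12 * n)))
print(round(U, 3))
```

Here U is well inside (-1.96, 1.96), so this toy sample gives no evidence against stationarity at the 5% level; a test like this would be run before pooling the data into a single statistical population.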

  17. Reliability analysis of component-level redundant topologies for solid-state fault current limiter

    Science.gov (United States)

    Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam

    2018-04-01

    Experience shows that semiconductor switches in power electronics systems are the most vulnerable components. One of the most common ways to address this reliability challenge is component-level redundant design. There are four possible configurations for redundant design at the component level. This article presents a comparative reliability analysis of different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the junction temperature of the semiconductor switches in the steady state. The junction temperature is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The results' sensitivity to each parameter is also investigated. The results show that in different conditions, various configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, levelised costs of the different configurations are analysed for a fair comparison.
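The basic MTTF comparison between series and parallel component-level arrangements can be sketched for the exponential (constant failure rate) case; the failure rate below is a placeholder, and the junction-temperature dependence that drives the article's conclusions is not modelled here:

```python
# MTTF of two-switch arrangements with a constant (exponential)
# failure rate. In a real fault-current-limiter analysis, series
# redundancy tolerates one short-circuit failure and parallel
# redundancy one open-circuit failure; here we only show the generic
# series-system vs parallel-system MTTF formulas.
lam = 2e-6   # assumed per-hour failure rate of one switch

mttf_single   = 1 / lam
mttf_series   = 1 / (2 * lam)                  # fails when either switch fails
mttf_parallel = 2 / lam - 1 / (2 * lam)        # fails only when both have failed

print(mttf_single, mttf_series, mttf_parallel)
```

With identical rates, the parallel pair's MTTF is 1.5 times that of a single switch, while the series pair's is half; which arrangement actually wins in the article depends on the mix of open- and short-circuit failure modes at the operating junction temperature.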

  18. Application of reliability analysis methods to the comparison of two safety circuits

    International Nuclear Information System (INIS)

    Signoret, J.-P.

    1975-01-01

    Two circuits of different design, intended to perform the ''Low Pressure Safety Injection'' function in PWR reactors, are analyzed using reliability methods. The reliability analysis of these circuits allows the fault trees to be established and the failure probability derived. The dependence of these results on testing and maintenance is emphasized, as are critical paths. The large number of results obtained may allow a well-informed choice, taking into account the reliability required for this type of circuit [fr

  19. Signal Quality Outage Analysis for Ultra-Reliable Communications in Cellular Networks

    DEFF Research Database (Denmark)

    Gerardino, Guillermo Andrés Pocovi; Alvarez, Beatriz Soret; Lauridsen, Mads

    2015-01-01

    Ultra-reliable communications over wireless will open the possibility for a wide range of novel use cases and applications. In cellular networks, achieving reliable communication is challenging due to many factors, particularly the fading of the desired signal and the interference. In this regard......, we investigate the potential of several techniques to combat these main threats. The analysis shows that traditional microscopic multiple-input multiple-output schemes with 2x2 or 4x4 antenna configurations are not enough to fulfil stringent reliability requirements. It is revealed how such antenna...... schemes must be complemented with macroscopic diversity as well as interference management techniques in order to ensure the necessary SINR outage performance. Based on the obtained performance results, it is discussed which of the feasible options fulfilling the ultra-reliable criteria are most promising...

  20. An application of the fault tree analysis for the power system reliability estimation

    International Nuclear Information System (INIS)

    Volkanovski, A.; Cepin, M.; Mavko, B.

    2007-01-01

    The power system is a complex system whose main function is to produce, transfer and deliver electrical energy to consumers. Combinations of component failures in the system can result in a failure of power delivery to certain load points and, in some cases, in a full blackout of the power system. Power system reliability directly affects the safe and reliable operation of nuclear power plants, because the loss of offsite power is a significant contributor to the core damage frequency in probabilistic safety assessments of nuclear power plants. A method based on the integration of fault tree analysis with the analysis of the power flows in the power system was developed and implemented for power system reliability assessment. The main contributors to power system reliability are identified, both quantitatively and qualitatively. (author)
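The quantification step of such a fault-tree integration can be illustrated with the standard minimal-cut-set upper bound for a top event such as "no power delivered to a load point"; the basic events, their probabilities, and the cut sets below are invented for illustration:

```python
from math import prod

# Minimal-cut-set quantification of a top event. Each cut set is a set
# of basic events whose joint occurrence fails power delivery to the
# load point; events and numbers are hypothetical.
p = {'line1': 1e-3, 'line2': 1e-3, 'gen': 5e-4, 'bus': 1e-5}
cut_sets = [{'line1', 'line2'}, {'gen'}, {'bus'}]

# Min-cut upper bound: 1 - product over cut sets of P(cut set absent),
# treating the cut sets as independent.
q = 1.0
for cs in cut_sets:
    q *= 1 - prod(p[e] for e in cs)
top = 1 - q
print(f"{top:.3e}")
```

The single-event cut sets (the generator and the bus here) dominate the result, which is the kind of qualitative ranking of contributors the abstract refers to; in the full method the cut sets themselves come from the power-flow-informed fault tree.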