WorldWideScience

Sample records for failure detection architecture

  1. Practical, redundant, failure-tolerant, self-reconfiguring embedded system architecture

    Science.gov (United States)

    Klarer, Paul R.; Hayward, David R.; Amai, Wendy A.

    2006-10-03

    This invention relates to system architectures, specifically failure-tolerant and self-reconfiguring embedded system architectures. The invention provides both a method and architecture for redundancy. There can be redundancy in both software and hardware for multiple levels of redundancy. The invention provides a self-reconfiguring architecture for activating redundant modules whenever other modules fail. The architecture comprises: a communication backbone connected to two or more processors and software modules running on each of the processors. Each software module runs on one processor and resides on one or more of the other processors to be available as a backup module in the event of failure. Each module and backup module reports its status over the communication backbone. If a primary module does not report, its backup module takes over its function. If the primary module becomes available again, the backup module returns to its backup status.
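
    The report/takeover cycle described above can be sketched as a small heartbeat monitor. The class, method names, and timeout value below are illustrative assumptions for the sketch, not details taken from the patent:

```python
import time

class ModuleMonitor:
    """Tracks status reports from primary modules on the communication
    backbone and decides when a backup module should take over.
    Names and the timeout value are illustrative, not from the patent."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout        # seconds without a report => failed
        self.last_report = {}         # module name -> last heartbeat time
        self.active_backups = set()   # modules whose backup has taken over

    def report(self, module, now=None):
        """A primary module reports its status over the backbone."""
        now = time.monotonic() if now is None else now
        self.last_report[module] = now
        # If the primary becomes available again, its backup returns
        # to backup status.
        self.active_backups.discard(module)

    def check(self, now=None):
        """Activate the backup for any primary that missed its deadline."""
        now = time.monotonic() if now is None else now
        for module, t in self.last_report.items():
            if now - t > self.timeout:
                self.active_backups.add(module)
        return set(self.active_backups)
```

    In use, `report` would be called whenever a status message arrives on the backbone and `check` would run periodically on each processor holding backup copies.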

  2. Enron Flaws In Organizational Architecture And Its Failure

    Directory of Open Access Journals (Sweden)

    Nguyen

    2015-08-01

    A series of corporate scandals at the beginning of the last decade gave rise to doubt about the efficiency of corporate governance practice in the United States. Of these scandals, the collapse of Enron has exceptionally captured public concern. Enron was once the seventh-largest company in the United States [1]. It was rated the most innovative large company in America in Fortune's Most Admired Companies survey [2]. In August 2000 its stock reached a peak, valuing the company at nearly $70 billion [3]. Within a year, however, its stock had become almost worthless paper [2]. For many people this was simply unbelievable. What went wrong? Was it due to the failure of corporate governance in general? In fact, the central factor leading to the collapse of Enron was the failure of its organizational architecture. This paper starts by providing an overview of the corporate governance system, with an emphasis on corporate organizational architecture as an important facet of it. It then discusses flaws in the organizational architecture of Enron and argues that these eventually led to the breakdown of the whole corporate governance system at Enron. Finally, some implications and lessons for the practice of corporate governance are presented.

  3. Enhanced bending failure strain in biological glass fibers due to internal lamellar architecture.

    Science.gov (United States)

    Monn, Michael A; Kesari, Haneesh

    2017-12-01

    The remarkable mechanical properties of biological structures, like tooth and bone, are often a consequence of their architecture. The tree ring-like layers that comprise the skeletal elements of the marine sponge Euplectella aspergillum are a quintessential example of the intricate architectures prevalent in biological structures. These skeletal elements, known as spicules, are hair-like fibers that consist of a concentric array of silica cylinders separated by thin, organic layers. Thousands of spicules act like roots to anchor the sponge to the sea floor. While spicules have been the subject of several structure-property investigations, those studies have mostly focused on the relationship between the spicule's layered architecture and toughness properties. In contrast, we hypothesize that the spicule's layered architecture enhances its bending failure strain, thereby allowing it to provide a better anchorage to the sea floor. We test our hypothesis by performing three-point bending tests on E. aspergillum spicules, measuring their bending failure strains, and comparing them to those of spicules from a related sponge, Tethya aurantia. The T. aurantia spicules have a similar chemical composition to E. aspergillum spicules but have no architecture. Thus, any difference between the bending failure strains of the two types of spicules can be attributed to the E. aspergillum spicules' layered architecture. We found that the bending failure strains of the E. aspergillum spicules were roughly 2.4 times larger than those of the T. aurantia spicules. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Sonographically Detected Architectural Distortion: Clinical Significance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Shin Kee; Seo, Bo Kyoung; Yi, Ann; Cha, Sang Hoon; Kim, Baek Hyun; Cho, Kyu Ran; Kim, Young Sik; Son, Gil Soo; Kim, Young Soo; Kim, Hee Young [Korea University Ansan Hospital, Ansan (Korea, Republic of)

    2008-12-15

    Architectural distortion is a suspicious abnormality for the diagnosis of breast cancer. The aim of this study was to investigate the clinical significance of sonographically detected architectural distortion. From January 2006 to June 2008, 20 patients were identified who had sonographically detected architectural distortions without a history of trauma or surgery and without abnormal mammographic findings related to an architectural distortion. All of the lesions were pathologically verified. We evaluated the clinical and pathological findings and then assessed the clinical significance of the sonographically detected architectural distortions. Based on the clinical findings, one (5%) of the 20 patients had a palpable lump and the remaining 19 patients had no symptoms. No patient had a family history of breast cancer. Based on the pathological findings, three (15%) patients had malignancies. The malignant lesions included invasive ductal carcinomas (n = 2) and ductal carcinoma in situ (n = 1). Four (20%) patients had high-risk lesions: atypical ductal hyperplasia (n = 3) and lobular carcinoma in situ (n = 1). The remaining 13 (65%) patients had benign lesions; however, seven of these 13 (35% of the total) had mild-risk lesions (three intraductal papillomas, three moderate or florid epithelial hyperplasias and one sclerosing adenosis). Of the sonographically detected architectural distortions, 35% were breast cancers or high-risk lesions and 35% were mild-risk lesions. Thus, a biopsy might be needed for an architectural distortion without an associated mass as depicted on breast ultrasound, even though the mammographic findings are normal.

  5. Sonographically Detected Architectural Distortion: Clinical Significance

    International Nuclear Information System (INIS)

    Kim, Shin Kee; Seo, Bo Kyoung; Yi, Ann; Cha, Sang Hoon; Kim, Baek Hyun; Cho, Kyu Ran; Kim, Young Sik; Son, Gil Soo; Kim, Young Soo; Kim, Hee Young

    2008-01-01

    Architectural distortion is a suspicious abnormality for the diagnosis of breast cancer. The aim of this study was to investigate the clinical significance of sonographically detected architectural distortion. From January 2006 to June 2008, 20 patients were identified who had sonographically detected architectural distortions without a history of trauma or surgery and without abnormal mammographic findings related to an architectural distortion. All of the lesions were pathologically verified. We evaluated the clinical and pathological findings and then assessed the clinical significance of the sonographically detected architectural distortions. Based on the clinical findings, one (5%) of the 20 patients had a palpable lump and the remaining 19 patients had no symptoms. No patient had a family history of breast cancer. Based on the pathological findings, three (15%) patients had malignancies. The malignant lesions included invasive ductal carcinomas (n = 2) and ductal carcinoma in situ (n = 1). Four (20%) patients had high-risk lesions: atypical ductal hyperplasia (n = 3) and lobular carcinoma in situ (n = 1). The remaining 13 (65%) patients had benign lesions; however, seven of these 13 (35% of the total) had mild-risk lesions (three intraductal papillomas, three moderate or florid epithelial hyperplasias and one sclerosing adenosis). Of the sonographically detected architectural distortions, 35% were breast cancers or high-risk lesions and 35% were mild-risk lesions. Thus, a biopsy might be needed for an architectural distortion without an associated mass as depicted on breast ultrasound, even though the mammographic findings are normal.

  6. Fuel failure detection and location methods in CAGRs

    International Nuclear Information System (INIS)

    Harris, A.M.

    1982-06-01

    The release of fission products from AGR fuel failures and the way in which the signals from such failures must be detected against the background signal from uranium contamination of the fuel is considered. Theoretical assessments of failure detection are used to show the limitations of the existing Electrostatic Wire Precipitator Burst Can Detection system (BCD) and how its operating parameters can be optimised. Two promising alternative methods, the 'split count' technique and the use of iodine measurements, are described. The results of a detailed study of the mechanical and electronic performance of the present BCD trolleys are given. The limited experience of detection and location of two fuel failures in CAGR using conventional and alternative methods is reviewed. The larger failure was detected and located using the conventional BCD equipment with a high confidence level. It is shown that smaller failures may not be easy to detect and locate using the current BCD equipment, and the second smaller failure probably remained in the reactor for about a year before it was discharged. The split count technique used with modified BCD equipment was able to detect the smaller failure after careful inspection of the data. (author)

  7. Improved GLR method to instrument failure detection

    International Nuclear Information System (INIS)

    Jeong, Hak Yeoung; Chang, Soon Heung

    1985-01-01

    The generalized likelihood ratio (GLR) method performs statistical tests on the innovations sequence of a Kalman-Bucy filter state estimator for system failure detection and identification. However, the major drawback of the conventional GLR method is that it must hypothesize a particular failure type in each case. In this paper, a method to overcome this drawback is proposed. The improved GLR method is applied to a PWR pressurizer and gives successful results in the detection and identification of any failure. Furthermore, some reduction in the processing time per cycle of failure detection and identification is achieved. (Author)
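
    For background, the conventional GLR test can be sketched for the simplest hypothesized failure type, a sustained mean shift in a Gaussian innovations sequence. The function below is a textbook single-failure-type version (the paper's contribution is precisely to avoid fixing the failure type in advance), so treat it as an illustration of the baseline rather than the proposed method:

```python
def glr_mean_shift(innovations, sigma=1.0):
    """Conventional GLR statistic for a sustained mean shift in a
    Kalman-filter innovations sequence (textbook single-failure-type
    sketch). For each candidate onset time k, the log-likelihood ratio
    of 'constant mean shift from k onward' against 'zero-mean noise' is
        l(k) = (sum_{i>=k} e_i)^2 / (2 * sigma^2 * (n - k)),
    and the detector maximises over k.

    Returns (max statistic, maximising onset index or None)."""
    n = len(innovations)
    best_stat, best_k = 0.0, None
    for k in range(n):
        s = sum(innovations[k:])
        stat = s * s / (2.0 * sigma * sigma * (n - k))
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k
```

    A failure would be declared when the maximised statistic exceeds a threshold chosen for a desired false-alarm rate.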

  8. Smart environment architecture for emotion detection and regulation.

    Science.gov (United States)

    Fernández-Caballero, Antonio; Martínez-Rodrigo, Arturo; Pastor, José Manuel; Castillo, José Carlos; Lozano-Monasor, Elena; López, María T; Zangróniz, Roberto; Latorre, José Miguel; Fernández-Sotos, Alicia

    2016-12-01

    This paper introduces an architecture as a proof-of-concept for emotion detection and regulation in smart health environments. The aim of the proposal is to detect the patient's emotional state by analysing his/her physiological signals, facial expression and behaviour. Then, the system provides the best-tailored actions in the environment to regulate these emotions towards a positive mood when possible. The current state-of-the-art in emotion regulation through music and colour/light is implemented with the final goal of enhancing the quality of life and care of the subject. The paper describes the three main parts of the architecture, namely "Emotion Detection", "Emotion Regulation" and "Emotion Feedback Control". "Emotion Detection" works with the data captured from the patient, whereas "Emotion Regulation" offers him/her different musical pieces and colour/light settings. "Emotion Feedback Control" performs as a feedback control loop to assess the effect of emotion regulation over emotion detection. We are currently testing the overall architecture and the intervention in real environments to achieve our final goal. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Rate based failure detection

    Science.gov (United States)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    2018-01-02

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, a desired latency, a minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.
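
    The subscription model described above can be sketched as a minimal rate check. The data structures, field ordering, and priority convention below are assumptions for illustration, not the patented design:

```python
def detect_rate_anomalies(subscriptions, observed):
    """Flag subscriptions whose observed publish rate fell below the
    minimum acceptable rate, returning them lowest-priority-number
    first as candidates for QoS adjustment.

    subscriptions: name -> (desired_rate, min_rate, priority)
    observed:      name -> measured publish rate (msgs/s)
    All field names and the tuple layout are illustrative."""
    anomalous = [name
                 for name, (_, min_rate, _) in subscriptions.items()
                 if observed.get(name, 0.0) < min_rate]
    # Order by priority so the QoS manager addresses them deterministically.
    anomalous.sort(key=lambda n: subscriptions[n][2])
    return anomalous
```

    A real implementation would feed this list into the system management component, which would then throttle data rates or reroute data paths.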

  10. Signal analysis for failure detection

    International Nuclear Information System (INIS)

    Parpaglione, M.C.; Perez, L.V.; Rubio, D.A.; Czibener, D.; D'Attellis, C.E.; Brudny, P.I.; Ruzzante, J.E.

    1994-01-01

    Several methods for the analysis of acoustic emission signals are presented. They are mainly oriented to the detection of changes in noisy signals and the characterization of higher-amplitude discrete pulses or bursts. The aim was to relate changes and events to failure, cracking or wear in materials, the final goal being to obtain automatic means of detecting such changes and/or events. Performance evaluation was made using both simulated and laboratory test signals. The methods presented are the following: 1. Application of the Hopfield neural network (NN) model to classifying faults in pipes and detecting wear of a bearing. 2. Application of the Kohonen and backpropagation neural network models to the same problem. 3. Application of Kalman filtering to determine the time of occurrence of bursts. 4. Application of a bank of Kalman filters (KF) for failure detection in pipes. 5. Study of the amplitude distribution of signals for detecting changes in their shape. 6. Application of the entropy distance to measure differences between signals. (author). 10 refs, 11 figs
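
    Item 6's "entropy distance" is not defined in the abstract; one plausible reading is a symmetrised Kullback-Leibler divergence between normalised amplitude histograms of two signals, sketched below as an assumption (the paper's exact definition may differ):

```python
import math

def entropy_distance(p, q, eps=1e-12):
    """Symmetrised Kullback-Leibler divergence between two normalised
    amplitude histograms p and q: KL(p||q) + KL(q||p). This is one
    plausible interpretation of the 'entropy distance' in item 6,
    not necessarily the paper's definition. eps guards log(0)."""
    def kl(a, b):
        return sum(x * math.log((x + eps) / (y + eps)) for x, y in zip(a, b))
    return kl(p, q) + kl(q, p)
```

    Identical histograms give a distance of zero, and the distance grows as the amplitude distributions of the two signals diverge.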

  11. Failure position detection device for nuclear fuel rod

    International Nuclear Information System (INIS)

    Ishida, Takeshi; Higuchi, Shin-ichi; Ito, Masaru; Matsuda, Yasuhiko

    1987-01-01

    Purpose: To easily detect the failure position of a nuclear fuel rod by moving an air-tightly sealed detection portion along the fuel rod. Constitution: To detect the failure position in a leaking fuel assembly, the fuel assembly is dismantled and a portion of the withdrawn fuel rod is air-tightly sealed within an inspection portion. The inside of the inspection portion is kept at reduced pressure. If failure openings are present in the portion of the fuel rod sealed by the inspection portion, fission product (FP) gases in the fuel rod are released under the reduced pressure and detected in the detection portion. Accordingly, by moving the detection portion along the fuel rod, the failure position can be detected. (Yoshino, Y.)

  12. Failure position detection device for nuclear fuel rod

    Energy Technology Data Exchange (ETDEWEB)

    Ishida, Takeshi; Higuchi, Shin-ichi; Ito, Masaru; Matsuda, Yasuhiko

    1987-03-24

    Purpose: To easily detect the failure position of a nuclear fuel rod by moving an air-tightly sealed detection portion along the fuel rod. Constitution: To detect the failure position in a leaking fuel assembly, the fuel assembly is dismantled and a portion of the withdrawn fuel rod is air-tightly sealed within an inspection portion. The inside of the inspection portion is kept at reduced pressure. If failure openings are present in the portion of the fuel rod sealed by the inspection portion, fission product (FP) gases in the fuel rod are released under the reduced pressure and detected in the detection portion. Accordingly, by moving the detection portion along the fuel rod, the failure position can be detected. (Yoshino, Y.)

  13. Fuel failure detection in operating reactors

    International Nuclear Information System (INIS)

    Seigel, B.; Hagen, H.H.

    1977-12-01

    Activity detectors in commercial BWRs and PWRs are examined to determine their capability to detect a small number of fuel rod failures during reactor operation. The off-gas system radiation monitor in a BWR and the letdown line radiation monitor in a PWR are calculated to have this capability, and events are cited that support this analysis. Other common detectors are found to be insensitive to small numbers of fuel failures. While adequate detectors exist for normal and transient operation, those detectors would not perform rapidly enough to be useful during accidents; in most accidents, however, primary system sensors (pressure, temperature, level) would provide adequate warning. Advanced methods of fuel failure detection are mentioned

  14. A failure detection and isolation system simulator

    International Nuclear Information System (INIS)

    Assumpcao Filho, E.O.; Nakata, H.

    1990-04-01

    A failure detection and isolation (FDI) system simulation program has been developed for IBM-PC microcomputers. The program, based on the sequential likelihood ratio testing method developed by A. Wald, was implemented with the Monte Carlo technique. The calculated failure detection rate compared favorably with wind-tunnel experiments using redundant temperature sensors. (author)
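
    Wald's sequential likelihood ratio test, on which the simulator is based, can be sketched for Gaussian observations with a hypothesized mean shift. The means, noise level, and error rates below are illustrative parameters, not values from the report:

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for a mean shift in
    Gaussian observations. H0: mean mu0 (normal), H1: mean mu1 (failed).
    alpha/beta are the target false-alarm and missed-detection rates.
    Returns ('H1', n) if failure is declared after n samples,
    ('H0', n) if normal operation is accepted, or ('continue', n)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Gaussian sample.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)
```

    A Monte Carlo study like the one described would run this test over many simulated sensor traces and tabulate the detection rate and average decision time.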

  15. Stability and performance of propulsion control systems with distributed control architectures and failures

    Science.gov (United States)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture in which the sensors and actuators are connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. A distributed engine control architecture will allow the implementation of advanced, active control techniques while achieving weight reduction, improved performance and lower life-cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on the ARINC 825 communication protocol. Stability conditions and a control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance in the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault-tolerant control design methodology is proposed to benefit from the availability of additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault-tolerant control design can help to reduce the performance degradation in the presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single-input and multiple-input multiple-output control design techniques.

  16. Detecting failure of climate predictions

    Science.gov (United States)

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in the northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
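
    The single-model goodness-of-fit idea, measuring divergence of observed values from the modelled distribution via the empirical distribution function, can be sketched with a Kolmogorov-Smirnov-style statistic. The exact statistic, weighting scheme, and threshold in the paper may differ, so this is an illustration of the concept only:

```python
def ecdf_distance(observations, model_cdf):
    """Kolmogorov-Smirnov-style distance between the empirical
    distribution function of the observations and a model's predictive
    CDF: the maximum absolute gap between the two. Failure of the
    prediction would be declared when this distance exceeds a critical
    value (threshold choice is not specified here)."""
    xs = sorted(observations)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        cdf = model_cdf(x)
        # The ECDF jumps at x: compare both sides of the step.
        d = max(d, abs(i / n - cdf), abs((i - 1) / n - cdf))
    return d
```

    Observations drawn from a distribution matching the model give a small distance, while a systematic shift in the observations drives the distance up.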

  17. National detection architectures at borders

    International Nuclear Information System (INIS)

    Astudillo Iraola, S.; Ortiz Olmo, A.

    2011-01-01

    Among the different types of smuggling, that of nuclear or radioactive material is surely one of the greatest concerns at the international level, mainly due to the possible consequences of a terrorist attack using such materials. The development of national architectures for the detection of nuclear and radioactive materials at borders is the international response to this threat. The concept can be defined as the set of systems, resources and infrastructure used in a coordinated manner to provide an adequate detection capability.

  18. Tomosynthesis-detected Architectural Distortion: Management Algorithm with Radiologic-Pathologic Correlation.

    Science.gov (United States)

    Durand, Melissa A; Wang, Steven; Hooley, Regina J; Raghu, Madhavi; Philpotts, Liane E

    2016-01-01

    As use of digital breast tomosynthesis becomes increasingly widespread, new management challenges are inevitable because tomosynthesis may reveal suspicious lesions not visible at conventional two-dimensional (2D) full-field digital mammography. Architectural distortion is a mammographic finding associated with a high positive predictive value for malignancy. It is detected more frequently at tomosynthesis than at 2D digital mammography and may even be occult at conventional 2D imaging. Few studies have focused on tomosynthesis-detected architectural distortions to date, and optimal management of these distortions has yet to be well defined. Since implementing tomosynthesis at our institution in 2011, we have learned some practical ways to assess architectural distortion. Because distortions may be subtle, tomosynthesis localization tools plus improved visualization of adjacent landmarks are crucial elements in guiding mammographic identification of elusive distortions. These same tools can guide more focused ultrasonography (US) of the breast, which facilitates detection and permits US-guided tissue sampling. Some distortions may be sonographically occult, in which case magnetic resonance imaging may be a reasonable option, both to increase diagnostic confidence and to provide a means for image-guided biopsy. As an alternative, tomosynthesis-guided biopsy, conventional stereotactic biopsy (when possible), or tomosynthesis-guided needle localization may be used to achieve tissue diagnosis. Practical uses for tomosynthesis in evaluation of architectural distortion are highlighted, potential complications are identified, and a working algorithm for management of tomosynthesis-detected architectural distortion is proposed. (©)RSNA, 2016.

  19. Artificial-neural-network-based failure detection and isolation

    Science.gov (United States)

    Sadok, Mokhtar; Gharsalli, Imed; Alouani, Ali T.

    1998-03-01

    This paper presents the design of a systematic failure detection and isolation system that uses the concept of failure sensitive variables (FSV) and artificial neural networks (ANN). The proposed approach was applied to tube leak detection in a utility boiler system. Results of the experimental testing are presented in the paper.

  20. The application of the detection filter to aircraft control surface and actuator failure detection and isolation

    Science.gov (United States)

    Bonnice, W. F.; Wagner, E.; Motyka, P.; Hall, S. R.

    1985-01-01

    The performance of the detection filter in detecting and isolating aircraft control surface and actuator failures is evaluated. The basic detection filter theory assumption of no direct input-output coupling is violated in this application due to the use of acceleration measurements for detecting and isolating failures. With this coupling, residuals produced by control surface failures may only be constrained to a known plane rather than to a single direction. A detection filter design with such planar failure signatures is presented, with the design issues briefly addressed. In addition, a modification to constrain the residual to a single known direction even with direct input-output coupling is also presented. Both the detection filter and the modification are tested using a nonlinear aircraft simulation. While no thresholds were selected, both filters demonstrated an ability to detect control surface and actuator failures. Failure isolation may be a problem if there are several control surfaces which produce similar effects on the aircraft. In addition, the detection filter was sensitive to wind turbulence and modeling errors.

  1. Detecting virological failure in HIV-infected Tanzanian children ...

    African Journals Online (AJOL)

    Background. The performance of clinical and immunological criteria to predict virological failure in HIV-infected children receiving antiretroviral therapy (ART) is not well documented. Objective. To determine the validity of clinical and immunological monitoring in detecting virological failure in children on ART. Methods.

  2. Failure Prediction And Detection In Cloud Datacenters

    Directory of Open Access Journals (Sweden)

    Purvil Bambharolia

    2017-09-01

    Cloud computing is a novel technology in the field of distributed computing, and its usage is increasing rapidly. In order to serve customers and businesses satisfactorily, faults occurring in datacenters and servers must be detected and predicted efficiently so that mechanisms to tolerate the resulting failures can be launched. A failure in one of the hosted datacenters may propagate to other datacenters and make the situation worse. To prevent such situations, one can predict a failure proliferating throughout the cloud computing system and launch mechanisms to deal with it proactively. One way to predict failures is to train a machine to predict failure on the basis of messages or logs passed between the various components of the cloud. During training, the machine can identify certain message patterns related to datacenter failures; later, it can check whether a given group of message logs follows such patterns. Moreover, each cloud server can be described by a state indicating whether it is running properly or facing some failure. Parameters such as CPU usage and memory usage can be maintained for each of the servers. Using these parameters, we can add a layer of detection in which a decision tree built on these parameters classifies whether the parameters passed to it indicate a failure state or a proper state.
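
    The detection layer described at the end of the abstract, a decision tree over server parameters, can be sketched with a tiny hand-built tree. The thresholds and features are invented for illustration; a real deployment would learn the tree from labelled server logs:

```python
def server_state(cpu_usage, mem_usage, io_wait):
    """Tiny hand-built decision tree classifying a server as 'failure'
    or 'proper' from resource parameters (all in the range 0.0-1.0).
    Thresholds and features are illustrative assumptions, not learned
    from real datacenter data."""
    if cpu_usage > 0.95:
        # Saturated CPU alone may be a busy-but-healthy server;
        # combined with high I/O wait it suggests a failing node.
        return "failure" if io_wait > 0.5 else "proper"
    if mem_usage > 0.9:
        return "failure"
    return "proper"
```

    In the architecture described above, this classifier would run alongside the message-pattern predictor, with monitored per-server parameters fed to it on each sampling interval.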

  3. Object Detection Based on Fast/Faster RCNN Employing Fully Convolutional Architectures

    Directory of Open Access Journals (Sweden)

    Yun Ren

    2018-01-01

    Modern object detectors, like traditional ones, consist of two major parts: a feature extractor and a feature classifier. Deeper and wider convolutional architectures are currently adopted as the feature extractor. However, many notable object detection systems such as Fast/Faster RCNN consider only simple fully connected layers as the feature classifier. In this paper, we argue that it is beneficial for detection performance to carefully design deep convolutional networks (ConvNets) of various depths for feature classification, especially using fully convolutional architectures. In addition, this paper demonstrates how to employ fully convolutional architectures in Fast/Faster RCNN. Experimental results show that a classifier based on convolutional layers is more effective for object detection than one based on fully connected layers, and that better detection performance can be achieved by employing deeper ConvNets as the feature classifier.

  4. Development of failure-detecting device for γ radioimmunoassay counter

    International Nuclear Information System (INIS)

    Shao Xianzhi; Zhang Bingfeng

    1997-01-01

    A failure-detecting device based on a single-chip microcomputer for detecting failures of a γ radioimmunoassay counter is developed. The device can output signals of variable amplitude and frequency, similar to the pulses produced by γ particles, for troubleshooting faulty parts of the γ counter's detection system. By automatically comparing the shapes and amplitudes of the two signals entering and leaving an amplifier unit, the device can determine whether the amplifier unit works normally. The differential-input amplifier circuit gives 0.1% accuracy in measuring the stability of the high voltage. The pulse-widening circuit of this device allows medium-speed A/D sampling of periodic low-frequency pulse waves of microsecond width. The device is used specifically for the maintenance and failure detection of γ radioimmunoassay counters.

  5. Architecture Level Safety Analyses for Safety-Critical Systems

    Directory of Open Access Journals (Sweden)

    K. S. Kushal

    2017-01-01

    The dependency of complex embedded Safety-Critical Systems in the avionics and aerospace domains on their underlying software and hardware components has gradually increased over time. Such systems are developed on a complex integrated architecture that is modular in nature. Engineering practices backed by system safety standards are necessary to manage failure, fault, and unsafe operational conditions. System safety analyses involve analysing the complex software architecture of the system, a major factor in fatal consequences in the behaviour of Safety-Critical Systems, and provide high reliability and dependability during development. In this paper, we propose an architecture fault modeling and safety analysis approach that aids in identifying and eliminating design flaws. The formal foundations of the SAE Architecture Analysis & Design Language (AADL) augmented with the Error Model Annex (EMV) are discussed. Fault propagation, failure behaviour, and the composite behaviour of the design flaws/failures are considered for architecture safety analysis. The proposed approach is validated by implementing the Speed Control Unit of a Power-Boat Autopilot (PBA) system. The Error Model Annex (EMV) is guided by the pattern of consideration and inclusion of probable failure scenarios and propagation of fault conditions in the Speed Control Unit of the Power-Boat Autopilot (PBA). This helps in validating the system architecture through detection of error events in the model and their impact in the operational environment. It also provides insight into the certification impact that these exceptional conditions pose at various criticality and design assurance levels, and its implications for verifying and validating the designs.

  6. Automatic patient respiration failure detection system with wireless transmission

    Science.gov (United States)

    Dimeff, J.; Pope, J. M.

    1968-01-01

    Automatic respiration failure detection system detects respiration failure in patients with a surgically implanted tracheostomy tube, and actuates an audible and/or visual alarm. The system incorporates a miniature radio transmitter so that the patient is unencumbered by wires yet can be monitored from a remote location.

  7. Device for detecting failure of reactor system

    International Nuclear Information System (INIS)

    Miyazawa, Tatsuo.

    1979-01-01

    Purpose: To make it possible to rapidly detect any failure in a reactor system prior to the leakage of coolant. Constitution: The beta-ray dose is computed from the difference between the output of a detector that responds to both beta and gamma rays and that of a detector responding only to gamma rays, thereby raising the detection speed and improving the detection accuracy. More specifically, radiation detector A detects both gamma and beta rays by means of piezoelectric elements. Radiation detector B is identical except that its aperture is covered with metal, so it detects only gamma rays. The outputs of detectors A and B are amplified by an amplifier and applied to a rate meter and a counter, converted into DC, and fed to a comparison circuit, where the two rate-meter outputs are compared. When the difference exceeds a predetermined range, a signal is supplied to an alarm circuit, which produces an alarm. (Nakamura, S.)
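
    The comparison logic described above can be sketched in a few lines. The detector names, count values, and threshold below are invented for illustration and are not taken from the patent:

```python
def beta_dose_alarm(det_a, det_b, threshold):
    """Detector A responds to beta + gamma rays; detector B, whose aperture is
    covered with metal, responds to gamma rays only. Their difference estimates
    the beta-ray contribution; exceeding the preset range raises an alarm."""
    beta_dose = det_a - det_b
    return beta_dose, beta_dose > threshold

# Normal operation: the detectors nearly agree, so the beta estimate stays small.
dose_ok, alarm_ok = beta_dose_alarm(1200.0, 1150.0, threshold=200.0)
# Failure: escaping fission products raise the beta count seen by detector A.
dose_bad, alarm_bad = beta_dose_alarm(1600.0, 1150.0, threshold=200.0)
```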

  8. Failure detection studies by layered neural network

    International Nuclear Information System (INIS)

    Ciftcioglu, O.; Seker, S.; Turkcan, E.

    1991-06-01

    Failure detection studies by layered neural network (NN) are described. The particular application area is an operating nuclear power plant, where failure detection is performed through real-time system surveillance. The NN consists of three layers, one of them hidden, and its parameters are determined adaptively by the backpropagation (BP) method during the training phase. Studies are performed using the power spectra of the pressure signal of the primary system of an operating PWR-type nuclear power plant. The studies revealed that the NN approach can carry out failure detection effectively by exploiting redundant information, as is done in this work: from measurements of the primary pressure signal one can estimate the primary-system coolant temperature, and hence the deviation from the operational temperature state, the operational status identified in the training phase being referred to as normal. (author). 13 refs.; 4 figs.; 2 tabs
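
    A toy version of such a three-layer network trained by backpropagation fits in a few dozen lines. The data below are random stand-ins for spectral features and the coolant-temperature target (no plant signals are reproduced), so the sketch only illustrates the training mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows of X play the role of power-spectrum features of the
# primary pressure signal; y plays the role of the coolant-temperature target.
X = rng.normal(size=(200, 8))
y = np.tanh(X @ rng.normal(size=(8, 1)))

# Three layers (input, one hidden, output), trained by plain backpropagation.
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1)       # hidden-layer activations
    return h, h @ W2          # linear output layer

_, out = forward(X)
loss_before = np.mean((out - y) ** 2)

for _ in range(500):
    h, out = forward(X)
    err = out - y                       # dLoss/dOut (up to a constant factor)
    gW2 = h.T @ err / len(X)
    gh = (err @ W2.T) * (1 - h ** 2)    # backpropagate through tanh
    gW1 = X.T @ gh / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

_, out = forward(X)
loss_after = np.mean((out - y) ** 2)
# In use, a large deviation between the network's temperature estimate and the
# measured value (relative to the "normal" state learned here) flags a failure.
```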

  9. Knowledge representation methods for early failure detection

    International Nuclear Information System (INIS)

    Scherer, K.P.; Stiller, P.

    1990-01-01

    To supervise technical processes like nuclear power plants, it is very important to detect failure modes at an early stage. At the nuclear research center at Karlsruhe, an expert system is being developed, embedded in a network of autonomous computers used for intelligent preprocessing. Events, process data and current parameter values are stored in slots of special frames in the knowledge base of the expert system. Both rule-based and fact-based knowledge representations are employed to generate cause-consequence chains of failure states. During on-line surveillance of the reactor process, the slots of the frames are updated dynamically. Immediately after evaluation, the inference engine starts the special domain experts (triggered by metarules from a manager) and detects the corresponding failure or anomaly state. By matching the members of the chain against a catalogue of instructions and messages telling the operator what to do, future failure states can be estimated and their propagation prevented. This amounts to qualitative failure prediction based on the cause-consequence chains in the static part of the knowledge base. In addition, a time series of physical data can be used to predict the future process state analytically and to continue such a theoretical propagation by matching it against the cause-consequence chain

  10. Triplexer Monitor Design for Failure Detection in FTTH System

    Science.gov (United States)

    Fu, Minglei; Le, Zichun; Hu, Jinhua; Fei, Xia

    2012-09-01

    The triplexer is one of the key components in FTTH systems, which employ an analog overlay channel for video broadcasting in addition to bidirectional digital transmission. To enhance the survivability of the triplexer as well as the robustness of the FTTH system, a multi-port device named the triplexer monitor was designed and realized, by which failures at triplexer ports can be detected and localized. The triplexer monitor is composed of integrated circuits, and its four input ports are connected to beam splitters with a power division ratio of 95∶5. By detecting the sampled optical signal from the beam splitters, the triplexer monitor tracks the status of the four triplexer ports (i.e. the 1310 nm, 1490 nm, 1550 nm and com ports). In this paper, the operation scenario of the triplexer monitor with external optical devices is addressed, and its integrated circuit structure is given. Furthermore, a failure localization algorithm based on a state transition diagram is proposed. In order to measure the failure detection and localization times for different failed ports, an experimental test-bed was built. Experimental results showed that the detection time for a failure at the 1310 nm port was less than 8.20 ms; for a failure at the 1490 nm or 1550 nm port it was less than 8.20 ms, and for a failure at the com port less than 7.20 ms.

  11. SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases

    Science.gov (United States)

    Pinzón, Cristian; de Paz, Yanira; Bajo, Javier; Abraham, Ajith; Corchado, Juan M.

    One of the main attacks on ubiquitous databases is the Structured Query Language (SQL) injection attack, which causes severe damage both commercially and to user confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which takes an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a kind of intelligent agent, which integrates a case-based reasoning system. This agent, the core of the architecture, allows the application of detection techniques based on anomalies as well as those based on patterns, providing a great degree of autonomy, flexibility, robustness and dynamic scalability. The characteristics of the multiagent system allow the architecture to detect attacks from different types of devices, regardless of physical location. The architecture has been tested on a medical database, guaranteeing safe access from various devices such as PDAs and notebook computers.

  12. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    Science.gov (United States)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of sensor measurements in the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
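
    The PCA/SPE/T² machinery is standard and compact enough to sketch with plain NumPy. Everything below is synthetic (four made-up redundant sensors, not FASSIP instrumentation), and a real deployment would use analytic control limits rather than an empirical percentile:

```python
import numpy as np

rng = np.random.default_rng(1)

# Four redundant synthetic sensors tracking one underlying process variable.
t = rng.normal(size=(500, 1))
normal = np.hstack([t + 0.05 * rng.normal(size=(500, 1)) for _ in range(4)])

mean = normal.mean(axis=0)
_, s, Vt = np.linalg.svd(normal - mean, full_matrices=False)
k = 1                                     # number of retained principal components
P = Vt[:k].T                              # loading matrix
lam = s[:k] ** 2 / (len(normal) - 1)      # retained eigenvalues

def spe_and_t2(x):
    xc = x - mean
    scores = xc @ P
    resid = xc - scores @ P.T
    return float(resid @ resid), float(np.sum(scores ** 2 / lam))

# Empirical 99% control limit for SPE from the training data.
spe_limit = np.percentile([spe_and_t2(x)[0] for x in normal], 99)

spe_ok, _ = spe_and_t2(normal[0])
faulty = normal[0].copy()
faulty[2] += 1.0                          # biased/stuck sensor reading
spe_bad, _ = spe_and_t2(faulty)           # breaks the sensor correlation -> large SPE
```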

  13. Investigation of support vector machine for the detection of architectural distortion in mammographic images

    International Nuclear Information System (INIS)

    Guo, Q; Shao, J; Ruiz, V

    2005-01-01

    This paper investigates the detection of architectural distortion in mammographic images using a support vector machine. The Hausdorff dimension is used to characterise the texture of mammographic images. A support vector machine, a learning machine based on statistical learning theory, is trained through supervised learning to detect architectural distortion. Compared to Radial Basis Function neural networks, the SVM produced more accurate classification results in distinguishing architectural distortion from normal breast parenchyma

  14. Investigation of support vector machine for the detection of architectural distortion in mammographic images

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Q [Department of Cybernetics, University of Reading, Reading RG6 6AY (United Kingdom); Shao, J [Department of Electronics, University of Kent at Canterbury, Kent CT2 7NT (United Kingdom); Ruiz, V [Department of Cybernetics, University of Reading, Reading RG6 6AY (United Kingdom)

    2005-01-01

    This paper investigates the detection of architectural distortion in mammographic images using a support vector machine. The Hausdorff dimension is used to characterise the texture of mammographic images. A support vector machine, a learning machine based on statistical learning theory, is trained through supervised learning to detect architectural distortion. Compared to Radial Basis Function neural networks, the SVM produced more accurate classification results in distinguishing architectural distortion from normal breast parenchyma.
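
    As an illustration of the classification step, the sketch below trains a linear soft-margin SVM by sub-gradient descent on the hinge loss, on synthetic two-dimensional stand-ins for texture features. The paper itself uses Hausdorff-dimension-based features and a trained SVM classifier; nothing here reproduces its data or exact model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic feature clusters: +1 = architectural distortion,
# -1 = normal parenchyma (made-up features, not mammographic data).
n = 200
X = np.vstack([rng.normal(loc=(1.5, 1.0), scale=0.5, size=(n, 2)),
               rng.normal(loc=(-1.5, -1.0), scale=0.5, size=(n, 2))])
y = np.array([1.0] * n + [-1.0] * n)

# Linear soft-margin SVM trained by sub-gradient descent on the hinge loss.
w, b, lam = np.zeros(2), 0.0, 0.01
for epoch in range(200):
    lr = 1.0 / (1.0 + epoch)
    mask = y * (X @ w + b) < 1.0            # margin violators
    w -= lr * (lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(X))
    b -= lr * (-(y[mask]).sum() / len(X))

accuracy = np.mean(np.sign(X @ w + b) == y)
```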

  15. FFTF fuel failure detection and characterization by cover gas monitoring. Final report

    International Nuclear Information System (INIS)

    Miller, W.C.; Holt, F.E.

    1977-01-01

    The Fast Flux Test Facility (FFTF) will include a Fuel Failure Monitoring (FFM) System designed to detect, characterize, and locate fuel and absorber pin failures (i.e., cladding breaches) using a combination of delayed neutron detection, cover gas radioisotope monitoring, and gas tagging. During the past several years the Hanford Engineering Development Laboratory has been involved in the development, design, procurement, and installation of this integrated system. The paper describes one portion of the FFM System, the Cover Gas Monitoring System (CGMS), which has the primary function of fuel failure detection and characterization in the FFTF. By monitoring the various radioisotopes in the cover gas, the CGMS will both detect fuel and absorber pin failures and characterize those failures as to magnitude and severity

  16. Implementing an Intrusion Detection System in the Mysea Architecture

    National Research Council Canada - National Science Library

    Tenhunen, Thomas

    2008-01-01

    … The objective of this thesis is to design an intrusion detection system (IDS) architecture that permits administrators operating on MYSEA client machines to conveniently view and analyze IDS alerts from the single level networks…

  17. Contribution to the physical study of sheath failure detections

    International Nuclear Information System (INIS)

    Mangin, Jean-Paul

    1968-11-01

    Since the study of an installation for detecting sheath failures requires knowledge of a great number of data spanning all fields of nuclear technology (fission mechanisms, sheath failure mechanisms, recoil of fission products, distribution of the heat-transfer fluid in the reactor, beta, gamma and neutron measurement techniques, nuclear safety, and so on), this report aims at highlighting some specific issues, particularly those related to sensors based on delayed neutrons. After recalling the principles of sheath failure detection, the author presents the various aspects of the formation of fission products and of their passage into the heat-transfer fluid: detection using delayed neutrons, detection by electrostatic collection, passage of fission products from the fuel into the coolant (recoil, corrosion, gaseous diffusion in the fuel), and formation of fission products in the fuel (fission product yield). He then studies the transport of fission products by the coolant from their place of origin to the place of measurement. Finally, he presents the measurement systems based on delayed-neutron detection and on electrostatic collection, reporting a sensitivity calculation, a background-noise assessment, the determination of detection thresholds, and the application of the sensitivity and detection-threshold calculations [fr

  18. Design of fuel failure detection system for multipurpose reactor GA. Siwabessy

    International Nuclear Information System (INIS)

    Sujalmo Saiful; Kuntoro Iman; Sato, Mitsugu; Isshiki, Masahiko.

    1992-01-01

    A fuel failure detection system (FFDS) has been designed for the reactor GA. Siwabessy. The FFDS detects fuel failure by observing delayed neutrons released by fission products such as N-17, I-137, Br-87 and Br-88 in the primary cooling system. The delayed neutrons are detected by four BF-3 neutron detectors located inside a Sampling Tank. The detector location has been determined in relation to the transit time from the reactor core outlet to the Sampling Tank, which is approximately 60 seconds. The neutron detection efficiency was calculated using a computer code named MORSE. The FFDS is capable of detecting, as quickly as possible, even a small failure of a fuel element occurring in the reactor core. The presence of an FFDS in a reactor must therefore be considered, in order to prevent further progression if a fuel failure occurs. (author)

  19. Proof-testing strategies induced by dangerous detected failures of safety-instrumented systems

    International Nuclear Information System (INIS)

    Liu, Yiliu; Rausand, Marvin

    2016-01-01

    Some dangerous failures of safety-instrumented systems (SISs) are detected almost immediately by diagnostic self-testing as dangerous detected (DD) failures, whereas other dangerous failures can only be detected by proof-testing and are therefore called dangerous undetected (DU) failures. An item may have a DU-failure and a DD-failure at the same time. After the repair of a DD-failure is completed, the maintenance team has two options: to perform an insert proof test for DU-failures or not. If an insert proof test is performed, it must also be decided whether the next scheduled proof test should be postponed or performed at the scheduled time. This paper analyzes the effects of different testing strategies on the safety performance of a single channel of a SIS. The safety performance is analyzed by Petri nets and by approximation formulas, and the results obtained by the two approaches are compared. It is shown that insert testing improves the safety performance of the channel, but the feasibility and cost of the strategy may hinder a recommendation of insert testing. - Highlights: • Identify the tests induced by detected failures. • Model the testing strategies following DD-failures. • Propose analytical formulas for the effects of strategies. • Simulate and verify the proposed models.
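
    Why insert testing helps can be seen from the standard single-channel approximation PFDavg ≈ λ_DU·τ/2, where τ is the time between proof tests. An insert test partway through the interval shortens the effective τ. The sketch below is a back-of-the-envelope illustration with invented rates, not the paper's Petri-net model:

```python
# Average probability of failure on demand (PFDavg) of a single channel with
# DU-failure rate lam_du and proof-test interval tau: PFDavg ~ lam_du * tau / 2.
# Rates and intervals below are illustrative only.
lam_du = 2e-6            # DU failures per hour
tau = 8760.0             # scheduled proof-test interval (one year, in hours)

pfd_scheduled = lam_du * tau / 2
# An insert proof test exactly halfway through the interval splits it in two,
# halving the average time since the last test and hence PFDavg:
pfd_with_insert = lam_du * (tau / 2) / 2
```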

  20. On the importance of controlling film architecture in detecting prostate specific antigen

    Science.gov (United States)

    Graça, Juliana Santos; Miyazaki, Celina Massumi; Shimizu, Flavio Makoto; Volpati, Diogo; Mejía-Salazar, J. R.; Oliveira, Osvaldo N., Jr.; Ferreira, Marystela

    2018-03-01

    Immunosensors made with nanostructured films are promising for detecting cancer biomarkers, even at early stages of the disease, but this requires control of film architecture to preserve the biological activity of immobilized antibodies. In this study, we used electrochemical impedance spectroscopy (EIS) to detect Prostate Specific Antigen (PSA) with immunosensors produced with layer-by-layer (LbL) films containing anti-PSA antibodies in two distinct film architectures. The antibodies were either adsorbed from solutions in which they were free, or from solutions where they were incorporated into liposomes of dipalmitoyl phosphatidyl glycerol (DPPG). Incorporation into DPPG liposomes was confirmed with surface plasmon resonance experiments, while the importance of electrostatic interactions on the electrical response was highlighted using the Finite Difference Time-Domain Method (FDTD). The sensitivity of both architectures was sufficient to detect the threshold value to diagnose prostate cancer (ca. 4 ng mL-1). In contrast to expectation, the sensor with the antibodies incorporated into DPPG liposomes had lower sensitivity, though the range of concentrations amenable to detection increased, according to the fitting of the EIS data using the Langmuir-Freundlich adsorption model. The performance of the two film architectures was compared qualitatively by plotting the data with a multidimensional projection technique, which constitutes a generic approach for optimizing immunosensors and other types of sensors.

  1. In-line 3D print failure detection using computer vision

    DEFF Research Database (Denmark)

    Lyngby, Rasmus Ahrenkiel; Wilm, Jakob; Eiríksson, Eyþór Rúnar

    2017-01-01

    Here we present our findings on a novel real-time vision system that allows for automatic detection of failure conditions that are considered outside of nominal operation. These failure modes include warping, build plate delamination and extrusion failure. Our system consists of a calibrated camera…

  2. Incipient failure detection of space shuttle main engine turbopump bearings using vibration envelope detection

    Science.gov (United States)

    Hopson, Charles B.

    1987-01-01

    The results of an analysis performed on seven successive Space Shuttle Main Engine (SSME) static test firings, utilizing envelope detection of external accelerometer data are discussed. The results clearly show the great potential for using envelope detection techniques in SSME incipient failure detection.

  3. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    Science.gov (United States)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  4. Multidrug-resistant tuberculosis treatment failure detection depends on monitoring interval and microbiological method

    Science.gov (United States)

    White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret

    2016-01-01

    Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552

  5. Conflict detection and resolution system architecture for unmanned aerial vehicles in civil airspace

    NARCIS (Netherlands)

    Jenie, Y.I.; van Kampen, E.J.; Ellerbroek, J.; Hoekstra, J.M.

    2015-01-01

    A novel architecture for a general Unmanned Aerial Vehicle (UAV) Conflict Detection and Resolution (CD&R) system, in the context of their integration into civilian airspace, is proposed in this paper. The architecture consists of layers of safety approaches, each representing a combination of…

  6. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
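
    The idea can be illustrated with the logistic map: two healthy nodes iterating the same map from the same seed agree bit-for-bit, while a single tiny arithmetic error on a faulty node is amplified exponentially by the map's sensitivity to initial conditions, so a plain trajectory comparison exposes it. Map choice and parameters are illustrative, not taken from the patent:

```python
def trajectory(x0, steps, r=3.99, fault_at=None, eps=1e-9):
    """Iterate the logistic map x <- r*x*(1-x); optionally inject a tiny
    arithmetic error at one step, modelling a faulty processing element."""
    x, out = x0, []
    for i in range(steps):
        x = r * x * (1.0 - x)
        if i == fault_at:
            x += eps
        out.append(x)
    return out

ref = trajectory(0.123456, 200)               # reference node
ok = trajectory(0.123456, 200)                # healthy peer: identical trajectory
bad = trajectory(0.123456, 200, fault_at=50)  # one 1e-9 error at step 50

# Chaos amplifies the perturbation until the trajectories disagree grossly.
max_dev = max(abs(a - b) for a, b in zip(ref, bad))
```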

  7. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. Well-defined error detection and error reporting methods are necessary to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  8. Detection of sensor failures in nuclear plants using analytic redundancy

    International Nuclear Information System (INIS)

    Kitamura, M.

    1980-01-01

    A method for on-line, nonperturbative detection and identification of sensor failures in nuclear power plants was studied to determine its feasibility. This method is called analytic redundancy, or functional redundancy. Sensor failure has traditionally been detected by comparing multiple signals from redundant sensors, such as in two-out-of-three logic. In analytic redundancy, with the help of an assumed model of the physical system, the signals from a set of sensors are processed to reproduce the signals from all system sensors
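
    A minimal sketch of analytic redundancy, with an invented steady-state energy balance standing in for the plant model (the relation, numbers, threshold, and sensor names are all illustrative, not from the study):

```python
def predicted_outlet_temp(inlet_temp_c, power_mw, flow_kg_s, cp_kj=5.2):
    """Steady-state energy balance: T_out = T_in + P / (m_dot * c_p)."""
    return inlet_temp_c + power_mw * 1e3 / (flow_kg_s * cp_kj)

def residual(measured_outlet_c, inlet_temp_c, power_mw, flow_kg_s):
    """Analytic redundancy: reconstruct the outlet-temperature reading from the
    other sensors via the model and compare it with the actual sensor output."""
    return abs(measured_outlet_c
               - predicted_outlet_temp(inlet_temp_c, power_mw, flow_kg_s))

THRESHOLD_C = 5.0   # would be tuned to model and sensor-noise uncertainty

r_ok = residual(304.0, 300.0, 10.0, 500.0)    # reading consistent with the model
r_bad = residual(330.0, 300.0, 10.0, 500.0)   # drifted outlet-temperature sensor
```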

  9. Filter design for failure detection and isolation in the presence of modeling errors and disturbances

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1996-01-01

    The design problem of filters for robust failure detection and isolation (FDI) is addressed in this paper. The failure detection problem is considered with respect to both modeling errors and disturbances, using both an approach based on failure detection observers and an approach based…

  10. Cyber-Physical Architecture Assisted by Programmable Networking

    OpenAIRE

    Rubio-Hernan, Jose; Sahay, Rishikesh; De Cicco, Luca; Garcia-Alfaro, Joaquin

    2018-01-01

    Cyber-physical technologies are prone to attacks, in addition to faults and failures. The issue of protecting cyber-physical systems should be tackled by jointly addressing security in both the cyber and physical domains, in order to promptly detect and mitigate cyber-physical threats. Towards this end, this letter proposes a new architecture combining control-theoretic solutions with programmable networking techniques to jointly handle crucial threats to cyber-physical systems. The architecture…

  11. Detection of Failure in Asynchronous Motor Using Soft Computing Method

    Science.gov (United States)

    Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.

    2018-04-01

    This paper investigates stator short-winding failures of asynchronous motors and their effects on motor current spectra. A fuzzy logic approach, i.e., a model-based technique, can help detect asynchronous motor failures: fuzzy logic, like human reasoning, enables inference in linguistic terms from vague data. A dynamic model of the asynchronous motor is developed using a fuzzy logic classifier to investigate stator inter-turn failures as well as open-phase failures. A hardware implementation was carried out with LabVIEW for the online monitoring of faults.

  12. Fuzzy modeling of analytical redundancy for sensor failure detection

    International Nuclear Information System (INIS)

    Tsai, T.M.; Chou, H.P.

    1991-01-01

    Failure detection and isolation (FDI) in dynamic systems may be accomplished by testing the consistency of the system via analytically redundant relations. A redundant relation is basically a mathematical model relating system inputs and dissimilar sensor outputs, from which information is extracted and subsequently examined for the presence of failure signatures. Performance of the approach is often jeopardized by inherent modeling error and noise interference. To mitigate such effects, techniques such as Kalman filtering and autoregressive moving-average (ARMA) modeling in conjunction with probability tests are often employed. These conventional techniques treat the stochastic nature of uncertainties in a deterministic manner, generating best-estimate model and sensor outputs by minimizing uncertainties. In this paper, the authors present a different approach that treats the effect of uncertainties with fuzzy numbers. Coefficients in redundant relations derived from first-principle physical models are considered fuzzy parameters and are updated on-line according to system behavior. Failure detection is accomplished by examining the possibility that a sensor signal occurred in an estimated fuzzy domain. To facilitate failure isolation, individual FDI monitors are designed for each sensor of interest
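
    The possibility test at the core of the approach can be sketched with a triangular fuzzy number; the estimate, spread, and cut level below are invented for illustration:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy number (a, b, c): support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzy estimate of a sensor value from a redundant relation, with modeling
# uncertainty folded into the spread.
ESTIMATE = (95.0, 100.0, 105.0)

def failure_suspected(reading, alpha=0.3):
    """Flag a failure when the possibility of the reading lying in the
    estimated fuzzy domain drops below the cut level alpha."""
    return tri_membership(reading, *ESTIMATE) < alpha

ok = failure_suspected(101.0)    # possibility 0.8 -> consistent with the model
bad = failure_suspected(109.0)   # possibility 0.0 -> failure signature
```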

  13. Self-Healing Many-Core Architecture: Analysis and Evaluation

    Directory of Open Access Journals (Sweden)

    Arezoo Kamran

    2016-01-01

    Full Text Available More pronounced aging effects, more frequent early-life failures, and incomplete testing and verification processes due to time-to-market pressure in new fabrication technologies impose reliability challenges on forthcoming systems. A promising solution to these reliability challenges is self-test and self-reconfiguration with no or limited external control. In this work a scalable self-test mechanism for periodic online testing of many-core processors is proposed. This test mechanism facilitates autonomous detection and omission of faulty cores and makes graceful degradation of the many-core architecture possible. Several test components are incorporated in the many-core architecture to distribute test stimuli, suspend the normal operation of individual processing cores, apply tests, and detect faulty cores. Testing is performed concurrently with the system's normal operation, without any noticeable downtime at the application level. Experimental results show that the proposed test architecture is highly scalable in terms of hardware and performance overhead, which makes it applicable to many-cores with more than a thousand processing cores.

  14. Detection of architectural distortion in prior screening mammograms using Gabor filters, phase portraits, fractal dimension, and texture analysis

    International Nuclear Information System (INIS)

    Rangayyan, Rangaraj M.; Prajna, Shormistha; Ayres, Fabio J.; Desautels, J.E.L.

    2008-01-01

    Mammography is a widely used screening tool for the early detection of breast cancer. One of the commonly missed signs of breast cancer is architectural distortion. The purpose of this study is to explore the application of fractal analysis and texture measures for the detection of architectural distortion in screening mammograms taken prior to the detection of breast cancer. A method based on Gabor filters and phase portrait analysis was used to detect initial candidates for sites of architectural distortion. A total of 386 regions of interest (ROIs) were automatically obtained from 14 "prior mammograms", including 21 ROIs related to architectural distortion. From the corresponding set of 14 "detection mammograms", 398 ROIs were obtained, including 18 related to breast cancer. For each ROI, the fractal dimension and Haralick's texture features were computed. The fractal dimension of the ROIs was calculated using the circular average power spectrum technique. The average fractal dimension of the normal (false-positive) ROIs was significantly higher than that of the ROIs with architectural distortion (p = 0.006). For the "prior mammograms", the best receiver operating characteristics (ROC) performance achieved, in terms of the area under the ROC curve, was 0.80 with a Bayesian classifier using four features including fractal dimension, entropy, sum entropy, and inverse difference moment. Analysis of the performance of the methods with free-response receiver operating characteristics indicated a sensitivity of 0.79 at 8.4 false positives per image in the detection of sites of architectural distortion in the "prior mammograms". Fractal dimension offers a promising way to detect the presence of architectural distortion in prior mammograms. (orig.)
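
    Fractal dimension can be estimated in several ways; the study uses the circular average power spectrum, but the simpler box-counting estimator below illustrates the concept on binary test images, where a straight line should give D ≈ 1 and a filled region D ≈ 2:

```python
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension of a binary image by box counting: count the
    boxes of side s containing any foreground pixel, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    n = img.shape[0]
    for s in sizes:
        m = n // s
        blocks = img[:m * s, :m * s].reshape(m, s, m, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

n = 128
line = np.zeros((n, n), dtype=bool)
line[n // 2, :] = True                  # a one-pixel-wide straight line
square = np.ones((n, n), dtype=bool)    # a completely filled region

d_line = box_count_dimension(line)      # close to 1.0
d_square = box_count_dimension(square)  # close to 2.0
```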

  15. Device for detecting imminent failure of high-dielectric stress capacitors. [Patent application

    Science.gov (United States)

    McDuff, G.G.

    1980-11-05

    A device is described for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations of digital data supplied by detection circuitry for comparison of pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry or any circuitry where capacitors or capacitor banks are utilized.
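
The comparison logic the patent describes, digitise each pulse and test its width and magnitude against preselected ranges, reduces to a few lines. The class name and threshold values below are illustrative assumptions, not the patented circuitry.

```python
class CapacitorPulseMonitor:
    """Flag imminent capacitor failure when a pulse's width or magnitude
    falls outside its preselected acceptance range (illustrative sketch)."""

    def __init__(self, width_range, magnitude_range):
        self.width_range = width_range          # (lo, hi), e.g. microseconds
        self.magnitude_range = magnitude_range  # (lo, hi), e.g. volts

    def check(self, width, magnitude):
        """Return True if this pulse indicates imminent failure."""
        w_lo, w_hi = self.width_range
        m_lo, m_hi = self.magnitude_range
        return not (w_lo <= width <= w_hi and m_lo <= magnitude <= m_hi)
```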

  16. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Science.gov (United States)

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
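
A minimal version of the K-means route to wood failure percentage might look like the following. The 2-class clustering on raw intensities and the brighter-cluster-is-wood assumption are ours, not necessarily the paper's exact pipeline.

```python
import numpy as np

def wood_failure_percentage(gray, iters=20):
    """Estimate WFP by 2-class k-means on pixel intensities, assuming
    torn wood fibre images brighter than exposed adhesive (a sketch)."""
    px = gray.astype(float).ravel()
    c = np.array([px.min(), px.max()])            # two initial centroids
    for _ in range(iters):
        labels = np.abs(px[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = px[labels == k].mean()     # recompute centroids
    wood = labels == c.argmax()                   # brighter cluster = wood failure
    return 100.0 * wood.mean()
```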

  17. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    Science.gov (United States)

    2015-06-01

    [Report documentation page: performing organization in Syracuse; contract number FA8750-12-1-0251.] The anomaly-detection workload was accelerated on heterogeneous processors including graphic processor units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups, which can enable real-time processing.

  18. Method of detecting fuel failure in FBR type reactor and method of estimating fuel failure position

    International Nuclear Information System (INIS)

    Sonoda, Yukio; Tamaoki, Tetsuo

    1989-01-01

    Noise components present in the normal state of the detection signals from delayed neutron monitors, disposed at a coolant inlet, etc. of an intermediate heat exchanger, are forecast by a self-recurring model and eliminated, and the resultant detection signals are monitored, thereby detecting fuel failure with high sensitivity. Subsequently, the reactor is brought to a low power operation state and a new self-recurring model for the detection signals from the delayed neutron monitors is prepared. Then, noise components in this state are removed, and control rods near the delayed neutron monitors are withdrawn in short strokes successively to examine the change in response of the delayed neutron monitors. Accordingly, the failed position of each fuel can be estimated at the level of one fuel assembly, or of several assemblies containing the above-mentioned fuel assembly. Since fuel failure can be detected with high sensitivity and the position can be estimated, spread of the abnormality can be prevented and plant shutdown for fuel exchange can be minimized. (I.S.)
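
The forecast-and-subtract idea, model the normal-state noise and flag detector samples that deviate from the forecast, can be sketched with an ordinary least-squares autoregressive model (our stand-in for the patent's "self-recurring model"; the order and threshold are assumptions):

```python
import numpy as np

def fit_ar(x, p=4):
    """Least-squares AR(p) coefficients for a 1-D signal."""
    n = len(x)
    X = np.empty((n - p, p))
    for j in range(1, p + 1):
        X[:, j - 1] = x[p - j:n - j]     # column j-1 holds lag-j samples
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def residuals(x, coef):
    """One-step-ahead prediction errors of the AR model on signal x."""
    p, n = len(coef), len(x)
    X = np.empty((n - p, p))
    for j in range(1, p + 1):
        X[:, j - 1] = x[p - j:n - j]
    return x[p:] - X @ coef

def failure_flags(train, test, p=4, k=5.0):
    """Flag test samples whose residual exceeds k standard deviations
    of the normal-state residuals (k is an assumed threshold)."""
    coef = fit_ar(train, p)
    sigma = residuals(train, coef).std()
    return np.abs(residuals(test, coef)) > k * sigma
```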

  19. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    Science.gov (United States)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented with three main components. The first component performs detection when one of the considered failures is present in the system. The second component identifies the failure category and classifies it according to the failed element. During the third phase, a general evaluation of the failure is performed, with estimation of the magnitude/severity of the failure and prediction of its effect on reducing the flight envelope of the aircraft system. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also presented.

  20. Comparison of digital mammography and digital breast tomosynthesis in the detection of architectural distortion.

    Science.gov (United States)

    Dibble, Elizabeth H; Lourenco, Ana P; Baird, Grayson L; Ward, Robert C; Maynard, A Stanley; Mainiero, Martha B

    2018-01-01

    To compare interobserver variability (IOV), reader confidence, and sensitivity/specificity in detecting architectural distortion (AD) on digital mammography (DM) versus digital breast tomosynthesis (DBT). This IRB-approved, HIPAA-compliant reader study used a counterbalanced experimental design. We searched radiology reports for AD on screening mammograms from 5 March 2012 to 27 November 2013. Cases were consensus-reviewed. Controls were selected from demographically matched non-AD examinations. Two radiologists and two fellows blinded to outcomes independently reviewed images from two patient groups in two sessions. Readers recorded the presence/absence of AD and their confidence level. Agreement and differences in confidence and sensitivity/specificity between DBT versus DM and attendings versus fellows were examined using weighted Kappa and generalised mixed modeling, respectively. There were 59 AD patients and 59 controls for 1,888 observations (59 × 2 (cases and controls) × 2 breasts × 2 imaging techniques × 4 readers). For all readers, agreement improved with DBT versus DM (0.61 vs. 0.37). Confidence was higher with DBT, p = .001. DBT achieved higher sensitivity (.59 vs. .32), p < .001, while specificity remained high for both techniques (> .90). DBT achieved higher positive likelihood ratio values, smaller negative likelihood ratio values, and larger ROC values. DBT decreases IOV, increases confidence, and improves sensitivity while maintaining high specificity in detecting AD. • Digital breast tomosynthesis decreases interobserver variability in the detection of architectural distortion. • Digital breast tomosynthesis increases reader confidence in the detection of architectural distortion. • Digital breast tomosynthesis improves sensitivity in the detection of architectural distortion.
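
The likelihood-ratio comparison reported above reduces to two one-line formulas. A quick sketch (the 0.90 specificity used in the check below is an illustrative value consistent with the abstract's "> .90", not an exact figure from the paper):

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg
```

With DBT's sensitivity of .59 and a specificity of .90, this gives LR+ of about 5.9 and LR- of about 0.46.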

  1. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective: One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods: The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results: Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion: The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated to the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)

  2. Penstock failure detection system at the 'Valsan' hydro power plant

    International Nuclear Information System (INIS)

    Georgescu, A M; Coşoiu, C I; Alboiu, N; Hlevca, D; Tataroiu, R; Popescu, O

    2012-01-01

    'Valsan' is a small Hydro Power Plant, 5 MW, situated about 160 km north of Bucharest, Romania, on the small 'Valsan' river in a remote mountainous area. It is equipped with a single Francis turbine. The penstock is located in the access shaft of the HPP. 'Hidroelectrica', the Romanian company that operates the HPP, was trying to implement a remote penstock failure detection system. Starting from a classic hydraulic problem, the authors of the paper derived a method for failure detection and localization on the pipe. The method assumes the existence of 2 flow meters and 2 pressure transducers at the inlet and outlet of the pressurized pipe. Calculations have to be based on experimental values measured under steady-state conditions for different values of the flow rate. The method was first tested on a pipe in the Hydraulic Laboratory of the Technical University of Civil Engineering Bucharest. Pipe failure was modelled by opening a valve on a tee branch of the analyzed pipe. Experimental results were found to be in good agreement with theoretical ones. The penstock of the 'Valsan' HPP was modelled in EPANET in order to: i) test the method at a larger scale; ii) select the flow and pressure transducers needed to implement it. At the request of 'Hidroelectrica', a routine that computes the efficiency of the turbine was added to the monitoring software. After the system was implemented, another series of measurements was performed at the site in order to validate it. Failure was modelled by opening an existing valve on a branch of the penstock. Detection of the failure was correct and almost instantaneous, while failure localization was accurate within 5% of the total penstock length.
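
The localisation step can be illustrated with a steady-state head-balance sketch: the hydraulic grade line drawn from the inlet (at the inlet flow rate) and the one drawn back from the outlet (at the outlet flow rate) intersect at the leak. The Darcy-Weisbach constant and all symbols below are our assumptions; the paper's calibrated EPANET model is not reproduced here.

```python
def leak_location(L, D, f, q_in, q_out, h_in, h_out):
    """Distance of the leak from the inlet of a pressurised pipe.

    L     pipe length [m]       D      diameter [m]    f  Darcy friction factor
    q_in  inlet flow [m^3/s]    q_out  outlet flow     h_in/h_out  heads [m]
    Friction slope from Darcy-Weisbach: S(Q) ~ 0.0826 * f * Q**2 / D**5.
    """
    s_in = 0.0826 * f * q_in ** 2 / D ** 5    # slope upstream of the leak [m/m]
    s_out = 0.0826 * f * q_out ** 2 / D ** 5  # slope downstream of the leak
    # Heads match at the leak: h_in - s_in * x = h_out + s_out * (L - x)
    return (h_in - h_out - s_out * L) / (s_in - s_out)
```

The leak flow itself is simply q_in - q_out, so the two flow meters detect the failure and the two pressure transducers localise it.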

  4. SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases

    OpenAIRE

    Pinzón, Cristian; de Paz Santana, Yanira; Bajo Pérez, Javier; Abraham, Ajith P.; Corchado Rodríguez, Juan M.

    2009-01-01

    One of the main attacks on ubiquitous databases is the structured query language (SQL) injection attack, which causes severe damage both commercially and to user confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which involves an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a k...

  5. Java Architecture for Detect and Avoid Extensibility and Modeling

    Science.gov (United States)

    Santiago, Confesor; Mueller, Eric Richard; Johnson, Marcus A.; Abramson, Michael; Snow, James William

    2015-01-01

    Unmanned aircraft will be equipped with a detect-and-avoid (DAA) system that enables them to comply with the requirement to "see and avoid" other aircraft, an important layer in the overall set of procedural, strategic and tactical separation methods designed to prevent mid-air collisions. This paper describes a capability called Java Architecture for Detect and Avoid Extensibility and Modeling (JADEM), developed to prototype and help evaluate various DAA technological requirements by providing a flexible and extensible software platform that models all major detect-and-avoid functions. Figure 1 illustrates JADEM's architecture. The surveillance module can be actual equipment on the unmanned aircraft or simulators that model the process by which on-board sensors detect other aircraft and provide track data to the traffic display. The track evaluation function evaluates each detected aircraft and decides whether to provide an alert to the pilot and its severity. Guidance is a combination of intruder track information, alerting, and avoidance/advisory algorithms behind the tools shown on the traffic display to aid the pilot in determining a maneuver to avoid a loss of well clear. All these functions are designed with a common interface and configurable implementation, which is critical in exploring DAA requirements. To date, JADEM has been utilized in three computer simulations of the National Airspace System, three pilot-in-the-loop experiments using a total of 37 professional UAS pilots, and two flight tests using NASA's Predator-B unmanned aircraft, named Ikhana. The data collected have directly informed the quantitative separation standard for "well clear", the safety case, requirements development, and the operational environment for the DAA minimum operational performance standards. This work was performed by the Separation Assurance/Sense and Avoid Interoperability team under NASA's UAS Integration in the NAS project.
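
The core of the alerting function, deciding whether a predicted loss of well clear warrants an alert, can be sketched with a horizontal closest-point-of-approach test. The 4,000 ft (~1,219 m) and 35 s thresholds are assumptions drawn from the public DAA well-clear definition, not JADEM's actual configuration, and the vertical dimension is omitted.

```python
import math

def well_clear_alert(px, py, vx, vy, hmd_star=1219.0, tau_star=35.0):
    """(px, py): intruder position relative to ownship [m];
    (vx, vy): relative velocity [m/s].  Alert if the horizontal miss
    distance at closest approach falls inside hmd_star within tau_star s."""
    v2 = vx * vx + vy * vy
    t_cpa = 0.0 if v2 == 0.0 else max(0.0, -(px * vx + py * vy) / v2)
    if t_cpa > tau_star:
        return False                      # closest approach too far in the future
    mx, my = px + vx * t_cpa, py + vy * t_cpa
    return math.hypot(mx, my) < hmd_star  # predicted miss distance
```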

  6. Architectural design for a low cost FPGA-based traffic signal detection system in vehicles

    Science.gov (United States)

    López, Ignacio; Salvador, Rubén; Alarcón, Jaime; Moreno, Félix

    2007-05-01

    In this paper we propose an architecture for an embedded traffic signal detection system. Development of Advanced Driver Assistance Systems (ADAS) is currently one of the major research trends in the automotive field. Examples of past and ongoing projects in the field are CHAMELEON ("Pre-Crash Application all around the vehicle", IST 1999-10108), PREVENT (Preventive and Active Safety Applications, FP6-507075, http://www.prevent-ip.org/) and AVRT in the US (Advanced Vision-Radar Threat Detection (AVRT): A Pre-Crash Detection and Active Safety System). A major interest can be observed in systems for real-time analysis of complex driving scenarios, evaluating risk and anticipating collisions. The system will use a low cost CCD camera on the dashboard facing the road. The images will be processed by an Altera Cyclone family FPGA. The board does median and Sobel filtering of the incoming frames at PAL rate, and analyzes them for several categories of signals. The result is conveyed to the driver. The scarce resources provided by the hardware require an architecture designed for optimal use. The system will use a combination of neural networks and an adapted blackboard architecture. Several neural networks will be used in sequence for image analysis, by reconfiguring a single, generic hardware neural network in the FPGA. This generic network is optimized for speed, in order to admit several executions within one frame period. The sequence will follow the execution cycle of the blackboard architecture. The global blackboard architecture being developed and the hardware architecture for the generic, reconfigurable FPGA perceptron are explained in this paper. The project is still at an early stage; however, some hardware implementation results are already available and are offered in the paper.
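
The Sobel stage of the dashboard pipeline, expressed in plain NumPy for clarity. The FPGA computes this in fixed-point hardware; the function below is only a behavioural reference sketch.

```python
import numpy as np

def sobel_magnitude(img):
    """3x3 Sobel gradient magnitude with edge-replicated borders."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                    # vertical gradient
    h, w = img.shape
    out = np.zeros((h, w))
    pad = np.pad(img.astype(float), 1, mode="edge")
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            out[y, x] = np.hypot((win * kx).sum(), (win * ky).sum())
    return out
```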

  7. ATLANTIDES: An Architecture for Alert Verification in Network Intrusion Detection Systems

    NARCIS (Netherlands)

    Bolzoni, D.; Crispo, Bruno; Etalle, Sandro

    2007-01-01

    We present an architecture designed for alert verification (i.e., to reduce false positives) in network intrusion-detection systems. Our technique is based on a systematic (and automatic) anomaly-based analysis of the system output, which provides useful context information regarding the network

  8. An Energy-Efficient Multi-Tier Architecture for Fall Detection Using Smartphones.

    Science.gov (United States)

    Guvensan, M Amac; Kansiz, A Oguz; Camgoz, N Cihan; Turkmen, H Irem; Yavuz, A Gokhan; Karsligil, M Elif

    2017-06-23

    Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy-efficiency without compromising on the fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented in a mobile application, called uSurvive, for Android smartphones. It runs as a background service, monitors the activities of a person in daily life, and automatically sends a notification to the appropriate authorities and/or user-defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real-life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine-learning-only solutions. In addition to this energy saving, the hybrid method achieves 93% accuracy, which is superior to thresholding methods and better than machine-learning-only solutions.
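
The energy-saving principle, a cheap always-on threshold that gates the expensive classifier, can be sketched as follows. The 2.5 g threshold and function names are our assumptions, not uSurvive's actual parameters.

```python
import math

def tier1_threshold(ax, ay, az, g=9.81, thresh=2.5):
    """Cheap always-on test: total acceleration magnitude in g's.
    The 2.5 g cut-off is an assumed, illustrative value."""
    return math.sqrt(ax * ax + ay * ay + az * az) / g > thresh

def detect_fall(samples, classifier):
    """Run the expensive classifier (tiers 2/3) only on samples that pass
    the tier-1 threshold -- the energy-saving idea of the 3-tier design."""
    for ax, ay, az in samples:
        if tier1_threshold(ax, ay, az) and classifier((ax, ay, az)):
            return True
    return False
```

Because ordinary activity never crosses tier 1, the classifier (and its feature extraction) stays idle almost all of the time, which is where the energy saving comes from.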

  9. Real time failure detection in unreinforced cementitious composites with triboluminescent sensor

    International Nuclear Information System (INIS)

    Olawale, David O.; Kliewer, Kaitlyn; Okoye, Annuli; Dickens, Tarik J.; Uddin, Mohammed J.; Okoli, Okenwa I.

    2014-01-01

    The in-situ triboluminescent optical fiber (ITOF) sensor has an integrated sensing and transmission component that converts the energy from damage events like impacts and crack propagation into optical signals that are indicative of the magnitude of damage in composite structures like concrete bridges. Utilizing the triboluminescence (TL) property of ZnS:Mn, the ITOF sensor has been successfully integrated into unreinforced cementitious composite beams to create multifunctional smart structures with in-situ failure detection capabilities. The fabricated beams were tested under flexural loading, and real time failure detection was made by monitoring the TL signals generated by the integrated ITOF sensor. Tested beam samples emitted distinctive TL signals at the instance of failure. In addition, we report herein a new and promising approach to damage characterization using TL emission profiles. Analysis of TL emission profiles indicates that the ITOF sensor responds to crack propagation through the beam even when not in contact with the crack. Scanning electron microscopy analysis indicated that fracto-triboluminescence was responsible for the TL signals observed at the instance of beam failure.
    Highlights:
    • Developed a new approach to triboluminescence (TL)-based sensing with ZnS:Mn.
    • Damage-induced excitation of ZnS:Mn enabled real time damage detection in composite.
    • Based on sensor position, correlation exists between TL signal and failure stress.
    • Introduced a new approach to damage characterization with TL profile analysis

  10. Development of Uranium-Carrying Ball method for calibration of fuel element failure detecting systems

    International Nuclear Information System (INIS)

    Liu Yupu; Bao Wanping; Lu Cungang

    1988-01-01

    A Uranium-Carrying Ball method for determining the sensitivity and stability of fuel element failure detecting systems has been developed. A special facility transports the ball by the flow of the cooling water, so that a failure signal can be simulated. Five different types of Uranium-Carrying Ball have been developed. Types I to IV provide a failure signal in terms of uranium quantity or exposed uranium area. Type V can be used to simulate a micro-flaw and to examine the detectability of various detection methods for this kind of defect, which is difficult for a delayed neutron detector to detect. The results of long-term irradiation and washing tests show that the working life of the balls is satisfactory. Using the experimental facility with the balls, detailed studies of the capability of various fuel failure detecting systems have been conducted successfully. The operation is easy and safe, the accuracy of this method is higher than that of other methods, and the nuclear fuel consumption as well as the radioactive contamination is low. At present, research on the failure mechanism is being conducted by means of this method

  11. Impact of Material and Architecture Model Parameters on the Failure of Woven Ceramic Matrix Composites (CMCs) via the Multiscale Generalized Method of Cells

    Science.gov (United States)

    Liu, Kuang C.; Arnold, Steven M.

    2011-01-01

    It is well known that failure of a material is a locally driven event. In the case of ceramic matrix composites (CMCs), significant variations in the microstructure of the composite exist, and their significance for both deformation and life response needs to be assessed. Examples of these variations include changes in the fiber tow shape, tow shifting/nesting and voids within and between tows. In the present work, the effects of many of these architectural parameters and of material scatter on woven ceramic composite properties at the macroscale (woven RUC) will be studied to assess their sensitivity. The recently developed Multiscale Generalized Method of Cells methodology is used to determine the overall deformation response, proportional elastic limit (first matrix cracking), and failure under tensile loading conditions. The macroscale responses investigated illustrate the effect of architectural and material parameters on a single RUC representing a five harness satin weave fabric. Results show that the most critical architectural parameter is weave void shape and content, with the other parameters being less severe in their effects. Variation of the matrix material properties was also studied to illustrate the influence of material variability on the overall features of the composite stress-strain response.

  12. Performance evaluation of canny edge detection on a tiled multicore architecture

    Science.gov (United States)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed, loop-level parallelism implemented with OpenMP.
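
The domain-decomposition strategy can be illustrated with a neighbourhood filter split into row strips, each carrying a one-pixel halo so that the tile results stitch together exactly. A 3x3 mean filter stands in here for the Canny stages; the Tile64/OpenMP specifics are not modelled.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def box_blur(img):
    """3x3 mean filter (edge-replicated borders), standing in for one
    Canny stage such as Gaussian smoothing."""
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def blur_tiled(img, n_tiles=4):
    """Domain decomposition: split rows into strips with a 1-pixel halo,
    filter each strip in parallel, and stitch the interiors back together."""
    h = img.shape[0]
    bounds = np.linspace(0, h, n_tiles + 1).astype(int)

    def work(i):
        lo, hi = bounds[i], bounds[i + 1]
        strip = img[max(lo - 1, 0):min(hi + 1, h)]   # strip plus halo rows
        out = box_blur(strip)
        top = 1 if lo > 0 else 0
        return out[top:top + (hi - lo)]              # keep only the interior

    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(work, range(n_tiles)))
    return np.vstack(parts)
```

With the halo in place, the tiled result is bit-identical to the single-pass filter, which is what makes the decomposition safe to parallelize.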

  13. An Architecture for Automated Fire Detection Early Warning System Based on Geoprocessing Service Composition

    Science.gov (United States)

    Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.

    2013-09-01

    Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Due to the distributed nature of data and processing resources in disaster management, utilizing a Service Oriented Architecture (SOA) to take advantage of workflows of services provides an efficient, flexible and reliable implementation to encounter different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in a Service Oriented Architecture (SOA) platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow for acquiring and processing remotely sensed data, detecting fire and sending notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are presented using web-based processing of remote sensing imagery utilizing MODIS data. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and to chain other web services as an engine of composition. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client with open-source software was developed to manage data, metadata, processes, and authorities. Investigation of the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.

  15. Analysis of arrhythmic events is useful to detect lead failure earlier in patients followed by remote monitoring.

    Science.gov (United States)

    Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi

    2018-03-01

    Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM has allowed the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure was often identified only by arrhythmic events, but not impedance abnormalities. To compare the usefulness of arrhythmic events with conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center in Okayama University Hospital. All transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients have been followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, which were not detected by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic event 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none has experienced inappropriate therapy. RM can detect lead failure earlier, before clinical adverse events. However, CIEDs often diagnose lead failure as just arrhythmic events without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.
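
A concrete example of the kind of "arrhythmic event" analysis the study recommends: lead fractures typically announce themselves as bursts of nonphysiologically short sensed V-V intervals long before impedance drifts out of range. A counting sketch follows; the 140 ms limit and burst count are illustrative assumptions, not any device manufacturer's algorithm.

```python
def lead_noise_alert(vv_intervals_ms, limit_ms=140, count_thresh=10):
    """Flag a possible lead failure when the number of nonphysiologically
    short sensed V-V intervals exceeds a burst threshold (values assumed)."""
    short = sum(1 for iv in vv_intervals_ms if iv < limit_ms)
    return short >= count_thresh
```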

  16. Development of failure detection system for gas-cooled reactor

    International Nuclear Information System (INIS)

    Feirreira, M.P.

    1990-01-01

    This work presents several kinds of Failure Detection Systems for Fuel Elements, stressing their functional principles and major applications. A comparative study indicates that the method of electrostatic precipitation of the fission gases Kr and Xe is the most efficient for fuel failure detection in gas-cooled reactors. A detailed study of the physical phenomena involved in electrostatic precipitation led to the derivation of an equation for the measured counting rate. The emission of fission products from the fuel and the ion recombination inside the chamber are evaluated. A computer program, developed to simulate the complete operation of the system, relates the counting rate to the concentration of Kr and Xe isotopes. The project of a mock-up is then presented. Finally, the program calculations are compared to experimental data, available from the literature, yielding a close agreement. (author)

  17. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.

  18. Software architecture analysis tool : software architecture metrics collection

    NARCIS (Netherlands)

    Muskens, J.; Chaudron, M.R.V.; Westgeest, R.

    2002-01-01

    The Software Engineering discipline lacks the ability to evaluate software architectures. Here we describe a tool for software architecture analysis that is based on metrics. Metrics can be used to detect possible problems and bottlenecks in software architectures. Even though metrics do not give a

  19. Sensor failure detection in dynamical systems by Kalman filtering methodology

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1991-03-01

    Design of a sensor failure detection system by Kalman filtering methodology is described. The method models the process system in state-space form, the information on each state being provided by relevant sensors present in the process system. Since the measured states are usually subject to noise, optimal estimation of the states is an essential requirement. To this end the detection system comprises Kalman estimation filters, the number of which is equal to the number of states concerned. The estimated state of a particular signal in each filter is compared with the corresponding measured signal, and a difference beyond a predetermined bound is identified as a failure, the sensor being identified/isolated as faulty. (author). 19 refs.; 8 figs.; 1 tab
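The per-sensor scheme described above (compare each estimated state with its measured signal and flag differences beyond a predetermined bound) can be sketched with a scalar filter. The model, noise levels, fault size and the 5-sigma bound below are illustrative assumptions, not the paper's values.

```python
# Sketch (not the paper's model): a scalar Kalman filter whose innovation
# (measurement minus prediction) is tested against a bound; a sensed value
# drifting outside the bound flags the sensor as faulty.
import numpy as np

rng = np.random.default_rng(0)

q, r = 1e-4, 0.25            # process / measurement noise variances (assumed)
x_true, fault_bias = 10.0, 5.0
fault_at = 60                # sample index where the sensor fails

x_est, p_est = 10.0, 1.0     # filter estimate and covariance
detect_time = None
for k in range(120):
    z = x_true + rng.normal(0.0, np.sqrt(r))
    if k >= fault_at:
        z += fault_bias      # simulated sensor bias failure
    p_pred = p_est + q       # predict (constant-level process model)
    nu = z - x_est           # innovation: measurement minus prediction
    s = p_pred + r           # innovation variance
    if abs(nu) > 5.0 * np.sqrt(s):
        if detect_time is None:
            detect_time = k  # innovation outside the 5-sigma bound: flag sensor
    else:
        gain = p_pred / s    # update only with measurements that pass the test
        x_est += gain * nu
        p_est = (1.0 - gain) * p_pred

print("sensor failure flagged at sample:", detect_time)
```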

  20. A New FPGA Architecture of FAST and BRIEF Algorithm for On-Board Corner Detection and Matching.

    Science.gov (United States)

    Huang, Jingjin; Zhou, Guoqing; Zhou, Xiang; Zhang, Rongting

    2018-03-28

    Although some researchers have proposed the Field Programmable Gate Array (FPGA) architectures of Feature From Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithm, there is no consideration of image data storage in these traditional architectures, so no image data can be reused by the follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is firstly designed for reading the sub-image, and a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, which are located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The Modelsim simulation results showed that: (i) the proposed architecture is effective for sub-image reading from DDR3 at a minimum cost; (ii) the FPGA implementation is correct and efficient for corner detection and matching; for example, the average matching rates of natural areas and artificial areas are approximately 67% and 83%, respectively, which are close to the PC results, and the processing speed by FPGA is approximately 31 and 2.5 times faster than PC processing and GPU processing, respectively.

  1. Bearing failure detection of micro wind turbine via power spectral density analysis for stator current signals spectrum

    Science.gov (United States)

    Mahmood, Faleh H.; Kadhim, Hussein T.; Resen, Ali K.; Shaban, Auday H.

    2018-05-01

    Failures such as air-gap irregularity, rubbing, and scraping between the stator and rotor of a generator arise unavoidably and may cause extremely serious consequences for a wind turbine. Therefore, more attention should be paid to detecting and identifying bearing failures in wind turbines to improve operational reliability. The current paper uses the power spectral density analysis method to detect internal race and external race bearing failures in a micro wind turbine by estimating the stator current signal of the generator. The results show that the method is well suited and effective for bearing failure detection.
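A minimal sketch of the approach on synthetic data: the 50 Hz supply line, the 87 Hz defect frequency and all amplitudes below are invented for illustration; `scipy.signal.welch` estimates the power spectral density in which the small bearing-fault line stands out against the supply component.

```python
# Synthetic stator-current sketch: supply line at 50 Hz plus a small
# hypothetical bearing-defect line at 87 Hz; Welch's method estimates the
# power spectral density (PSD) in which the defect line is visible.
import numpy as np
from scipy.signal import welch

fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)

current = (np.sin(2 * np.pi * 50.0 * t)            # supply component
           + 0.05 * np.sin(2 * np.pi * 87.0 * t)   # assumed defect frequency
           + 0.01 * rng.normal(size=t.size))       # measurement noise

f, pxx = welch(current, fs=fs, nperseg=2048)

band = (f > 60.0) & (f < 200.0)                    # search away from 50 Hz
f_peak = f[band][np.argmax(pxx[band])]
print(f"strongest off-supply spectral line: {f_peak:.1f} Hz")
```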

  2. Clad failure detection in G 3 - operational feedback

    International Nuclear Information System (INIS)

    Plisson, J.

    1964-01-01

    After briefly reviewing the role and the principles of clad failure detection, the author describes the working conditions and the conclusions reached after 4 years operation of this installation on the reactor G 3. He mentions also the modifications made to the original installation as well as the tests carried out and the experiments under way. (author) [fr

  3. Development and testing of an algorithm to detect implantable cardioverter-defibrillator lead failure.

    Science.gov (United States)

    Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A

    2006-02-01

    Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats fell below a fixed threshold, the detected episode was classified as lead-failure noise; stored episodes were used to test the ability of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than that for the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity for detecting lead failure noise compared with ventricular tachycardia or fibrillation.

  4. A new method for detecting pressure tube failures in Indian PHWRs

    International Nuclear Information System (INIS)

    Sharma, V.K.; Gupta, V.K.

    2000-01-01

    For the annulus gas system (AGS) of the standardised Indian pressurised heavy water reactor, an elaborate pressure tube (PT) crack monitoring and detection system is envisaged to ensure safety through leak-before-break. The parameters that are monitored relate to the detection of D2O moisture leaking in from the primary heat transport (PHT) system through a cracked PT. Since a slow build-up of moisture in the AGS may also occur for reasons other than PT failure, it is desirable that a diverse measurement technique should be available. This paper suggests such a technique, based on the observation that a small reference concentration of fission gases is normally present in the annulus gas. This concentration would change sharply upon PT failure, when the heavy water from the leaking PHT system releases the dissolved fission gas content into the annulus. This paper presents a theoretical study of the parameters that influence the build-up of fission product noble gases in the AGS and shows that leakage rates as low as 10 g h⁻¹ from a PT crack can be detected in a few tens of minutes by this method. This is expected to substantially increase the available time between the leak detection and the PT failure, thus serving as an important tool in meeting the leak-before-break criterion of a critical component in PHWRs. (orig.)

  5. Failure detection by adaptive lattice modelling using Kalman filtering methodology : application to NPP

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1991-03-01

    Detection of failure in the operational status of a NPP is described. The method uses a lattice form of signal modelling established by means of Kalman filtering methodology. In this approach each lattice parameter is considered to be a state and the minimum variance estimate of the states is performed adaptively by optimal parameter estimation together with fast convergence and favourable statistical properties. In particular, the state covariance is also the covariance of the error committed by that estimate of the state value, and the Mahalanobis distance formed for pattern comparison follows a χ² distribution for normally distributed signals. The failure detection is performed after a decision making process by probabilistic assessments based on the statistical information provided. The failure detection system is implemented in the multi-channel signal environment of Borssele NPP and its favourable features are demonstrated. (author). 29 refs.; 7 figs
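The pattern-comparison step can be illustrated in a few lines: for Gaussian signatures the squared Mahalanobis distance follows a chi-squared distribution, so a quantile of that distribution gives a probabilistic failure threshold. The reference covariance and the injected 10-sigma fault shift below are invented for illustration.

```python
# Chi-squared thresholding of the squared Mahalanobis distance; covariance
# and fault shift are illustrative, not the paper's lattice parameters.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
d = 3
mean = np.zeros(d)
cov = np.diag([1.0, 0.5, 2.0])          # "healthy" signature covariance
cov_inv = np.linalg.inv(cov)

threshold = chi2.ppf(0.99, df=d)        # 99% acceptance region for d = 3

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of x from the healthy reference."""
    delta = x - mean
    return float(delta @ cov_inv @ delta)

healthy = rng.multivariate_normal(mean, cov)     # should usually pass
faulty = mean + 10.0 * np.sqrt(np.diag(cov))     # shifted ~10 sigma per axis

print("healthy flagged:", mahalanobis_sq(healthy) > threshold)
print("faulty flagged:", mahalanobis_sq(faulty) > threshold)
```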

  6. LIDeA: A Distributed Lightweight Intrusion Detection Architecture for Sensor Networks

    DEFF Research Database (Denmark)

    Giannetsos, Athanasios; Krontiris, Ioannis; Dimitriou, Tassos

    2008-01-01

    Wireless sensor networks are vulnerable to adversaries as they are frequently deployed in open and unattended environments. Preventive mechanisms can be applied to protect them from an assortment of attacks. However, more sophisticated methods, like intrusion detection systems, are needed to achieve a more autonomic and complete defense mechanism, even against attacks that have not been anticipated in advance. In this paper, we present a lightweight intrusion detection system, called LIDeA, designed for wireless sensor networks. LIDeA is based on a distributed architecture, in which nodes...

  7. System to detect and protect a failure during start-up of pumping-up; Yosui shidochu no jikokenshutsu hogo hoshiki

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, H. [Kansai Electric Power Co. Inc., Osaka (Japan)

    1999-12-10

    A method was developed to detect and protect against a failure in a generator-motor operating at a rotation speed other than the rated speed, such as during start-up of pumping-up operation. Such a failure may not be detected from the ordinary frequency characteristics because the frequency of the fault current due to a short circuit or earth fault is below the rated frequency. The idea conceived was to detect the flow of low-frequency fault current on the armature side by detecting the change it induces in the field current. To verify the idea, simulation circuits of the parent and child generator-motors were modeled and tested. The following results were obtained from the verification of a three-phase short-circuit failure: when the field current is constant, the fault current is nearly constant regardless of the operating frequency; the fault current flows into the field circuit; because the trajectory of the low-frequency overcurrent in the main circuit during operation in a low-frequency zone is similar to that in 60-Hz operation, the failure may be detected from the variation in the field current; and the protective action after failure detection is to remove the excitation and attenuate the fault current. (NEDO)

  8. A New FPGA Architecture of FAST and BRIEF Algorithm for On-Board Corner Detection and Matching

    Directory of Open Access Journals (Sweden)

    Jingjin Huang

    2018-03-01

    Full Text Available Although some researchers have proposed the Field Programmable Gate Array (FPGA) architectures of Feature From Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithm, there is no consideration of image data storage in these traditional architectures, so no image data can be reused by the follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is firstly designed for reading the sub-image, and a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, which are located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The Modelsim simulation results showed that: (i) the proposed architecture is effective for sub-image reading from DDR3 at a minimum cost; (ii) the FPGA implementation is correct and efficient for corner detection and matching; for example, the average matching rates of natural areas and artificial areas are approximately 67% and 83%, respectively, which are close to the PC results, and the processing speed by FPGA is approximately 31 and 2.5 times faster than PC processing and GPU processing, respectively.

  9. Dielectric Spectroscopic Detection of Early Failures in 3-D Integrated Circuits.

    Science.gov (United States)

    Obeng, Yaw; Okoro, C A; Ahn, Jung-Joon; You, Lin; Kopanski, Joseph J

    The commercial introduction of three-dimensional integrated circuits (3D-ICs) has been hindered by reliability challenges, such as stress-related failures, resistivity changes, and unexplained early failures. In this paper, we discuss a new RF-based metrology, based on dielectric spectroscopy, for detecting and characterizing electrically active defects in fully integrated 3D devices. These defects are traceable to the chemistry of the isolation dielectrics used in through silicon via (TSV) construction. We show that these defects may be responsible for some of the unexplained early reliability failures observed in TSV-enabled 3D devices.

  10. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    Science.gov (United States)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined, as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.

  11. A formal language to describe a wide class of failure detection and signal validation procedures

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-01-01

    In the present article we take the first step towards the implementation of a user-friendly, object-oriented system devoted to failure detection and signal validation purposes. After overviewing different signal modelling, residual making and hypothesis testing procedures, a mathematical tool is suggested to describe a general failure detection problem. Three different levels of abstraction are distinguished: direct examination, preliminary decision support mechanism and indirect examination. Possible scenarios are introduced depending both on the objective properties of the investigated signal and the particular requirements prescribed by the expert himself. Finally it is shown how to build up systematically a complete, general failure detection procedure. (author).

  12. Damage and failure detection of composites using optical fiber vibration sensor

    International Nuclear Information System (INIS)

    Yang, Y. C.; Han, K. S.

    2001-01-01

    An intensity-based optical fiber vibration sensor is applied to detect and evaluate damage and fiber failure of composites. The optical fiber vibration sensor is constructed by placing two cleaved fiber ends, one of which is cantilevered, in a hollow glass tube. The movement of the cantilevered section lags behind the rest of the sensor in response to an applied vibration, and the amount of light coupled between the two fibers is thereby modulated. Vibration characteristics of the optical fiber vibration sensor are investigated. The surface-mounted optical fiber vibration sensor is used in tensile and indentation tests. Experimental results show that the optical fiber sensor can detect damage and fiber failure of composites correctly

  13. Computer architecture fundamentals and principles of computer design

    CERN Document Server

    Dumas II, Joseph D

    2005-01-01

    Introduction to Computer Architecture; What is Computer Architecture?; Architecture vs. Implementation; Brief History of Computer Systems; The First Generation; The Second Generation; The Third Generation; The Fourth Generation; Modern Computers - The Fifth Generation; Types of Computer Systems; Single Processor Systems; Parallel Processing Systems; Special Architectures; Quality of Computer Systems; Generality and Applicability; Ease of Use; Expandability; Compatibility; Reliability; Success and Failure of Computer Architectures and Implementations; Quality and the Perception of Quality; Cost Issues; Architectural Openness, Market Timi

  14. Survivable architectures for time and wavelength division multiplexed passive optical networks

    Science.gov (United States)

    Wong, Elaine

    2014-08-01

    The increased network reach and customer base of next-generation time and wavelength division multiplexed PONs (TWDM-PONs) have necessitated rapid fault detection and subsequent restoration of services to their users. However, direct application of existing solutions for conventional PONs to TWDM-PONs is unsuitable as these schemes rely on the loss of signal (LOS) of upstream transmissions to trigger protection switching. As TWDM-PONs are required to potentially use sleep/doze mode optical network units (ONUs), the loss of upstream transmission from a sleeping or dozing ONU could erroneously trigger protection switching. Further, TWDM-PONs require their monitoring modules for fiber/device fault detection to be more sensitive than those typically deployed in conventional PONs. To address the above issues, three survivable architectures that are compliant with TWDM-PON specifications are presented in this work. These architectures combine rapid detection and protection switching against multipoint failure, and most importantly do not rely on upstream transmissions for LOS activation. Survivability analyses as well as evaluations of the additional costs incurred to achieve survivability are performed and compared to the unprotected TWDM-PON. Network parameters that impact the maximum achievable network reach, maximum split ratio, connection availability, fault impact, and the incremental reliability costs for each proposed survivable architecture are highlighted.

  15. Development of a system for automatic detection of pellet failures

    International Nuclear Information System (INIS)

    Lavagnino, C.E.

    1996-01-01

    Nowadays, the failure controls in UO2 pellets for the Atucha and Embalse reactors are performed visually. In this work, the first stage of the development of a system that allows an automatic approach to the task is presented. For this purpose, the problem has been subdivided into three jobs: choosing the illumination environment, finding the algorithm that detects failures with user-defined tolerance, and engineering the mechanical system that supports the desired manipulations of the pellets. In this paper, the former two are developed. a) Finding the illumination conditions that allow subtracting the failure from the normal element surface, knowing, in the first place, the cylindrical characteristics of the pellet and, as a consequence, the differences in the light reflection direction and, in the second place, the texture differences in relation to the rectification type of the pellet. b) Writing a fast and simple algorithm that allows the identification of the failure following the production specifications. Examples of the developed algorithm are shown. (author). 4 refs

  16. Failure detection system risk reduction assessment

    Science.gov (United States)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
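The patent abstract gives no formulas, so the following is only a hedged arithmetic reading of it: treat the probability of the failure mode reaching its limit and the probability of timely mitigation as independent, and quantify the risk reduction as their product.

```python
# Hedged arithmetic sketch (an assumed formulation, not the patent's method):
# if the failure mode reaches its limit with probability p_fail and is
# mitigated in time with probability p_mit (assumed independent), the
# residual risk is p_fail * (1 - p_mit) and the reduction is p_fail * p_mit.
def risk_reduction(p_fail, p_mit):
    residual = p_fail * (1.0 - p_mit)
    return p_fail - residual    # equals p_fail * p_mit

# A failure mode with a 2% chance of reaching its limit, mitigated 90% of
# the time by early detection, has its risk cut by 1.8 percentage points:
print(risk_reduction(0.02, 0.90))
```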

  17. Filter Design for Failure Detection and Isolation in the Presence of Modeling Errors and Disturbances

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1996-01-01

    The design problem of filters for robust Failure Detection and Isolation (FDI) is addressed in this paper. The failure detection problem will be considered with respect to both modeling errors and disturbances. Both an approach based on failure detection observers as well as an approach based...

  18. Fault Detection and Location of IGBT Short-Circuit Failure in Modular Multilevel Converters

    Directory of Open Access Journals (Sweden)

    Bin Jiang

    2018-06-01

    Full Text Available A single fault detection and location method for the Modular Multilevel Converter (MMC) is of great significance, as large numbers of sub-modules (SMs) in an MMC are connected in series. In this paper, a novel fault detection and location method is proposed for the MMC in terms of Insulated Gate Bipolar Transistor (IGBT) short-circuit failure in an SM. The characteristics of IGBT short-circuit failures are analyzed, based on which a Differential Comparison Low-Voltage Detection Method (DCLVDM) is proposed to detect the short-circuit fault. Lastly, the faulty IGBT is located based on the capacitor voltage of the faulty SM by Continuous Wavelet Transform (CWT). Simulations have been done in the simulation software PSCAD/EMTDC and the results confirm the validity and reliability of the proposed method.

  19. Software Architectures – Present and Visions

    Directory of Open Access Journals (Sweden)

    Catalin STRIMBEI

    2015-01-01

    Full Text Available Nowadays, software systems architectures are increasingly important because they can determine the success of the entire system. In this article we intend to rigorously analyze the most common types of system architectures and present a personal opinion about the specifics of university architectures. After analyzing monolithic architectures, SOA architectures and those based on microservices, we present specific issues and specific criteria for university software systems. Each type of architecture is reviewed and analyzed according to specific academic challenges. During the analysis, we took into account the factors that determine the success of each architecture and also the common causes of failure. At the end of the article, we objectively decide which architecture is best suited to be implemented in the university area.

  20. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling, to the data collection and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements which captures both functional and nonfunctional requirements and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information such as results of developers' testing, historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified
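The propagation step, failure-mode probabilities combined up a fault tree to the system level, reduces to standard gate arithmetic; the sketch below is generic fault-tree algebra for independent events, not the FASRE model itself (whose quantification is Bayesian).

```python
# Generic fault-tree gate arithmetic for independent failure probabilities.
def or_gate(probs):
    """System fails if ANY independent input fails."""
    p_ok = 1.0
    for p in probs:
        p_ok *= 1.0 - p
    return 1.0 - p_ok

def and_gate(probs):
    """System fails only if ALL independent inputs fail."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Two function-level failure modes feeding a system-level OR gate:
p_system = or_gate([0.10, 0.20])
print(p_system)   # about 0.28
```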

  1. Detecting failure events in buildings: a numerical and experimental analysis

    OpenAIRE

    Heckman, V. M.; Kohler, M. D.; Heaton, T. H.

    2010-01-01

    A numerical method is used to investigate an approach for detecting the brittle fracture of welds associated with beam-column connections in instrumented buildings in real time through the use of time-reversed Green’s functions and wave propagation reciprocity. The approach makes use of a prerecorded catalog of Green’s functions for an instrumented building to detect failure events in the building during a later seismic event by screening continuous data for the presence of wavef...

  2. Classifier models and architectures for EEG-based neonatal seizure detection

    International Nuclear Information System (INIS)

    Greene, B R; Marnane, W P; Lightbody, G; Reilly, R B; Boylan, G B

    2008-01-01

    Neonatal seizures are the most common neurological emergency in the neonatal period and are associated with a poor long-term outcome. Early detection and treatment may improve prognosis. This paper aims to develop an optimal set of parameters and a comprehensive scheme for patient-independent multi-channel EEG-based neonatal seizure detection. We employed a dataset containing 411 neonatal seizures. The dataset consists of multi-channel EEG recordings with a mean duration of 14.8 h from 17 neonatal patients. Early-integration and late-integration classifier architectures were considered for the combination of information across EEG channels. Three classifier models based on linear discriminants, quadratic discriminants and regularized discriminants were employed. Furthermore, the effect of electrode montage was considered. The best performing seizure detection system was found to be an early integration configuration employing a regularized discriminant classifier model. A referential EEG montage was found to outperform the more standard bipolar electrode montage for automated neonatal seizure detection. A cross-fold validation estimate of the classifier performance for the best performing system yielded 81.03% of seizures correctly detected with a false detection rate of 3.82%. With post-processing, the false detection rate was reduced to 1.30% with 59.49% of seizures correctly detected. These results represent a comprehensive illustration that robust reliable patient-independent neonatal seizure detection is possible using multi-channel EEG
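The early- versus late-integration distinction can be sketched on synthetic two-channel data: early integration concatenates channel features before a single classifier, while late integration scores each channel and fuses the scores. The nearest-centroid scorer and all data below are illustrative stand-ins, not the paper's regularized discriminant classifiers or EEG features.

```python
# Early vs. late integration of multi-channel features (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
n, channels, d = 200, 2, 4      # epochs per class, channels, features/channel

x_seiz = rng.normal(1.5, 1.0, size=(n, channels, d))   # "seizure" epochs
x_bkg = rng.normal(0.0, 1.0, size=(n, channels, d))    # "background" epochs
x = np.concatenate([x_seiz, x_bkg])
y = np.array([1] * n + [0] * n)

def centroid_scores(feats):
    """Positive score = closer to the seizure centroid than the background one."""
    c1 = feats[y == 1].mean(axis=0)
    c0 = feats[y == 0].mean(axis=0)
    return np.linalg.norm(feats - c0, axis=-1) - np.linalg.norm(feats - c1, axis=-1)

# Early integration: concatenate channels into one feature vector per epoch.
early_scores = centroid_scores(x.reshape(len(x), -1))
# Late integration: score each channel separately, then fuse by averaging.
late_scores = centroid_scores(x).mean(axis=-1)

for name, s in [("early", early_scores), ("late", late_scores)]:
    print(name, "accuracy:", ((s > 0).astype(int) == y).mean())
```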

  3. System of the sensor failure detection and isolation system using Kalman filter

    International Nuclear Information System (INIS)

    Assumpcao Filho, E.O.; Nakata, H.

    1991-01-01

    The present work summarizes the development of a sensor failure detection and isolation system (FDIS) suitable to be implemented in nuclear plant control systems. The methodology is based on the extended Kalman filter applied to a simplified model of a PWR pressurizer. Simulation of the most representative failure types showed the great reliability and fast response capability of the FDIS developed, allowing sizable savings in computational and economic expenditures. (author)

  4. Filtering technique for detection and identification of measurement failures in nuclear power plants

    International Nuclear Information System (INIS)

    Racz, A.

    1989-11-01

    The basic requirement of the safe operation of nuclear power plants (NPP) is to have reliable information on all quantities that can be measured, monitored or controlled during the operation. Kalman filtering techniques have been applied for prompt detection and identification of failures in the measurement systems used in NPPs. Mathematical basis of Kalman filtering and various models applied to failure detection are overviewed. The applicability of some models are evaluated by real results of NPP measurements. A sample system for an NPP is suggested, based on several numerical tests. (R.P.) 23 refs.; 40 figs.; 2 tabs

  5. Analytical Study of different types of network failure detection and possible remedies

    Science.gov (United States)

    Saxena, Shikha; Chandra, Somnath

    2012-07-01

    Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control-plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs start and end at the same monitoring location, while MPs start and end at distinct locations. They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed to localize an SRLG failure in an arbitrary graph. Procedures for protection and restoration after an SRLG failure using a backup re-provisioning algorithm are also discussed.
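The localization idea can be sketched in a few lines: an SRLG is uniquely localizable exactly when the set of monitoring paths/cycles it breaks (its failure signature) differs from every other SRLG's. A toy illustration with hypothetical links and monitors (not the paper's construction):

```python
def build_signatures(srlgs, monitors):
    """srlgs: {name: set_of_links}; monitors: {name: set_of_links_traversed}.
    A monitoring path/cycle fails iff it traverses at least one failed link."""
    return {s: frozenset(m for m, path in monitors.items() if path & links)
            for s, links in srlgs.items()}

def localize(failed_monitors, signatures):
    """Return the unique SRLG matching the observed failure pattern, else None."""
    hits = [s for s, sig in signatures.items() if sig == failed_monitors]
    return hits[0] if len(hits) == 1 else None

# hypothetical 4-link network, two monitoring paths and one monitoring cycle
srlgs = {"S1": {"a", "b"}, "S2": {"c"}, "S3": {"b", "d"}}
monitors = {"MP1": {"a", "c"}, "MP2": {"b"}, "MC1": {"c", "d"}}
sigs = build_signatures(srlgs, monitors)
# S1 breaks {MP1, MP2}; S2 breaks {MP1, MC1}; S3 breaks {MP2, MC1}: all unique
```

The necessary-and-sufficient conditions the paper derives amount to guaranteeing that every SRLG signature built this way is distinct and non-empty.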

  6. Skin-Spar Failure Detection of a Composite Winglet Using FBG Sensors

    Directory of Open Access Journals (Sweden)

    Ciminello Monica

    2017-09-01

    Full Text Available Winglets are introduced into modern aircraft to reduce wing aerodynamic drag and consequently optimize fuel burn per mission. In order to be aerodynamically effective, these devices are installed at the wing tip section; this wing region is generally characterized by significant oscillations induced by flight maneuvers and gusts. The present work is focused on the validation of a continuous monitoring system, based on fiber Bragg grating sensors and frequency domain analysis, to detect skin-spar bonding failure in a composite winglet for in-service purposes. Optical fibers are used as deformation sensors. Short Time Fast Fourier Transform (STFT) analysis is applied to detect the occurrence of structural response deviations on the basis of strain data. The obtained results showed high accuracy in estimating static and dynamic deformations and great potential in detecting structural failure occurrences.
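As a rough sketch of the frequency-domain idea (not the authors' implementation), a windowed FFT applied frame by frame to a strain signal can reveal the drop in a structural mode frequency that a bonding failure would cause. The 20 Hz healthy mode and 15 Hz debonded mode below are hypothetical:

```python
import numpy as np

def dominant_freq_per_frame(signal, fs, frame_len=256, hop=128):
    """Track the dominant spectral peak frame by frame (a crude STFT)."""
    win = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1 / fs)
    peaks = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + frame_len] * win))
        peaks.append(float(freqs[np.argmax(spec)]))
    return peaks

fs = 1000
t1 = np.arange(0, 1, 1 / fs)
t2 = np.arange(1, 2, 1 / fs)
strain = np.concatenate([np.sin(2 * np.pi * 20 * t1),   # healthy: 20 Hz mode
                         np.sin(2 * np.pi * 15 * t2)])  # debonded: mode drops
f_track = dominant_freq_per_frame(strain, fs)
# a persistent shift in f_track flags a structural response deviation
```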

  7. Real-time sensor failure detection by dynamic modelling of a PWR plant

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.

    1992-06-01

    Signal validation and sensor failure detection is an important problem in real-time nuclear power plant (NPP) surveillance. Although conventional sensor redundancy is, in a way, a solution, identification of the faulty sensor is necessary for further preventive actions to be taken. A comprehensive solution is to verify each sensor reading against its model-based estimated counterpart in real time. Such a realization is accomplished by means of dynamic state estimation using the Kalman filter modelling technique. The method is investigated using real-time data from the steam generator of the Borssele nuclear power plant, and has proved satisfactory for real-time sensor failure detection as well as model validation. (author). 5 refs.; 6 figs.; 1 tab

  8. A scalable architecture for online anomaly detection of WLCG batch jobs

    Science.gov (United States)

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor network usage and learn from usage patterns. Configuration issues or misbehaving batch jobs that prevent smooth operation especially need to be detected as early as possible. At the GridKa data and computing centre we therefore operate a tool, BPNetMon, for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information alone is not sufficient to detect anomalies, for several reasons: the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale with respect to network communication or computational cost. We therefore propose a scalable architecture based on concepts of a super-peer network.

  9. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    Science.gov (United States)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  10. Architectural constraints in IEC 61508: Do they have the intended effect?

    International Nuclear Information System (INIS)

    Lundteigen, Mary Ann; Rausand, Marvin

    2009-01-01

    The standards IEC 61508 and IEC 61511 employ architectural constraints to ensure that quantitative assessments alone are not used to determine the hardware layout of safety instrumented systems (SIS). This article discusses the role of the architectural constraints, and particularly the safe failure fraction (SFF), as a design parameter used to determine the hardware fault tolerance (HFT) and the redundancy level of a SIS. The discussion is based on examples from the offshore oil and gas industry, but should be relevant for all applications of SIS. The article concludes that architectural constraints may be required to compensate for systematic failures, but that they should not be determined based on the SFF. The SFF is considered an unnecessary concept
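For reference, IEC 61508 defines the SFF as the fraction of the total failure rate that is either safe or dangerous-but-detected. A short sketch illustrates the article's concern: adding safe failures inflates the SFF without reducing the dangerous undetected rate (failure rates below are hypothetical, per hour):

```python
def safe_failure_fraction(lam_safe, lam_dd, lam_du):
    """SFF = (lambda_S + lambda_DD) / (lambda_S + lambda_DD + lambda_DU)."""
    return (lam_safe + lam_dd) / (lam_safe + lam_dd + lam_du)

base = safe_failure_fraction(lam_safe=2e-6, lam_dd=1e-6, lam_du=1e-6)    # 0.75
# deliberately adding safe failures raises the SFF, yet lambda_DU is unchanged
padded = safe_failure_fraction(lam_safe=8e-6, lam_dd=1e-6, lam_du=1e-6)  # 0.90
```

A higher SFF relaxes the required HFT in the standard's tables even though the system is no safer, which is precisely why the authors argue against it as a design driver.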

  11. Expanded envelope concepts for aircraft control-element failure detection and identification

    Science.gov (United States)

    Weiss, Jerold L.; Hsu, John Y.

    1988-01-01

    The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic-redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single flight condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models, over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI-F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.

  12. On-line detection of key radionuclides for fuel-rod failure in a pressurized water reactor.

    Science.gov (United States)

    Qin, Guoxiu; Chen, Xilin; Guo, Xiaoqing; Ni, Ning

    2016-08-01

    For early on-line detection of fuel rod failure, the key radionuclides useful in monitoring must leak easily from failing rods. The yield, half-life, and mass share of fission products that enter the primary coolant also need to be considered in on-line analyses. From all the nuclides that enter the primary coolant during fuel-rod failure, (135)Xe and (88)Kr were ultimately chosen as crucial for on-line monitoring of fuel-rod failure. A monitoring system for fuel-rod failure detection for a pressurized water reactor (PWR) based on a LaBr3(Ce) detector was assembled and tested. Samples of coolant from the PWR were measured using the system as well as an HPGe γ-ray spectrometer; a comparison showed the method was feasible. Finally, the γ-ray spectra of the primary coolant were measured under normal operations and during fuel-rod failure. The two peaks of (135)Xe (249.8 keV) and (88)Kr (2392.1 keV) were visible, confirming that the method is capable of monitoring fuel-rod failure on-line. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Can enterprise architectures reduce failure in development projects?

    NARCIS (Netherlands)

    Janssen, M.F.W.H.A.; Klievink, B.

    2012-01-01

    Purpose: Scant attention has been given to the role of enterprise architecture (EA) in relationship to risk management in information system development projects. Even less attention has been given to the inter-organizational setting. The aim of this paper is to better understand this relationship.

  14. An experimental vital signs detection radar using low-IF heterodyne architecture and single-sideband transmission

    DEFF Research Database (Denmark)

    Jensen, Brian Sveistrup; Johansen, Tom Keinicke; Yan, Lei

    2013-01-01

    In this paper an experimental X-band radar system, called DTU-VISDAM, developed for the detection and monitoring of human vital signs is described. The DTU-VISDAM radar exploits a low intermediate frequency (IF) heterodyne RF front-end architecture and single-sideband (SSB) transmission for easier...... and more reliable extraction of the vital signs. The hardware implementation of the proposed low-IF RF front-end architecture and associated IF circuitry is discussed. Furthermore, the signal processing and calibration steps necessary to extract the vital signs information measured on a human subject...

  15. Program for generating tests for the detection of failures in combinatorial logic systems

    International Nuclear Information System (INIS)

    Mansour, Mounir

    1972-01-01

    A method for generating test sequences for detecting failures in combinatorial logic systems is described. It relies on splitting these systems into NOR and NAND elements and propagating the failure state from the input to the output. Test sequence generation is achieved in two steps: a first step, called chaining, during which the propagation paths of an input state able to reveal failures are investigated; and a second step, called consistency, during which the global state of the circuit related to this input configuration is held at the state required for the propagation to take place. (author) [fr

  16. Brain architecture: a design for natural computation.

    Science.gov (United States)

    Kaiser, Marcus

    2007-12-15

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.

  17. Advanced Information Processing System (AIPS)-based fault tolerant avionics architecture for launch vehicles

    Science.gov (United States)

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.

    1990-01-01

    An avionics architecture for the advanced launch system (ALS) that uses validated hardware and software building blocks developed under the advanced information processing system program is presented. The AIPS for ALS architecture defined is preliminary, and reliability requirements can be met by the AIPS hardware and software building blocks that are built using the state-of-the-art technology available in the 1992-93 time frame. The level of detail in the architecture definition reflects the level of detail available in the ALS requirements. As the avionics requirements are refined, the architecture can also be refined and defined in greater detail with the help of analysis and simulation tools. A useful methodology is demonstrated for investigating the impact of the avionics suite to the recurring cost of the ALS. It is shown that allowing the vehicle to launch with selected detected failures can potentially reduce the recurring launch costs. A comparative analysis shows that validated fault-tolerant avionics built out of Class B parts can result in lower life-cycle-cost in comparison to simplex avionics built out of Class S parts or other redundant architectures.

  18. [Early detection, prevention and management of renal failure in liver transplantation].

    Science.gov (United States)

    Castells, Lluís; Baliellas, Carme; Bilbao, Itxarone; Cantarell, Carme; Cruzado, Josep Maria; Esforzado, Núria; García-Valdecasas, Juan Carlos; Lladó, Laura; Rimola, Antoni; Serón, Daniel; Oppenheimer, Federico

    2014-10-01

    Renal failure is a frequent complication in liver transplant recipients and is associated with increased morbidity and mortality. A variety of risk factors for the development of renal failure in the pre- and post-transplantation periods have been described, as well as at the time of surgery. To reduce the negative impact of renal failure in this population, an active approach is required for the identification of those patients with risk factors, the implementation of preventive strategies, and the early detection of progressive deterioration of renal function. Based on published evidence and on clinical experience, this document presents a series of recommendations on monitoring RF in LT recipients, as well as on the prevention and management of acute and chronic renal failure after LT and referral of these patients to the nephrologist. In addition, this document also provides an update of the various immunosuppressive regimens tested in this population for the prevention and control of post-transplantation deterioration of renal function. Copyright © 2013 Elsevier España, S.L.U. and AEEH y AEG. All rights reserved.

  19. Fault tolerant architecture for artificial olfactory system

    International Nuclear Information System (INIS)

    Lotfivand, Nasser; Hamidon, Mohd Nizar; Abdolzadeh, Vida

    2015-01-01

    In this paper, a novel architecture is offered to cover and mask faults that occur in the sensing unit of an artificial olfactory system. The proposed architecture is able to tolerate failures in the sensors of the array by masking the faults that occur. By extracting correct results from the sensor outputs, the proposed architecture can assure the quality of service of the data generated by the sensor array. The results of various evaluations and analyses show that the proposed architecture has acceptable performance in comparison with the classic form of the sensor array in gas identification. According to the results, achieving high odor discrimination with the suggested architecture is possible. (paper)

  20. Convolutional neural networks for event-related potential detection: impact of the architecture.

    Science.gov (United States)

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most signal and classification techniques for the detection of brain responses are based on linear algebra, pattern recognition techniques such as the convolutional neural network (CNN), a type of deep learning technique, have attracted interest as they are able to process the signal after limited pre-processing. In this study, we investigate the performance of CNNs in relation to their architecture and to how they are evaluated: a single system for each subject, or one system for all subjects. In particular, we address the change in performance observed between specializing a neural network to a single subject and training a neural network on a group of subjects, taking advantage of the larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can lead to an AUC above 0.9 by using an appropriate architecture with spatial filtering and shift-invariant layers.

  1. Method of detecting a fuel element failure

    International Nuclear Information System (INIS)

    Cohen, P.

    1975-01-01

    A method is described for detecting a fuel element failure in a liquid-sodium-cooled fast breeder reactor consisting of equilibrating a sample of the coolant with a molten salt consisting of a mixture of barium iodide and strontium iodide (or other iodides) whereby a large fraction of any radioactive iodine present in the liquid sodium coolant exchanges with the iodine present in the salt; separating the molten salt and sodium; if necessary, equilibrating the molten salt with nonradioactive sodium and separating the molten salt and sodium; and monitoring the molten salt for the presence of iodine, the presence of iodine indicating that the cladding of a fuel element has failed. (U.S.)

  2. Fuel failure detection and location in LMFBRs

    International Nuclear Information System (INIS)

    Jacobi, S.

    1982-06-01

    The Specialists' Meeting on 'Fuel Failure Detection and Location in LMFBRs' was held at the Kernforschungszentrum Karlsruhe, Federal Republic of Germany, on 11-14 May 1981. The meeting was sponsored by the International Atomic Energy Agency (IAEA) on the recommendation of the International Working Group on Fast Reactors (IWGFR). The purpose of the meeting was to review and discuss methods and experience in the detection and location of failed fuel elements and to recommend future development. The technical sessions were divided into five topical sessions as follows: 1. Reactor Instrumentation, 2. Experience Gained from LMFBRs, 3. In-pile Experiments, 4. Models and Codes, 5. Future Programs. During the meeting, papers were presented by the participants on behalf of their countries or organizations. Each presentation was followed by an open discussion of the subject covered by the presentation. After the formal sessions were completed, a final discussion session was held and general conclusions and recommendations were reached. Session summaries, general conclusions and recommendations, the agenda of the meeting and the list of participants are given. (orig./RW)

  3. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which require 3D models of high quality without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture, and in this way increase the amount of detail contained in the image. Experimental results of this study support this assumption, as they examine state-of-the-art feature detectors applied both to standard dynamic range and HDR images.

  4. A Novel Technique for Rotor Bar Failure Detection in Single-Cage Induction Motor Using FEM and MATLAB/SIMULINK

    Directory of Open Access Journals (Sweden)

    Seyed Abbas Taher

    2011-01-01

    Full Text Available In this article, a new fault detection technique is proposed for the squirrel cage induction motor (SCIM), based on detection of rotor bar failure. This type of fault detection is commonly carried out while the motor continues to work in a steady-state regime. Recently, several methods have been presented for rotor bar failure detection based on evaluation of the start-up transient current. The method proposed here is capable of fault detection immediately after bar breakage, where a three-phase SCIM is modelled by the finite element method (FEM) using Maxwell 2D software. Broken rotor bars are then modelled by the corresponding outer rotor impedance obtained by a genetic algorithm (GA), thereby presenting an analogue model extracted from FEM to be simulated in a flexible environment such as MATLAB/SIMULINK. To improve failure recognition, the stator current signal was analysed using the discrete wavelet transform (DWT).
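The abstract does not detail the wavelet analysis; as a minimal sketch of the idea, a one-level Haar DWT can be iterated to split a stator-current signal into frequency bands whose detail energies shift when a fault sideband appears. The 50 Hz supply component and 45 Hz broken-bar sideband below are hypothetical:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def detail_energies(x, levels=4):
    """Energy of the detail band at each decomposition level."""
    energies = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        energies.append(float(np.sum(d ** 2)))
    return energies

fs = 1000
t = np.arange(0, 1, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t)                  # supply component only
faulty = healthy + 0.3 * np.sin(2 * np.pi * 45 * t)   # hypothetical sideband
# comparing band energies of the two currents exposes the extra component
```

Practical implementations use smoother wavelets (e.g. Daubechies) and track the band containing the (1 ± 2s)f sidebands, but the band-energy comparison is the same.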

  5. The effects of fibre architecture on fatigue life-time of composite materials

    Energy Technology Data Exchange (ETDEWEB)

    Zangenberg Hansen, J.

    2013-09-15

    Wind turbine rotor blades are among the largest composite structures manufactured of fibre reinforced polymer. During the service life of a wind turbine rotor blade, it is subjected to cyclic loading that potentially can lead to material failure, also known as fatigue. With reference to glass fibre reinforced composites used for the main laminate of a wind turbine rotor blade, the problem addressed in the present work is the effect of the fibre and fabric architecture on the fatigue life-time under tension-tension loading. Fatigue of composite materials has been a central research topic for the last decades; however, a clear answer to what causes the material to degrade, has not been given yet. Even for the simplest kind of fibre reinforced composites, the axially loaded unidirectional material, the fatigue failure modes are complex, and require advanced experimental techniques and characterisation methodologies in order to be assessed. Furthermore, numerical evaluation and predictions of the fatigue damage evolution are decisive in order to make future improvements. The present work is focused around two central themes: fibre architecture and fatigue failure. The fibre architecture is characterised using real material samples and numerical simulations. Experimental fatigue tests identify, quantify, and analyse the cause of failure. Different configurations of the fibre architecture are investigated in order to determine and understand the tension-tension fatigue failure mechanisms. A numerical study is used to examine the onset of fatigue failure. Topics treated include: experimental fatigue investigations, scanning electron microscopy, numerical simulations, advanced measurements techniques (micro computed tomography and thermovision), design of test specimens and preforms, and advanced materials characterisation. The results of the present work show that the fibre radii distribution has limited effect on the fibre architecture. 
This raises the question of which

  6. Periodic Application of Concurrent Error Detection in Processor Array Architectures. Ph.D. Thesis

    Science.gov (United States)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  7. On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Sappok, Alex [Filter Sensing Technologies; Ragaller, Paul [Filter Sensing Technologies; Herman, Andrew [CTS Corporation; Bromberg, L. [Massachusetts Institute of Technology (MIT); Prikhodko, Vitaly Y [ORNL; Parks, II, James E [ORNL; Storey, John Morse [ORNL

    2017-01-01

    The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can avert particulate filter failures that would allow emissions to exceed permissible limits, and can extend component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures, including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability of a direct and continuous particulate filter monitor to both prevent and detect potential failure conditions in the field.

  8. Post-mortem cardiac diffusion tensor imaging: detection of myocardial infarction and remodeling of myofiber architecture.

    Science.gov (United States)

    Winklhofer, Sebastian; Stoeck, Christian T; Berger, Nicole; Thali, Michael; Manka, Robert; Kozerke, Sebastian; Alkadhi, Hatem; Stolzmann, Paul

    2014-11-01

    To investigate the accuracy of post-mortem diffusion tensor imaging (DTI) for the detection of myocardial infarction (MI) and to demonstrate the feasibility of helix angle (HA) calculation to study remodelling of myofibre architecture. Cardiac DTI was performed in 26 deceased subjects prior to autopsy for medicolegal reasons. Fractional anisotropy (FA) and mean diffusivity (MD) were determined. Accuracy was calculated on per-segment (AHA classification), per-territory, and per-patient bases, with pathology as the reference standard. HAs were calculated and compared between healthy segments and those with MI. Autopsy demonstrated MI in 61/440 segments (13.9 %) in 12/26 deceased subjects. Healthy myocardial segments had significantly higher FA than infarcted segments. Analysis of HA distribution demonstrated remodelling of myofibre architecture, with significant differences between healthy segments and segments with chronic MI. Post-mortem cardiac DTI enables differentiation between healthy and infarcted myocardial segments by means of FA and MD. HA assessment allows for the demonstration of remodelling of myofibre architecture following chronic MI. • DTI enables post-mortem detection of myocardial infarction with good accuracy. • A decrease in right-handed helical fibres indicates myofibre remodelling following chronic myocardial infarction. • DTI allows for ruling out myocardial infarction by means of FA. • Post-mortem DTI may represent a valuable screening tool in forensic investigations.
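For reference, FA and MD are standard scalar invariants computed from the eigenvalues of the diffusion tensor; a short sketch (eigenvalues hypothetical, in arbitrary units):

```python
import numpy as np

def fa_md(l1, l2, l3):
    """Fractional anisotropy and mean diffusivity from tensor eigenvalues."""
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()                                     # mean diffusivity
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return float(fa), float(md)

fa_iso, md_iso = fa_md(1.0, 1.0, 1.0)    # isotropic diffusion: FA = 0
fa_lin, _ = fa_md(1.0, 0.0, 0.0)         # purely linear diffusion: FA = 1
```

The helix angles studied in the paper come instead from the orientation of the primary eigenvector relative to the local ventricular wall, which requires the full tensor field.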

  9. Achieving Critical System Survivability Through Software Architectures

    National Research Council Canada - National Science Library

    Knight, John C; Strunk, Elisabeth A

    2006-01-01

    .... In a system with a survivability architecture, under adverse conditions such as system damage or software failures, some desirable function will be eliminated but critical services will be retained...

  10. Evaluation of a Kalman filter based power pressurizer instrument failure detection system implemented on a nuclear power plant training simulator

    International Nuclear Information System (INIS)

    Seegmiller, D.S.

    1984-01-01

    The usefulness of a nuclear power plant training simulator for developing and testing modern estimation and control applications for nuclear power plants is demonstrated. A Kalman filter based instrument failure detection technique for a pressurized water reactor pressurizer is implemented on the Department of Energy N Reactor Training Simulator. This real-time failure detection method computes the first two moments (mean and variance) of each element of a normalized filter innovations vector. Failed pressurizer instrumentation can be detected by comparing these moments to the known statistical properties of the steady-state, linear Kalman filter innovations sequence. The capabilities of the detection system are evaluated using simulated plant transients and instrument failures
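The moment check described can be sketched directly: under no-failure conditions the normalized innovations of a matched filter are zero-mean with unit variance, so a drift in either moment points to a failed instrument. The data below are simulated, not from the N Reactor study:

```python
import numpy as np

def moments_ok(nu_norm, mean_tol=0.3, var_lo=0.5, var_hi=2.0):
    """True if the normalized innovations still look like N(0, 1)."""
    m, v = float(np.mean(nu_norm)), float(np.var(nu_norm))
    return abs(m) <= mean_tol and var_lo <= v <= var_hi

rng = np.random.default_rng(1)
healthy = rng.standard_normal(500)   # matched filter: N(0, 1) innovations
biased = healthy + 1.0               # biased instrument shifts the mean
noisy = 3.0 * healthy                # degraded instrument inflates the variance
```

In practice the moments are computed over a sliding window so that the onset of a failure, not just its presence, can be located.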

  11. A Methodology for Making Early Comparative Architecture Performance Evaluations

    Science.gov (United States)

    Doyle, Gerald S.

    2010-01-01

    Complex and expensive systems' development suffers from a lack of method for making good system-architecture-selection decisions early in the development process. Failure to make a good system-architecture-selection decision increases the risk that a development effort will not meet cost, performance and schedule goals. This research provides a…

  12. Research on high availability architecture of SQL and NoSQL

    Science.gov (United States)

    Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao

    2017-03-01

With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to develop in performance and scalability, but more and more companies tend to use NoSQL databases, because a NoSQL database has a simpler data model and stronger extension capacity than a SQL database. Almost all database designers, for both SQL and NoSQL databases, aim to improve performance and ensure availability through reasonable architecture that can reduce the effects of software failures and hardware failures, so that they can provide better experiences for their customers. In this paper, I mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.
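The availability mechanics these architectures share, detecting an unreachable primary and failing over to a standby, can be sketched independently of any particular database. A toy probe-and-failover helper, with TCP reachability standing in for a real protocol-level health check such as a MySQL ping or a Redis Sentinel query:

```python
import socket

def is_alive(host, port, timeout=1.0):
    """TCP-level liveness probe; a real deployment would use a
    protocol-level health check instead."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active(nodes):
    """Return the first reachable (host, port) pair, mimicking a simple
    primary/standby failover order; None when every node is down."""
    for node in nodes:
        if is_alive(*node):
            return node
    return None
```

Production systems layer quorum, replication lag checks, and fencing on top of this basic probe, but the detect-then-redirect loop is the same.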

  13. Fault-tolerant architecture: Evaluation methodology

    International Nuclear Information System (INIS)

    Battle, R.E.; Kisner, R.A.

    1992-08-01

    The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems

  14. Optimally Robust Redundancy Relations for Failure Detection in Uncertain Systems,

    Science.gov (United States)

    1983-04-01

particular applications. While the general methods provide the basis for what in principle should be a widely applicable failure detection methodology...modifications to this result which overcome them at no fundamental increase in complexity. 4.1 Scaling A critical problem with the criteria of the preceding...criterion which takes scaling into account (45). As in (38), we can multiply the C. by positive scalars to take into account unequal weightings on

  15. Architecture of Brazil 1900-1990

    CERN Document Server

    Segawa, Hugo

    2013-01-01

    Architecture of Brazil: 1900-1990 examines the processes that underpin modern Brazilian architecture under various influences and characterizes different understandings of modernity, evident in the chapter topics of this book. Accordingly, the author does not give overall preference to particular architects nor works, with the exception of a few specific works and architects, including Warchavchik, Niemeyer, Lucio Costa, and Vilanova Artigas. In summary, this book: Meticulously examines the controversies, achievements, and failures in constructing spaces, buildings, and cities in a dynamic country Gives a broad view of Brazilian architecture in the twentieth century Proposes a reinterpretation of the varied approaches of the modern movement up to the Second World War Analyzes ideological impacts of important Brazilian architects including Oscar Niemeyer, Lucio Costa and Vilanova Artigas Discusses work of expatriate architects in Brazil Features over 140 illustrations In Architecture of Brazil: 1900-1990, S...

  16. Failure mitigation in software defined networking employing load type prediction

    KAUST Repository

    Bouacida, Nader

    2017-07-31

The controller is a critical piece of the SDN architecture, where it is considered the mastermind of SDN networks. Thus, its failure will cause a significant portion of the network to fail. Overload is one of the common causes of failure, since the controller is frequently invoked by new flows. Even though SDN controllers are often replicated, the significant recovery time can be overkill for the availability of the entire network. In order to overcome the problem of overloaded controller failure in SDN, this paper proposes a novel controller offload solution for failure mitigation based on a prediction module that anticipates the presence of a harmful long-term load. In fact, a long-standing load would eventually overwhelm the controller, leading to a possible failure. To predict whether the load on the controller is a short-term or long-term load, we used three different classification algorithms: Support Vector Machine, k-Nearest Neighbors, and Naive Bayes. Our evaluation results demonstrate that the Support Vector Machine algorithm is applicable for detecting the type of load with an accuracy of 97.93% in a real-time scenario. Besides, our scheme succeeded in offloading the controller by switching between the reactive and proactive modes in response to the prediction module output.
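The classification step can be illustrated with scikit-learn; the two features and their distributions below are invented stand-ins for whatever load statistics the controller actually exposes, so this is a sketch of the technique rather than a reproduction of the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# hypothetical features per load sample: [request rate, observed duration in s]
short_term = np.column_stack([rng.normal(50, 10, 200), rng.normal(2, 0.5, 200)])
long_term = np.column_stack([rng.normal(55, 10, 200), rng.normal(60, 10, 200)])
X = np.vstack([short_term, long_term])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = long-term load

clf = SVC(kernel="rbf").fit(X, y)

def is_long_term(rate, duration):
    """True when the classifier predicts a harmful long-term load,
    i.e. when proactive offloading should be triggered."""
    return bool(clf.predict([[rate, duration]])[0])
```

The paper compares this SVM against k-Nearest Neighbors and Naive Bayes; in scikit-learn the swap is a one-line change of estimator.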

  17. Evaluation of digital fault-tolerant architectures for nuclear power plant control systems

    International Nuclear Information System (INIS)

    Battle, R.E.

    1990-01-01

    This paper reports on four fault-tolerant architectures that were evaluated for their potential reliability in service as control systems of nuclear power plants. The reliability analyses showed that human- and software-related common cause failures and single points of failure in the output modules are dominant contributors to system unreliability. The four architectures are triple-modular-redundant, both synchronous and asynchronous, and also dual synchronous and asynchronous. The evaluation includes a review of design features, an analysis of the importance of coverage, and reliability analyses of fault-tolerant systems. Reliability analyses based on data from several industries that have fault-tolerant controllers were used to estimate the mean-time-between-failures of fault-tolerant controllers and to predict those failure modes that may be important in nuclear power plants

  18. Tree-based server-middleman-client architecture: improving scalability and reliability for voting-based network games in ad hoc wireless networks

    Science.gov (United States)

    Guo, Y.; Fujinoki, H.

    2006-10-01

The concept of a new tree-based architecture for networked multi-player games was proposed by Matuszek to improve scalability in network traffic and, at the same time, reliability. The architecture (we refer to it as the "Tree-Based Server-Middlemen-Client" architecture) addresses the two major problems in ad-hoc wireless networks, frequent link failures and significant battery power consumption at wireless transceivers, by using two new techniques: recursive aggregation of client messages and subscription-based propagation of game state. However, the performance of the TB-SMC architecture has never been quantitatively studied. In this paper, the TB-SMC architecture is compared with the client-server architecture using simulation experiments. We developed an event-driven simulator to evaluate the performance of the TB-SMC architecture. In the network traffic scalability experiments, the TB-SMC architecture resulted in less than 1/14 of the network traffic load for 200 end users. In the reliability experiments, the TB-SMC architecture improved the number of successfully delivered players' votes by 31.6, 19.0, and 12.4% over the client-server architecture at high (failure probability of 90%), moderate (50%), and low (10%) failure probabilities.
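Recursive aggregation of client messages, one of the two techniques named above, can be sketched as a bottom-up merge: each middleman forwards a single tally instead of relaying every client vote, so each link carries one message per round. The data structures are illustrative, not taken from the TB-SMC simulator:

```python
class Node:
    """A tree node: leaves are clients carrying a vote, interior nodes
    are middlemen (or the server at the root)."""
    def __init__(self, children=None, vote=None):
        self.children = children or []
        self.vote = vote

def aggregate(node):
    """Recursively merge child messages so each parent link carries a
    single vote tally rather than one message per client."""
    if not node.children:
        return {node.vote: 1} if node.vote is not None else {}
    tally = {}
    for child in node.children:
        for vote, count in aggregate(child).items():
            tally[vote] = tally.get(vote, 0) + count
    return tally
```

With N clients under one middleman, the server receives one aggregate message instead of N, which is where the traffic reduction comes from.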

  19. Model-based failure detection for cylindrical shells from noisy vibration measurements.

    Science.gov (United States)

    Candy, J V; Fisher, K A; Guidry, B L; Chambers, D H

    2014-12-01

    Model-based processing is a theoretically sound methodology to address difficult objectives in complex physical problems involving multi-channel sensor measurement systems. It involves the incorporation of analytical models of both physical phenomenology (complex vibrating structures, noisy operating environment, etc.) and the measurement processes (sensor networks and including noise) into the processor to extract the desired information. In this paper, a model-based methodology is developed to accomplish the task of online failure monitoring of a vibrating cylindrical shell externally excited by controlled excitations. A model-based processor is formulated to monitor system performance and detect potential failure conditions. The objective of this paper is to develop a real-time, model-based monitoring scheme for online diagnostics in a representative structural vibrational system based on controlled experimental data.

  20. Using pattern analysis methods to do fast detection of manufacturing pattern failures

    Science.gov (United States)

    Zhao, Evan; Wang, Jessie; Sun, Mason; Wang, Jeff; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua

    2016-03-01

At advanced technology nodes, logic design has become extremely complex and is getting more challenging as pattern geometry sizes decrease. The small sizes of layout patterns are becoming very sensitive to process variations. Meanwhile, the pressure of yield ramp is always high due to time-to-market competition. The company that achieves patterning maturity earlier than others will have a great advantage and a better chance to realize maximum profit margins. For debugging silicon failures, DFT diagnostics can identify which nets or cells caused the yield loss. But normally, a long time and many resources are needed to identify which failures are due to one common layout pattern or structure. This paper will present a new yield diagnostic flow, based on preliminary EFA results, to show how pattern analysis can more efficiently detect pattern-related systematic defects. Increased visibility of design pattern related failures also allows more precise yield loss estimation.

  1. Brain architecture: A design for natural computation

    OpenAIRE

    Kaiser, Marcus

    2008-01-01

    Fifty years ago, John von Neumann compared the architecture of the brain with that of computers that he invented and which is still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and ...

  2. NATO Symposium on Human Detection and Diagnosis of System Failures

    CERN Document Server

    Rouse, William

    1981-01-01

This book includes all of the papers presented at the NATO Symposium on Human Detection and Diagnosis of System Failures held at Roskilde, Denmark on August 4-8, 1980. The Symposium was sponsored by the Scientific Affairs Division of NATO and the Risø National Laboratory of Denmark. The goal of the Symposium was to continue the tradition initiated by the NATO Symposium on Monitoring Behavior and Supervisory Control held in Berchtesgaden, F.R. Germany in 1976 and the NATO Symposium on Theory and Measurement of Mental Workload held in Mati, Greece in 1977. To this end, a group of 85 psychologists and engineers coming from industry, government, and academia convened to discuss, and to generate a "state-of-the-art" consensus of, the problems and solutions associated with the human's ability to cope with the increasing scale of consequences of failures within complex technical systems. The Introduction of this volume reviews their findings. The Symposium was organized to include brief formal presentations of pape...

  3. Sophisticated Calculation of the 1oo4-architecture for Safety-related Systems Conforming to IEC61508

    International Nuclear Information System (INIS)

    Hayek, A; Al Bokhaiti, M; Schwarz, M H; Boercsoek, J

    2012-01-01

With the publication and enforcement of the standard IEC 61508 for safety-related systems, recent system architectures have been presented and evaluated. Among a number of techniques and measures for the evaluation of the safety integrity level (SIL) of safety-related systems, measures such as reliability block diagrams and Markov models are used to analyze the probability of failure on demand (PFD) and mean time to failure (MTTF) in conformance with IEC 61508. The current paper deals with the quantitative analysis of the novel 1oo4-architecture (one out of four) presented in recent work, and sophisticated calculations for the required parameters are introduced. The 1oo4-architecture represents an advanced safety architecture based on on-chip redundancy which is 3-failure safe: at least one of the four channels has to work correctly in order to trigger the safety function.

  4. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches
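The outer optimization loop is a plain genetic algorithm over two integers, the number of delayed inputs and the hidden layer size. In the sketch below the fitness function is a hypothetical stand-in for the (negated) validation error of a trained network, which is the expensive part of the real method; everything else mirrors the select-crossover-mutate loop:

```python
import random
random.seed(0)

def fitness(n_delay, n_hidden):
    # stand-in for the negated validation error of a trained network;
    # this hypothetical surface peaks at (4 delayed inputs, 12 hidden neurons)
    return -((n_delay - 4) ** 2 + (n_hidden - 12) ** 2)

def evolve(pop_size=20, gens=30):
    """Evolve (n_delay, n_hidden) pairs: keep the fitter half, then fill the
    population with crossed-over and occasionally mutated children."""
    pop = [(random.randint(1, 10), random.randint(1, 30)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: fitness(*g), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (random.choice([a[0], b[0]]), random.choice([a[1], b[1]]))
            if random.random() < 0.2:  # small integer mutation
                child = (max(1, child[0] + random.choice([-1, 1])),
                         max(1, child[1] + random.choice([-1, 1])))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(*g))
```

In the paper each fitness evaluation means training a network with Levenberg-Marquardt plus Bayesian regularization, so the GA's job is to spend those expensive evaluations wisely.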

  5. Failure mitigation in software defined networking employing load type prediction

    KAUST Repository

    Bouacida, Nader; Alghadhban, Amer Mohammad JarAlla; Alalmaei, Shiyam Mohammed Abdullah; Mohammed, Haneen; Shihada, Basem

    2017-01-01

    The controller is a critical piece of the SDN architecture, where it is considered as the mastermind of SDN networks. Thus, its failure will cause a significant portion of the network to fail. Overload is one of the common causes of failure since

  6. Link failure detection in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

    2010-11-09

    Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
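The two-group assignment in the claim is a checkerboard coloring: `(x + y) % 2` guarantees adjacent nodes land in different groups, so every link has exactly one sender and one receiver per test round. A small sketch of that scheme on a 2-D mesh, with `link_ok` as a hypothetical stand-in for whether a test message actually arrives:

```python
def group(x, y):
    """Checkerboard assignment: adjacent mesh nodes always differ."""
    return (x + y) % 2

def detect_failed_links(width, height, link_ok):
    """Group-0 nodes send a test message over each adjacent link; a link is
    reported when the group-1 neighbour receives nothing. link_ok(a, b)
    models the physical link between two node coordinates."""
    failures = []
    for x in range(width):
        for y in range(height):
            if group(x, y) != 0:
                continue
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < width and 0 <= ny < height and not link_ok((x, y), (nx, ny)):
                    failures.append(((x, y), (nx, ny)))
    return failures
```

The patent also runs the symmetric round (second group sends, first group checks) so both links of each adjacent pair are exercised; the sketch shows one round.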

  7. Real Time Fire Reconnaissance Satellite Monitoring System Failure Model

    Science.gov (United States)

    Nino Prieto, Omar Ariosto; Colmenares Guillen, Luis Enrique

    2013-09-01

In this paper the Real Time Fire Reconnaissance Satellite Monitoring System is presented. This architecture is a legacy of the Detection System for Real-Time Physical Variables, which is undergoing a patent process in Mexico. The design methodology is Structured Analysis for Real Time (SA-RT) [8], and the software is specified in the LACATRE (Langage d'aide à la Conception d'Application multitâche Temps Réel) [9,10] Real Time formal language. The system failure model is analyzed, and the proposal is based on AltaRica, a formal language for the design of critical systems and risk assessment. This architecture uses satellites as input sensors and was adapted from the original model, a design pattern for physical variation detection in Real Time whose task is to monitor events such as natural disasters as well as health-related applications, such as sickness monitoring and prevention in the Real Time Diabetes Monitoring System, among others. Some related work has been presented at the Mexican Space Agency (AEM) Creation and Consultation Forums (2010-2011), and at the International Mexican Aerospace Science and Technology Society (SOMECYTA) international congress held in San Luis Potosí, México (2012). This architecture will allow Real Time Fire Satellite Monitoring, which will reduce the damage and danger caused by the fires that consume the forests and tropical forests of Mexico. This new proposal permits having a new system that impacts disaster prevention by combining national and international technologies and cooperation for the benefit of humankind.

  8. Mathematical modeling of a new satellite thermal architecture system connecting the east and west radiator panels and flight performance prediction

    International Nuclear Information System (INIS)

    Torres, Alejandro; Mishkinis, Donatas; Kaya, Tarik

    2014-01-01

    An entirely novel satellite thermal architecture, connecting the east and west radiators of a geostationary telecommunications satellite via loop heat pipes (LHPs), is proposed. The LHP operating temperature is regulated by using pressure regulating valves (PRVs). A transient numerical model is developed to simulate the thermal dynamic behavior of the proposed system. The details of the proposed architecture and mathematical model are presented. The model is used to analyze a set of critical design cases to identify potential failure modes prior to the qualification and in-orbit tests. The mathematical model results for critical cases are presented and discussed. The model results demonstrated the robustness and versatility of the proposed architecture under the predicted worst-case conditions. - Highlights: •We developed a mathematical model of a novel satellite thermal architecture. •We provided the dimensioning cases to design the thermal architecture. •We provided the failure mode cases to verify the thermal architecture. •We provided the results of the corresponding dimensioning and failure cases

  9. Adaptive and technology-independent architecture for fault-tolerant distributed AAL solutions.

    Science.gov (United States)

    Schmidt, Michael; Obermaisser, Roman

    2018-04-01

Today's architectures for Ambient Assisted Living (AAL) must cope with a variety of challenges like flawless sensor integration and time synchronization (e.g. for sensor data fusion) while abstracting from the underlying technologies at the same time. Furthermore, an architecture for AAL must be capable of managing distributed application scenarios in order to support elderly people in all situations of their everyday life. This encompasses not just life at home but in particular the mobility of elderly people (e.g. when going for a walk or doing sports) as well. Within this paper we will introduce a novel architecture for distributed AAL solutions whose design follows a modern microservices approach by providing small core services instead of a monolithic application framework. The architecture comprises core services for sensor integration and service discovery while supporting several communication models (periodic, sporadic, streaming). We extend the state of the art by introducing a fault-tolerance model for our architecture on the basis of a fault hypothesis describing the fault-containment regions (FCRs) with their respective failure modes and failure rates in order to support safety-critical AAL applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
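One building block such an architecture needs is a failure detector for its distributed services. A generic heartbeat monitor, sketched below with made-up service names and timeout, bounds how long a crashed sensor service can go unnoticed; this illustrates the idea rather than the paper's actual mechanism:

```python
import time

class HeartbeatMonitor:
    """Timeout-based failure detector: a service is suspected once its most
    recent heartbeat is older than the configured timeout."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last = {}

    def beat(self, service, now=None):
        """Record a heartbeat; `now` may be injected for testing."""
        self.last[service] = time.monotonic() if now is None else now

    def suspected(self, now=None):
        """Services whose heartbeat has gone stale."""
        now = time.monotonic() if now is None else now
        return [s for s, t in self.last.items() if now - t > self.timeout]
```

Choosing the timeout trades detection latency against false suspicions on slow links, which is exactly the kind of parameter a fault hypothesis with failure rates helps to justify.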

  10. Architectural Analysis of Dynamically Reconfigurable Systems

    Science.gov (United States)

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

Topics include: the problem (increased flexibility of architectural styles decreases analyzability, behavior emerges and varies depending on the configuration, does the resulting system run according to the intended design, and architectural decisions can impede or facilitate testing); top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; and CFS examples of abstract interfaces and testability; CFS example of opening some internal details.

  11. Evaluation of digital fault-tolerant architectures for nuclear power plant control systems

    International Nuclear Information System (INIS)

    Battle, R.E.

    1990-01-01

Four fault-tolerant architectures were evaluated for their potential reliability in service as control systems of nuclear power plants. The reliability analyses showed that human- and software-related common cause failures and single points of failure in the output modules are dominant contributors to system unreliability. The four architectures are triple-modular-redundant (TMR), both synchronous and asynchronous, and dual, synchronous and asynchronous. The evaluation includes a review of design features, an analysis of the importance of coverage, and reliability analyses of fault-tolerant systems. An advantage of fault-tolerant controllers over those that are not fault tolerant is that fault-tolerant controllers continue to function after the occurrence of most single hardware faults. However, most fault-tolerant controllers have single hardware components that will cause system failure, almost all controllers have single points of failure in software, and all are subject to common cause failures. Reliability analyses based on data from several industries that have fault-tolerant controllers were used to estimate the mean-time-between-failures of fault-tolerant controllers and to predict those failure modes that may be important in nuclear power plants. 7 refs., 4 tabs

  12. Assessing the Impact of CAAD Design Tool on Architectural Design Education

    Science.gov (United States)

    Al-Matarneh, Rana; Fethi, Ihsan

    2017-01-01

    The current concept of architectural design education in most schools of architecture in Jordan is a blend between manual and digital approaches. However, the disconnection between these two methods has resulted in the students' failure to transfer skills learnt through traditional methods to the digital method of CAAD. The objective of this study…

  13. Modification of fuel failure detection system at multi-purpose reactor RSG-GAS, BATAN

    International Nuclear Information System (INIS)

    Haruyama, Mitsuo; Shitomi, Hajime; Nakamura, Kiyoshi

    2003-03-01

As one of the technical cooperation activities based on Annex III, the Cooperation in the Area of Reactor Physics and Technology, of the Arrangement between the National Energy Agency (BATAN) and the Japan Atomic Energy Research Institute (JAERI), the modification of the Fuel Failure Detection System (FFDS) was carried out as a joint work at the Multi-purpose Reactor RSG-G.A. Siwabessy (RSG-GAS). The system uses the delayed neutron detection method. In the normal state, as the background, it measures the gross delayed neutron concentration emitted in the primary coolant from fission product (FP) nuclides, which result from a very small amount of fissile material contamination on the fuel plate surface from the fabrication process. When a failure occurs in the fuel cladding, FP leaks from the fuel meat into the primary coolant. The system then shows a much higher indication than in the normal state, so the fuel failure can be detected at an early stage and the damage to the reactor facility and to the environment minimized. The system was first installed in November 1994 and applied for reactor operation. Recently, however, it has become difficult to maintain the system because of aging degradation and a shortage of spare units and parts that are hard to find on the market. The modification of FFDS is required for safe and steady reactor operation. The design requirements of the modification are: to save the system units currently used and the spares on hand as long as practicable, and/or to replace the system units with those easy to maintain or to obtain on the market. The modified system achieved around twice the sensitivity for delayed neutron detection compared to before, and more reliable monitoring through redundancy. The specification, installation, and adjustment methods and characteristics of the modified system, and the modus operandi of FFDS at high-power reactor operation, are described in this paper. (author)

  14. Early Detection of Plant Equipment Failures: A Case Study in Just-in-Time Maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Parlos, Alexander G.; Kim, Kyusung; Bharadwaj, Raj M.

    2001-06-17

    The development and testing of a model-based fault detection system for electric motors is briefly presented. The fault detection system was developed using only motor nameplate information. The fault detection results presented utilize only motor voltage and current sensor information, minimizing the need for expensive or intrusive sensors. Dynamic recurrent neural networks are used to predict the input-output response of a three-phase induction motor while using an estimate of the motor speed signal. Multiresolution (or wavelet) signal-processing techniques are used in combination with more traditional methods to estimate fault features for use in winding insulation and motor mechanical and electromechanical failure detection.

  15. Early Detection of Plant Equipment Failures: A Case Study in Just-in-Time Maintenance

    International Nuclear Information System (INIS)

    Parlos, Alexander G.; Kim, Kyusung; Bharadwaj, Raj M.

    2001-01-01

    The development and testing of a model-based fault detection system for electric motors is briefly presented. The fault detection system was developed using only motor nameplate information. The fault detection results presented utilize only motor voltage and current sensor information, minimizing the need for expensive or intrusive sensors. Dynamic recurrent neural networks are used to predict the input-output response of a three-phase induction motor while using an estimate of the motor speed signal. Multiresolution (or wavelet) signal-processing techniques are used in combination with more traditional methods to estimate fault features for use in winding insulation and motor mechanical and electromechanical failure detection

  16. Modeling safety instrumented systems with MooN voting architectures addressing system reconfiguration for testing

    International Nuclear Information System (INIS)

    Torres-Echeverria, A.C.; Martorell, S.; Thompson, H.A.

    2011-01-01

This paper addresses the modeling of the probability of dangerous failure on demand and the spurious trip rate of safety instrumented systems that include MooN voting redundancies in their architecture. MooN systems are a special case of k-out-of-n systems. The first part of the article is devoted to the development of a time-dependent probability of dangerous failure on demand model capable of handling MooN systems. The model can explicitly represent common cause failure and diagnostic coverage, as well as different test frequencies and strategies. It includes quantification of both detected and undetected failures, and puts emphasis on the quantification of common cause failure as an additional component of the system probability of dangerous failure on demand. In order to accommodate changes in testing strategies, special treatment is devoted to the analysis of system reconfiguration (including common cause failure) during the test of one of its components, which is then included in the model. Another model, for spurious trip rate, is also analyzed and extended under the same methodology in order to give it similar capabilities. These two models are powerful enough, but at the same time simple enough, to be suitable for handling dependability measures in multi-objective optimization of both system design and test strategies for safety instrumented systems. The level of modeling detail considered permits compliance with the requirements of the standard IEC 61508. The two models are applied to brief case studies to demonstrate their effectiveness. The results obtained demonstrate that the first model is adequate to quantify time-dependent PFD of MooN systems during different system states (i.e. full operation, test and repair) and different MooN configurations, whose values are averaged to obtain the PFD_avg.
Also, it was demonstrated that the second model is adequate to quantify STR including spurious trips induced by internal component failure and
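For orientation, a rough textbook-style average-PFD approximation for a MooN voter (not the time-dependent model of this paper) can be written down directly: a MooN system fails dangerously when more than N-M channels are down, and a beta factor splits each channel's dangerous undetected failure rate into independent and common cause parts. The formula below is a coarse sketch under those assumptions:

```python
from math import comb

def pfd_avg_moon(m, n, lambda_du, ti, beta=0.0):
    """Rough average PFD of a MooN voter. Independent contribution via a
    binomial over the per-channel unavailability q = (1-beta)*lambda_du*ti/2
    (system dangerous when more than n-m channels are failed), plus a simple
    beta-factor common cause failure term. A coarse approximation only."""
    q = (1 - beta) * lambda_du * ti / 2.0
    independent = sum(comb(n, k) * q**k * (1 - q)**(n - k)
                      for k in range(n - m + 1, n + 1))
    ccf = beta * lambda_du * ti / 2.0
    return independent + ccf
```

Even this crude form reproduces the qualitative behaviour the paper studies: redundancy (1oo2 versus 1oo1) cuts the independent PFD dramatically, while the common cause term quickly becomes the floor that dominates highly redundant configurations.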

  17. Automatic crack detection method for loaded coal in vibration failure process.

    Directory of Open Access Journals (Sweden)

    Chengwu Li

Full Text Available In the coal mining process, the destabilization of a loaded coal mass is a prerequisite for coal and rock dynamic disaster, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal during a vibration failure process is proposed based on the characteristics of the surface cracks of coal and a support vector machine (SVM). A large number of crack images are obtained by establishing a vibration-induced failure test system with an industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the cracks; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively and automatically identify cracks on the surface of the coal and rock mass.
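The two preprocessing steps named in the abstract, histogram equalization followed by a hysteresis threshold, can be sketched with plain NumPy. The intensity ranges and thresholds below are illustrative, not the paper's parameters:

```python
import numpy as np

def equalize(img):
    """Histogram equalization: spread the intensity distribution so faint
    cracks gain contrast before thresholding (img in [0, 255])."""
    hist, bins = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size
    return np.interp(img, bins[:-1], cdf * 255.0)

def hysteresis_threshold(img, low, high):
    """Keep weak crack responses only when they are 4-connected to at least
    one strong response, suppressing isolated noise pixels."""
    weak = img >= low
    strong = img >= high
    keep = np.zeros_like(weak)
    stack = list(zip(*np.nonzero(strong)))
    while stack:                      # flood fill outward from strong pixels
        r, c = stack.pop()
        if not weak[r, c] or keep[r, c]:
            continue
        keep[r, c] = True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]:
                if weak[nr, nc] and not keep[nr, nc]:
                    stack.append((nr, nc))
    return keep
```

The resulting binary mask is what the eight crack features would then be computed on before SVM classification.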

  18. Non-Destructive Failure Detection and Visualization of Artificially and Naturally Aged PV Modules

    Directory of Open Access Journals (Sweden)

    Gabriele C. Eder

    2018-04-01

Full Text Available Several series of six-cell photovoltaic test-modules—intact and with deliberately generated failures (micro-cracks, cell cracks, glass breakage and connection defects)—were artificially and naturally aged. They were exposed to various stress conditions (temperature, humidity and irradiation) in different climate chambers in order to identify (i) the stress-induced effects; (ii) the potential propagation of the failures and (iii) their influence on the performance. For comparison, one set of test-modules was also aged at an outdoor test site. All photovoltaic (PV) modules were thoroughly electrically characterized by electroluminescence and performance measurements before and after the accelerated ageing and the outdoor test. In addition, the formation of fluorescence effects in the encapsulation of the test modules over the course of the accelerated ageing tests was followed over time using UV-fluorescence imaging measurements. It was found that the performance of PV test modules with mechanical module failures was rather unaffected by storage under various stress conditions. However, numerous micro-cracks led to a higher rate of degradation. The polymeric encapsulant of the PV modules showed the build-up of distinctive fluorescence effects with increasing lifetime as the encapsulant material degraded under the influence of climatic stress factors (mainly irradiation by sunlight and elevated temperature) by forming fluorophores. The induction period for the fluorescence effects of the polymeric encapsulant to become detectable was ~1 year of outdoor weathering (in middle Europe) and 300 h of artificial irradiation (with 1000 W/m2 artificial sunlight, 300–2500 nm). In the presence of irradiation, oxygen—which permeated into the module through the polymeric backsheet—bleached the fluorescence of the encapsulant top layer between the cells, above cell cracks and micro-cracks. Thus, UV-F imaging is a perfect tool for on-site detection of module failures

  19. WebSpy: An Architecture for Monitoring Web Server Availability in a Multi-Platform Environment

    Directory of Open Access Journals (Sweden)

    Madhan Mohan Thirukonda

    2002-01-01

    Full Text Available For an electronic business (e-business, customer satisfaction can be the difference between long-term success and short-term failure. Customer satisfaction is highly impacted by Web server availability, as customers expect a Web site to be available twenty-four hours a day and seven days a week. Unfortunately, unscheduled Web server downtime is often beyond the control of the organization. What is needed is an effective means of identifying and recovering from Web server downtime in order to minimize the negative impact on the customer. An automated architecture, called WebSpy, has been developed to notify administration and to take immediate action when Web server downtime is detected. This paper describes the WebSpy architecture and differentiates it from other popular Web monitoring tools. The results of a case study are presented as a means of demonstrating WebSpy's effectiveness in monitoring Web server availability.
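The core idea of an availability monitor like WebSpy can be sketched in a few lines: probe each server on a schedule and trigger a notification action when the probe fails. The following is a minimal illustrative sketch, not WebSpy's actual interface; the function names and the notification hook are assumptions:

```python
import urllib.request
import urllib.error

def check_server(url, timeout=5.0):
    """Return True if the server answers the request at all (any HTTP status)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # unreachable, DNS failure or timeout -> downtime

def notify(url):
    # Placeholder for a WebSpy-style action (e-mail, pager, restart hook).
    print(f"ALERT: {url} appears to be down")

def monitor_once(urls):
    """One polling pass over the monitored URLs."""
    for url in urls:
        if not check_server(url):
            notify(url)
```

In practice such a loop would run periodically (e.g. from a scheduler) so that downtime is detected within one polling interval.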

  20. Real-time instrument-failure detection in the LOFT pressurizer using functional redundancy

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1982-07-01

    The functional redundancy approach to detecting instrument failures in a pressurized water reactor (PWR) pressurizer is described and evaluated. This real-time method uses a bank of Kalman filters (one for each instrument) to generate optimal estimates of the pressurizer state. By performing consistency checks between the output of each filter, failed instruments can be identified. Simulation results and actual pressurizer data are used to demonstrate the capabilities of the technique
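The consistency-check idea can be illustrated with a deliberately simplified sketch: one scalar Kalman filter per redundant sensor and a median-based cross-check of the filters' estimates. The LOFT implementation uses full pressurizer state models; everything below is an illustrative toy with assumed noise parameters:

```python
import random

def kalman_track(readings, q=1e-4, r=0.25):
    """Scalar Kalman filter for a slowly varying state; returns final estimate."""
    x, p = readings[0], 1.0
    for z in readings[1:]:
        p += q                      # predict (random-walk state model)
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with measurement z
        p *= (1.0 - k)
    return x

def failed_sensor(estimates, tol=0.5):
    """Flag the filter whose estimate deviates from the median by more than tol."""
    med = sorted(estimates)[len(estimates) // 2]
    bad = [i for i, e in enumerate(estimates) if abs(e - med) > tol]
    return bad[0] if bad else None

random.seed(0)
truth = 10.0
sensors = [[truth + random.gauss(0, 0.1) for _ in range(200)] for _ in range(3)]
sensors[2] = [z + 3.0 for z in sensors[2]]   # sensor 2 develops a 3-unit bias
estimates = [kalman_track(s) for s in sensors]
print(failed_sensor(estimates))  # -> 2
```

The cross-check isolates the failed instrument because the remaining filters stay mutually consistent.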

  1. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan Heangping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area A_z under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzman schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost

  2. Sensitivity of the probability of failure to probability of detection curve regions

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2016-01-01

    Non-destructive inspection (NDI) techniques have been shown to play a vital role in fracture control plans, structural health monitoring, and ensuring availability and reliability of piping, pressure vessels, mechanical and aerospace equipment. Probabilistic fatigue simulations are often used in order to determine the efficacy of an inspection procedure with the NDI method modeled as a probability of detection (POD) curve. These simulations can be used to determine the most advantageous NDI method for a given application. As an aid to this process, a first order sensitivity method of the probability-of-failure (POF) with respect to regions of the POD curve (lower tail, middle region, right tail) is developed and presented here. The sensitivity method computes the partial derivative of the POF with respect to a change in each region of a POD or multiple POD curves. The sensitivities are computed at no cost by reusing the samples from an existing Monte Carlo (MC) analysis. A numerical example is presented considering single and multiple inspections. - Highlights: • Sensitivities of probability-of-failure to a region of probability-of-detection curve. • The sensitivities are computed with negligible cost. • Sensitivities identify the important region of a POD curve. • Sensitivities can be used as a guide to selecting the optimal POD curve.
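The "sensitivities at no cost by reusing samples" idea can be illustrated as follows. Since the POF is the Monte Carlo average of 1{failure}·(1 − POD(a)), its first-order sensitivity to a small additive bump in the POD over a region is just the negative average of 1{failure}·1{a in region}, evaluated on the existing samples. The crack-size distribution, POD curve and thresholds below are illustrative assumptions, not values from the paper:

```python
import math
import random

random.seed(1)
N = 100_000
a_crit = 4.0                                   # failure if crack size exceeds this
samples = [random.lognormvariate(1.0, 0.5) for _ in range(N)]

def pod(a, a50=3.0, beta=1.0):
    """Illustrative log-logistic POD curve (50% detection at a50)."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - math.log(a50)) / beta))

# POF = E[ 1{a > a_crit} * (1 - POD(a)) ], estimated from the MC samples.
pof = sum((a > a_crit) * (1.0 - pod(a)) for a in samples) / N

# Sensitivity of POF to an additive bump in POD over each region,
# computed by simply reusing the same samples: -E[ 1{fail} * 1{a in region} ].
regions = [(0.0, 2.0), (2.0, 5.0), (5.0, float("inf"))]  # lower tail, middle, right tail
sens = [-sum((a > a_crit) and (lo <= a < hi) for a in samples) / N
        for lo, hi in regions]
```

Here the lower-tail sensitivity comes out exactly zero (no failing crack lies below 2.0), reproducing the paper's point that the sensitivities identify which region of the POD curve actually matters.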

  3. Understanding failures in petascale computers

    International Nuclear Information System (INIS)

    Schroeder, Bianca; Gibson, Garth A

    2007-01-01

    With petascale computers only a year or two away there is a pressing need to anticipate and compensate for a probable increase in failure and application interruption rates. Researchers, designers and integrators have available to them far too little detailed information on the failures and interruptions that even smaller terascale computers experience. The information that is available suggests that application interruptions will become far more common in the coming decade, and the largest applications may surrender large fractions of the computer's resources to taking checkpoints and restarting from a checkpoint after an interruption. This paper reviews sources of failure information for compute clusters and storage systems, projects failure rates and the corresponding decrease in application effectiveness, and discusses coping strategies such as application-level checkpoint compression and system-level process-pairs fault-tolerance for supercomputing. The need for a public repository of detailed failure and interruption records is particularly acute, as projections from one architectural family of machines to another are widely disputed. To this end, this paper introduces the Computer Failure Data Repository and issues a call for failure history data to publish in it.
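The checkpoint/restart trade-off mentioned above can be made concrete with the well-known Young/Daly first-order approximation for the optimal checkpoint interval, τ ≈ √(2·C·MTBF). The sketch below (all numbers are assumptions) shows how machine utilization collapses as growing node counts shrink the system-wide MTBF:

```python
import math

def optimal_interval(checkpoint_cost_s, mtbf_s):
    """Young/Daly first-order optimal checkpoint interval in seconds."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def utilization(tau_s, checkpoint_cost_s, mtbf_s):
    """Rough fraction of time doing useful work: subtract checkpoint
    overhead and the expected rework after each failure."""
    ckpt_overhead = checkpoint_cost_s / tau_s
    rework = (tau_s / 2.0 + checkpoint_cost_s) / mtbf_s
    return max(0.0, 1.0 - ckpt_overhead - rework)

# System-wide MTBF shrinks with node count (assumed 5-year MTBF per node,
# 10-minute checkpoint cost -- both are illustrative figures).
for nodes in (1_000, 10_000, 100_000):
    mtbf = 5 * 365 * 86400 / nodes
    tau = optimal_interval(600.0, mtbf)
    print(nodes, round(tau), round(utilization(tau, 600.0, mtbf), 2))
```

Under these assumptions utilization drops from roughly 90% at a thousand nodes to essentially zero at a hundred thousand, which is the "surrender large fractions of the computer's resources" effect described in the abstract.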

  4. Post-mortem cardiac diffusion tensor imaging: detection of myocardial infarction and remodeling of myofiber architecture

    International Nuclear Information System (INIS)

    Winklhofer, Sebastian; Berger, Nicole; Stolzmann, Paul; Stoeck, Christian T.; Kozerke, Sebastian; Thali, Michael; Manka, Robert; Alkadhi, Hatem

    2014-01-01

    To investigate the accuracy of post-mortem diffusion tensor imaging (DTI) for the detection of myocardial infarction (MI) and to demonstrate the feasibility of helix angle (HA) calculation to study remodelling of myofibre architecture. Cardiac DTI was performed in 26 deceased subjects prior to autopsy for medicolegal reasons. Fractional anisotropy (FA) and mean diffusivity (MD) were determined. Accuracy was calculated on per-segment (AHA classification), per-territory, and per-patient basis, with pathology as reference standard. HAs were calculated and compared between healthy segments and those with MI. Autopsy demonstrated MI in 61/440 segments (13.9 %) in 12/26 deceased subjects. Healthy myocardial segments had significantly higher FA (p < 0.01) and lower MD (p < 0.001) compared to segments with MI. Analysis of HA distribution demonstrated remodelling of myofibre architecture, with significant differences between healthy segments and segments with chronic (p < 0.001) but not with acute MI (p > 0.05). Post-mortem cardiac DTI enables differentiation between healthy and infarcted myocardial segments by means of FA and MD. HA assessment allows for the demonstration of remodelling of myofibre architecture following chronic MI. (orig.)

  5. Analysis Method of Common Cause Failure on Non-safety Digital Control System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yun Goo; Oh, Eun Gse [KHNP, Daejeon (Korea, Republic of)

    2014-08-15

    The effects of common cause failure on safety digital instrumentation and control systems have been considered in defence-in-depth analyses using safety analysis methods. However, the effects of common cause failure on non-safety digital instrumentation and control systems should also be evaluated, since a common cause failure can be included among the credible failures of a non-safety system. In the I and C architecture of a nuclear power plant, many design features have been applied for the functional integrity of the control systems. One of these is segmentation, which prevents the propagation of faults through the I and C architecture; some effects of a common cause failure can likewise be limited by segmentation. Therefore, this paper considers two failure modes: failures within one segmented control group, and failures across multiple control groups, because segmentation cannot contain all effects of a common cause failure. For each type, the worst failure scenario must be determined, and an analysis method for doing so is proposed in this paper. The evaluation can be qualitative when there is sufficient justification that the effects are bounded by the previous safety analysis. When they are not so bounded, additional analysis should be performed, either with the conservative assumptions of the previous safety analysis method or with a best-estimate method using realistic assumptions.

  6. Studies on fuel failure detection in Rikkyo Research Reactor

    International Nuclear Information System (INIS)

    Matsuura, T.; Hayashi, S.H.; Harasawa, S.; Tomura, K.

    1992-01-01

    Studies on fuel failure detection have been made since 1986 in Rikkyo Research Reactor. One of the methods is the monitoring of the trace concentration of fission products appearing in the air at the surface of the water tank of the reactor. The radionuclides of interest here are ⁸⁹Rb and ¹³⁸Cs, which are the daughter nuclides of the FP rare-gas nuclides ⁸⁹Kr and ¹³⁸Xe, respectively, and have half-lives of 15.2 min and 32.2 min. They are detected on a filter paper attached to a conventional dust sampler, by sucking the air at the surface of the water for 15∼30 min during reactor operation (100 kW). In this presentation are reported the results of an attempt to increase the sensitivity of detecting these nuclides by introducing nitrogen gas bubbles into the water. The bubbling of the gas increased the sensitivity as much as several times compared with the case without bubbling. These measurements give the 'background' concentration, the order of which has remained almost unchanged over these several years, at about 10⁻⁶ Bq/cm³. The origin of these nuclides is considered to be not the fuel but the uranium contained as an impurity in the reactor material in the core. (author)

  7. Fission product release modelling for application of fuel-failure monitoring and detection - An overview

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, B.J., E-mail: lewibre@gmail.com [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario, K7K 7B4 (Canada); Chan, P.K.; El-Jaby, A. [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario, K7K 7B4 (Canada); Iglesias, F.C.; Fitchett, A. [Candesco Division of Kinectrics Inc., 26 Wellington Street East, 3rd Floor, Toronto, Ontario M5E 1S2 (Canada)

    2017-06-15

    A review of fission product release theory is presented in support of fuel-failure monitoring analysis for the characterization and location of defective fuel. This work is used to describe: (i) the development of the steady-state Visual-DETECT code for coolant activity analysis to characterize failures in the core and the amount of tramp uranium; (ii) a generalization of this model in the STAR code for prediction of the time-dependent release of iodine and noble gas fission products to the coolant during reactor start-up, steady-state, shutdown, and bundle-shifting manoeuvres; (iii) an extension of the model to account for the release of fission products that are delayed-neutron precursors for assessment of fuel-failure location; and (iv) a simplification of the steady-state model to assess the methodology proposed by WANO for a fuel reliability indicator for water-cooled reactors.
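As background to such coolant-activity models, the classical Booth equivalent-sphere result gives the steady-state release-to-birth ratio R/B of a fission product with decay constant λ as (3/μ)(coth μ − 1/μ) with μ = √(λ/D′), which reduces to 3√(D′/λ) for short-lived species. The sketch below uses illustrative numbers; the reduced diffusion coefficient D′ and the isotope choices are assumptions, not values from the paper:

```python
import math

def release_to_birth(lam, d_prime):
    """Exact Booth steady-state R/B for diffusion from an equivalent sphere.

    lam     : decay constant of the fission product (1/s)
    d_prime : empirical reduced diffusion coefficient D' (1/s), assumed
    """
    mu = math.sqrt(lam / d_prime)
    return (3.0 / mu) * (1.0 / math.tanh(mu) - 1.0 / mu)

# Shorter-lived isotopes decay before escaping, so R/B falls with lambda.
xe133 = release_to_birth(math.log(2) / (5.25 * 86400), 1e-10)  # Xe-133, ~5.25 d half-life
kr88  = release_to_birth(math.log(2) / (2.8 * 3600),  1e-10)   # Kr-88,  ~2.8 h half-life
```

This λ-dependence of R/B is what lets coolant-activity codes distinguish release from a defective rod (diffusion-limited, strong λ-dependence) from recoil release off tramp uranium.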

  8. Sustainable architecture and the Passive House concept: achievements and failures on energy matters

    OpenAIRE

    Van Moeseke, Geoffrey; PLEA2011 Conference

    2011-01-01

    The Passive House approach meets increasing success and will continue to spread in coming years. But conceptual thinking about sustainable architecture also goes forward. This paper intends to confront the Passive House approach with 5 principles of sustainability. These principles are based on de Myttenaere’s attempt to give a holistic definition of sustainable architecture. Although sustainability is a very large concern, only energy related matters are examined. This paper concludes on rec...

  9. Supervision and prognosis architecture based on dynamical classification method for the predictive maintenance of dynamical evolving systems

    International Nuclear Information System (INIS)

    Traore, M.; Chammas, A.; Duviella, E.

    2015-01-01

    In this paper, we are concerned with the improvement of the safety, availability and reliability of dynamical systems’ components subjected to slow degradations (slow drifts). We propose an architecture for efficient Predictive Maintenance (PM) according to the real-time estimate of the future state of the components. The architecture is built on supervision and prognosis tools. The prognosis method is based on an appropriate supervision technique that consists in drift tracking of the dynamical systems using AUDyC (AUto-adaptive and Dynamical Clustering), an auto-adaptive dynamical classifier. Because of the complexity and the dynamics of the considered systems, Failure Mode, Effects and Criticality Analysis (FMECA) is used to identify the key components of the systems. A component is defined as an element of the system that can be impacted by only one failure; a failure of a key component causes a long downtime of the system. From the FMECA, a Fault Tree Analysis (FTA) of the system is built to determine the propagation laws of a failure through the system by using a deductive method. The proposed architecture is implemented for the PM of a thermoregulator. The application to this real system highlights the interest and the performance of the proposed architecture

  10. Extended prediction rule to optimise early detection of heart failure in older persons with non-acute shortness of breath : A cross-sectional study

    NARCIS (Netherlands)

    Van Riet, Evelien E S; Hoes, Arno W.; Limburg, Alexander; Landman, Marcel A J; Kemperman, Hans; Rutten, Frans H.

    2016-01-01

    Objectives: There is a need for a practical tool to aid general practitioners in early detection of heart failure in the elderly with shortness of breath. In this study, such a screening rule was developed based on an existing rule for detecting heart failure in older persons with a diagnosis of

  11. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

    The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was

  12. Metaiodobenzylguanidine [131I] scintigraphy detects impaired myocardial sympathetic neuronal transport function of canine mechanical-overload heart failure

    International Nuclear Information System (INIS)

    Rabinovitch, M.A.; Rose, C.P.; Rouleau, J.L.

    1987-01-01

    In heart failure secondary to chronic mechanical overload, cardiac sympathetic neurons demonstrate depressed catecholamine synthetic and transport function. To assess the potential of sympathetic neuronal imaging for detection of depressed transport function, serial scintigrams were acquired after the intravenous administration of metaiodobenzylguanidine [131I] to 13 normal dogs, 3 autotransplanted (denervated) dogs, 5 dogs with left ventricular failure, and 5 dogs with compensated left ventricular hypertrophy due to a surgical arteriovenous shunt. Nine dogs were killed at 14 hours postinjection for determination of metaiodobenzylguanidine [131I] and endogenous norepinephrine content in left atrium, left ventricle, liver, and spleen. By 4 hours postinjection, autotransplanted dogs had a 39% reduction in mean left ventricular tracer accumulation, reflecting an absent intraneuronal tracer pool. Failure dogs demonstrated an accelerated early mean left ventricular tracer efflux rate (26.0%/hour versus 13.7%/hour in normals), reflecting a disproportionately increased extraneuronal tracer pool. They also showed reduced late left ventricular and left atrial concentrations of tracer, consistent with a reduced intraneuronal tracer pool. By contrast, compensated hypertrophy dogs demonstrated a normal early mean left ventricular tracer efflux rate (16.4%/hour) and essentially normal late left ventricular and left atrial concentrations of tracer. Metaiodobenzylguanidine [131I] scintigraphic findings reflect the integrity of the cardiac sympathetic neuronal transport system in canine mechanical-overload heart failure. Metaiodobenzylguanidine [123I] scintigraphy should be explored as a means of early detection of mechanical-overload heart failure in patients

  13. A neuro-fuzzy inference system for sensor failure detection using wavelet denoising, PCA and SPRT

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2001-01-01

    In this work, a neuro-fuzzy inference system combined with the wavelet denoising, PCA (principal component analysis) and SPRT (sequential probability ratio test) methods is developed to detect a relevant sensor failure using other sensor signals. The wavelet denoising technique is applied to remove noise components from the input signals to the neuro-fuzzy system. The PCA is used to reduce the dimension of the input space without losing a significant amount of information; it also simplifies the selection of the input signals to the neuro-fuzzy system, and a lower-dimensional input space usually reduces the time necessary to train a neuro-fuzzy system. The parameters of the neuro-fuzzy inference system, which estimates the relevant sensor signal, are optimized by a genetic algorithm and a least-squares algorithm. The residuals between the estimated signals and the measured signals are used to detect whether the sensors have failed. The SPRT is used in this failure detection algorithm. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level and hot-leg flowrate sensors in pressurized water reactors
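The SPRT step on the residuals can be sketched as follows: decide between H0 "residual ~ N(0, σ)" (sensor healthy) and H1 "residual ~ N(m1, σ)" (sensor drifted) by accumulating the Gaussian log-likelihood ratio until it crosses a threshold. The drift magnitude, noise level and error rates below are illustrative assumptions, not values from the paper:

```python
import math
import random

def sprt(residuals, m1=1.0, sigma=0.5, alpha=0.001, beta=0.001):
    """Wald's SPRT for a mean shift in Gaussian residuals.
    Returns (decision, number of samples consumed)."""
    upper = math.log((1 - beta) / alpha)     # accept H1 (sensor failed)
    lower = math.log(beta / (1 - alpha))     # accept H0 (sensor healthy)
    llr = 0.0
    for n, r in enumerate(residuals, 1):
        # log-likelihood ratio increment for a Gaussian mean shift
        llr += (m1 / sigma**2) * (r - m1 / 2.0)
        if llr >= upper:
            return "failed", n
        if llr <= lower:
            return "healthy", n
    return "undecided", len(residuals)

random.seed(2)
healthy = [random.gauss(0.0, 0.5) for _ in range(200)]  # zero-mean residuals
drifted = [random.gauss(1.0, 0.5) for _ in range(200)]  # biased residuals
print(sprt(healthy))
print(sprt(drifted))
```

The appeal of the SPRT here is that it reaches a decision with far fewer samples, on average, than a fixed-sample test at the same error rates.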

  14. Noise diagnosis - a method for early detection of failures in a nuclear plant

    International Nuclear Information System (INIS)

    Brinckmann, H.F.

    1981-01-01

    Noise diagnosis constitutes one method for early detection of plant failures. The method is based on the fact that nearly all undesired processes in a nuclear power plant make a measurable contribution to the noise portion of signals. Well-known examples of undesired processes in pressurized water reactors include core-barrel movement, the vibration of control elements, the appearance of loose parts in the coolant flow, and the process of coolant boiling. Each of these processes has been implicated in past nuclear plant failures. In the German Democratic Republic (GDR) P. Liewers and his colleagues have introduced noise analysis systems into the primary circuit of WWER-440 pressurized water reactors (PWR). The most progressive version (RAS-II) has become a prototype for research and routine investigations. This system is described. (author)

  15. Clad failure detection in G 3 - operational feedback; Detection de rupture de gaines G 3 - experience d'exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Plisson, J [CEA Marcoule, Centre de Production de Plutonium, 30 (France)

    1964-07-01

    After briefly reviewing the role and the principles of clad failure detection, the author describes the working conditions and the conclusions reached after 4 years of operation of this installation on the reactor G 3. He also mentions the modifications made to the original installation, as well as the tests carried out and the experiments under way. (author)

  16. Dependability analysis of proposed I and C architecture for safety systems of a large PWR

    International Nuclear Information System (INIS)

    Kabra, Ashutosh; Karmakar, G.; Tiwari, A.P.; Manoj Kumar; Marathe, P.P.

    2014-01-01

    Instrumentation and Control (I and C) systems in a reactor provide protection against unsafe operation during steady-state and transient power operations. Indian reactors traditionally adopted 2-out-of-3 (2oo3) architecture for safety systems, but contemporary reactor safety systems are employing 2-out-of-4 (2oo4) architecture in spite of the increased cost of the additional channel. This motivated a comparative study of the 2oo3 and 2oo4 architectures, especially for their dependability attributes - safety and availability. Quantitative estimation of safety and availability has been used to judge the merit of adopting the 2oo4 architecture in I and C safety systems of a large PWR. Our analysis using a Markov model shows that the 2oo4 architecture, even with lower diagnostic coverage and a longer proof-test interval, can provide better safety and availability than the 2oo3 architecture. This reduces the total life-cycle cost of the system during the development phase, and the complexity and frequency of surveillance tests during the operational phase. The paper also describes the proposed architecture for the Reactor Protection System (RPS), a representative safety system, and determines its dependability using Markov analysis and Failure Mode Effect Analysis (FMEA). The proposed I and C safety system architecture has also been qualitatively analyzed for its effectiveness against common cause failures (CCFs). (author)
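As a first-order illustration of the 2oo3 versus 2oo4 comparison (the paper's Markov model additionally accounts for diagnostic coverage, proof-test intervals and CCF), the probability that a k-out-of-n voted system is functional with independent channel availability p is a simple binomial sum:

```python
from math import comb

def k_of_n(k, n, p):
    """Probability that at least k of n independent channels (each up with
    probability p) are functioning."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.999                       # illustrative per-channel availability
r_2oo3 = k_of_n(2, 3, p)        # = 3p^2 - 2p^3
r_2oo4 = k_of_n(2, 4, p)
# 2oo4 carries a spare channel, so for high p it stays above 2oo3:
assert r_2oo4 > r_2oo3
```

This first-order view already shows why the fourth channel buys availability: 2oo4 still functions after two channel losses, while 2oo3 does not.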

  17. Detection of failures of axle-bearings of railway vehicles

    Directory of Open Access Journals (Sweden)

    Bižić Milan B.

    2016-01-01

    Full Text Available The failure of an axle-bearing is one of the most common causes of derailments of railway vehicles, which are usually accompanied by huge material damage and human casualties. Modern railways are working intensively on the development and implementation of appropriate systems for early detection of axle-bearing malfunctions, which are typically manifested by an increase in temperature. The most common approach is based on the use of wayside systems or checkpoints located in certain places along the track. There is also an innovative approach that involves using a system for continuous measurement and on-line monitoring of axle-box temperature. The main aim is to provide early detection of malfunctions of the axle-bearing and prevention of a potential derailment. This paper analyses the existing solutions for the detection of axle-bearing malfunctions, with special emphasis on the working principle and the main advantages and disadvantages. The paper presents the basics of a newly developed wireless measuring system for on-line monitoring of axle-box temperature. The measuring system was tested in real conditions and can be successfully applied to commercial railway vehicles. The main conclusion is that systems for on-line monitoring of axle-bearing temperatures are far more efficient than wayside systems. The obtained results may be important for those who deal with the development, exploitation and maintenance of railway vehicles, as well as with related strategies and regulations.

  18. Post-mortem cardiac diffusion tensor imaging: detection of myocardial infarction and remodeling of myofiber architecture

    Energy Technology Data Exchange (ETDEWEB)

    Winklhofer, Sebastian; Berger, Nicole; Stolzmann, Paul [University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); University of Zurich, Department of Forensic Medicine and Radiology, Institute of Forensic Medicine, Zurich (Switzerland); Stoeck, Christian T.; Kozerke, Sebastian [Institute for Biomedical Engineering University and ETH Zurich, Zurich (Switzerland); Thali, Michael [University of Zurich, Department of Forensic Medicine and Radiology, Institute of Forensic Medicine, Zurich (Switzerland); Manka, Robert [University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Institute for Biomedical Engineering University and ETH Zurich, Zurich (Switzerland); University Hospital Zurich, Clinic for Cardiology, Zurich (Switzerland); Alkadhi, Hatem [University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland)

    2014-11-15

    To investigate the accuracy of post-mortem diffusion tensor imaging (DTI) for the detection of myocardial infarction (MI) and to demonstrate the feasibility of helix angle (HA) calculation to study remodelling of myofibre architecture. Cardiac DTI was performed in 26 deceased subjects prior to autopsy for medicolegal reasons. Fractional anisotropy (FA) and mean diffusivity (MD) were determined. Accuracy was calculated on per-segment (AHA classification), per-territory, and per-patient basis, with pathology as reference standard. HAs were calculated and compared between healthy segments and those with MI. Autopsy demonstrated MI in 61/440 segments (13.9 %) in 12/26 deceased subjects. Healthy myocardial segments had significantly higher FA (p < 0.01) and lower MD (p < 0.001) compared to segments with MI. Multivariate logistic regression demonstrated that FA (p < 0.10) and MD (p = 0.01) with the covariate post-mortem time (p < 0.01) predicted MI with an accuracy of 0.73. Analysis of HA distribution demonstrated remodelling of myofibre architecture, with significant differences between healthy segments and segments with chronic (p < 0.001) but not with acute MI (p > 0.05). Post-mortem cardiac DTI enables differentiation between healthy and infarcted myocardial segments by means of FA and MD. HA assessment allows for the demonstration of remodelling of myofibre architecture following chronic MI. (orig.)

  19. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Science.gov (United States)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Described here is the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.

  20. Plan of studies on fuel failure detection in Rikkyo Research Reactor

    International Nuclear Information System (INIS)

    Matsuura, T.; Nagahara, T.; Hattori, M.; Kawaguchi, K.

    1987-01-01

    Studies on fuel failure detection in Rikkyo Research Reactor have recently been begun in the following four approaches. (1) Accumulation of the data on the concentration of the short-lived radioactivity originating from FP rare gases contained in the air on the water surface of the reactor tank. (2) Accumulation of the data on the concentration of FP (especially 131 I) in the water of the reactor tank. (3) Design and preparation of a ''sniffer'' by which the location of the failed fuel element can be detected, when some anomaly is found in the above two routine measurements. (4) Design and preparation of a vessel containing a fuel element, which can be useful both for ''sipping'' inspection of the fuel element and for storage of the damaged fuel element. In this paper, an outline of the above approaches and the results of some preliminary experiments are reported. (author)

  1. Characterising the loading direction sensitivity of 3D woven composites: Effect of z-binder architecture

    KAUST Repository

    Saleh, Mohamed Nasr

    2016-08-29

    Three different architectures of 3D carbon fibre woven composites (orthogonal, ORT; layer-to-layer, LTL; angle interlock, AI) were tested in quasi-static uniaxial tension. Mechanical tests (tensile in on-axis of warp and weft directions as well as 45 degrees off-axis) were carried out with the aim to study the loading direction sensitivity of these 3D woven composites. The z-binder architecture (the through-thickness reinforcement) has an effect on void content, directional fibre volume fraction, mechanical properties (on-axis and off-axis), failure mechanisms, energy absorption and fibre rotation angle in off-axis tested specimens. Out of all the examined architectures, 3D orthogonal woven composites (ORT) demonstrated a superior behaviour, especially when they were tested in the 45 degrees off-axis direction, indicated by high strain to failure (~23%) and high translaminar energy absorption (~40 MJ/m³). The z-binder yarns in the ORT architecture suppress the localised damage and allow larger fibre rotation during the fibre

  2. Peer-to-peer architectures for exascale computing : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Donald W.

    2010-09-01

    The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10¹⁸ operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10¹⁸ operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10⁶-10⁷ unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these

  3. Predictors of treatment failure and time to detection and switching in HIV-infected Ethiopian children receiving first line anti-retroviral therapy

    Directory of Open Access Journals (Sweden)

    Bacha Tigist

    2012-08-01

Full Text Available Abstract Background The emergence of resistance to the first line antiretroviral therapy (ART) regimen leads to the need for more expensive and less tolerable second line drugs. Hence, it is essential to identify and address factors associated with an increased probability of first line ART regimen failure. The objective of this article is to report on the predictors of first line ART regimen failure, the detection rate of ART regimen failure, and the delay in switching to second line ART drugs. Methods A retrospective cohort study was conducted from 2005 to 2011. All HIV infected children under the age of 15 who took first line ART for at least six months at the four major hospitals of Addis Ababa, Ethiopia were included. Data were collected, entered and analyzed using Epi info/ENA version 3.5.1 and SPSS version 16. The Cox proportional-hazard model was used to assess the predictors of first line ART failure. Results Data of 1186 children were analyzed. Five hundred seventy seven (48.8%) were males, with a mean age of 6.22 (SD = 3.10) years. Of the 167 (14.1%) children who had treatment failure, 70 (5.9%) had only clinical failure, 79 (6.7%) had only immunologic failure, and 18 (1.5%) had both clinical and immunologic failure. Patients who had height for age in the third percentile or less at initiation of ART were found to have a higher probability of ART treatment failure [Adjusted Hazard Ratio (AHR) 3.25; 95% CI, 1.00-10.58]. Patients who were less than three years old [AHR 1.85; 95% CI, 1.24-2.76], had chronic diarrhea after initiation of antiretroviral treatment [AHR 3.44; 95% CI, 1.37-8.62], had an ART drug substitution [AHR 1.70; 95% CI, 1.05-2.73], or had a baseline CD4 count below 50 cells/mm3 [AHR 2.30; 95% CI, 1.28-4.14] were also found to be at higher risk of treatment failure. Of all the 167 first line ART failure cases, only 24 (14.4%) were switched to second line ART, with a mean delay of 24 (SD = 11.67) months. The remaining 143 (85.6%) cases were diagnosed

  4. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    Science.gov (United States)

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.

  5. Nonparametric method for failures detection and localization in the actuating subsystem of aircraft control system

    Science.gov (United States)

    Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

In this paper we design a nonparametric method for failure detection and localization in the aircraft control system that uses only the measurements of the control signals and the aircraft states. It does not require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization problem solution by completely eliminating errors associated with aircraft model uncertainties.

  6. Instrument failure detection of flow measurement in the feedwater system of the Paks Nuclear Power Plant, Hungary

    International Nuclear Information System (INIS)

    Racz, A.

    1990-12-01

The applicability of two different methods for early detection of instrument failures of the flow measurement in feedwater systems is investigated. Both methods are based on the Kalman filtering technique for stochastic processes. The reliability of the model describing the feedwater system is checked by comparing calculated values with measured data. Possible instrument failures are simulated in order to show the capability of the proposed procedures. A practical measurement system arrangement is suggested. (author) 10 refs.; 16 figs.; 4 tabs
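The innovation-monitoring idea behind such Kalman-filter-based instrument surveillance can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the Paks implementation: a scalar Kalman filter over a random-walk process model flags any sample whose normalized innovation exceeds a threshold; the variances `q`, `r`, the threshold, and the fault scenario are all assumed values.

```python
import random

def kalman_failure_monitor(measurements, q=0.01, r=1.0, threshold=3.0):
    """Scalar Kalman filter over a random-walk model; flags samples whose
    normalized innovation exceeds `threshold` standard deviations."""
    x, p = measurements[0], 1.0      # state estimate and its variance
    alarms = []
    for k, z in enumerate(measurements[1:], start=1):
        p += q                       # predict step (random-walk model)
        innov = z - x                # innovation (measurement residual)
        s = p + r                    # innovation variance
        if abs(innov) / s ** 0.5 > threshold:
            alarms.append(k)         # plausible instrument failure
        gain = p / s                 # update step
        x += gain * innov
        p *= 1 - gain
    return alarms

random.seed(0)
data = [10 + random.gauss(0, 0.5) for _ in range(200)]
data[120] += 15.0                    # inject a simulated sensor fault
print(kalman_failure_monitor(data))
```

Because the filter keeps tracking after an alarm, an isolated spike produces a single flagged sample rather than a persistent alarm, which is the behavior one wants for distinguishing outliers from drifts.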

  7. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    Science.gov (United States)

    Haralambous, Michael G.

    2002-01-01

Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, the models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used to determine which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.
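As a rough illustration of the scalar-inequality idea (not the report's actual RWGS equations), the sketch below checks sensed variables of an assumed first-order plant, dx/dt = -a·x + b·u, against its defining differential equation; a residual above a tolerance indicates a sensor failure. The plant parameters, sampling step, and bias-fault scenario are invented for the example.

```python
def detect_sensor_fault(u_seq, y_seq, a=1.0, b=2.0, dt=0.01, tol=0.2):
    """Flag samples where the sensed output violates the scalar model
    constraint |dy/dt + a*y - b*u| <= tol (assumed first-order plant)."""
    flagged = []
    for k in range(1, len(y_seq)):
        dy = (y_seq[k] - y_seq[k - 1]) / dt   # finite-difference derivative
        if abs(dy + a * y_seq[k - 1] - b * u_seq[k - 1]) > tol:
            flagged.append(k)
    return flagged

# simulate a healthy first-order plant, then add a sensor bias fault
dt, a, b = 0.01, 1.0, 2.0
u = [1.0] * 1000
x = [0.0]
for k in range(999):
    x.append(x[-1] + dt * (-a * x[-1] + b * u[k]))   # Euler integration
y = [xi + (0.5 if k >= 500 else 0.0) for k, xi in enumerate(x)]
print(detect_sensor_fault(u, y)[:3])
```

Note that a constant sensor bias leaves the finite-difference derivative unchanged after the onset sample, so it is the model constraint (not the derivative alone) that keeps every post-fault sample flagged.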

  8. Ultra-Sensitive NT-proBNP Quantification for Early Detection of Risk Factors Leading to Heart Failure

    Directory of Open Access Journals (Sweden)

    Keum-Soo Song

    2017-09-01

Full Text Available Cardiovascular diseases such as acute myocardial infarction and heart failure accounted for the death of 17.5 million people (31% of all global deaths) in 2015. Monitoring the level of circulating N-terminal proBNP (NT-proBNP) is crucial for the detection of people at risk of heart failure. In this article, we describe a novel ultra-sensitive NT-proBNP test (us-NT-proBNP) that allows the quantification of circulating NT-proBNP in 30 min at 25 °C in the linear detection range of 7.0-600 pg/mL. It is the first report on the application of a fluorescence bead-labeled detection antibody, a DNA-guided detection method, and a glass fiber membrane platform for the quantification of NT-proBNP in clinical samples. The limit of blank, limit of detection, and limit of quantification were 2.0 pg/mL, 3.7 pg/mL, and 7 pg/mL, respectively. The coefficient of variation was found to be less than 10% over the entire detection range of 7-600 pg/mL. The test demonstrated specificity for NT-proBNP without interference from bilirubin, intralipid, biotin, and hemoglobin. The serial dilution test for plasma samples containing various NT-proBNP levels showed a linear decrease in concentration with a regression coefficient of 0.980-0.998. These results indicate that the us-NT-proBNP test does not suffer from interference by plasma components in the measurement of NT-proBNP in clinical samples.

  9. Fuel failure detection device

    International Nuclear Information System (INIS)

    Katagiri, Masaki.

    1979-01-01

Purpose: To improve the SN ratio in the detection. Constitution: An improved precipitator method is provided. Scintillation detectors with the same function are provided, one each before and after a gas reservoir, for depositing fission products in the cover gas onto detecting wires. The outputs from the two detectors (the output from the wire not deposited with fission products and the output from the wire after deposition) are compared to eliminate background noises resulting from undecayed nuclides. A subtraction circuit is provided for the elimination. Since the background noises of the detecting wire can thus be measured and corrected on every detection, the SN ratio can be increased. (Ikeda, J.)

  10. Ultrasensitive electrochemical aptasensor based on sandwich architecture for selective label-free detection of colorectal cancer (CT26) cells.

    Science.gov (United States)

    Hashkavayi, Ayemeh Bagheri; Raoof, Jahan Bakhsh; Ojani, Reza; Kavoosian, Saeid

    2017-06-15

Colorectal cancer is one of the most common cancers in the world and has no effective treatment. Therefore, the development of new methods for early diagnosis is urgently required. Biological recognition probes such as synthetic receptors and aptamers are candidate recognition layers for detecting important biomolecules. In this work, an electrochemical aptasensor was developed by fabricating an aptamer-cell-aptamer sandwich architecture on a graphite screen printed electrode (GSPE) surface modified with SBA-15-3-aminopropyltriethoxysilane (SBA-15-pr-NH2) and Au nanoparticles (AuNPs) for the selective, label-free detection of CT26 cancer cells. Upon incubation of the thiolated aptamer with CT26 cells, the electron-transfer resistance of the Fe(CN)6(3-/4-) redox couple increased considerably on the aptasensor surface. The results obtained from cyclic voltammetry and electrochemical impedance spectroscopy studies showed that the fabricated aptasensor can specifically identify CT26 cells in the concentration ranges of 10-1.0×10^5 cells/mL and 1.0×10^5-6.0×10^6 cells/mL, respectively, with a detection limit of 2 cells/mL. Applying the thiol-terminated aptamer (5TR1) as a recognition layer led to a sensor with high affinity for CT26 cancer cells compared to control cancer cells (AGS cells, VERO cells, PC3 cells and SKOV-3 cells). Therefore, a simple, rapid, label-free, inexpensive, sensitive and selective electrochemical aptasensor based on a sandwich architecture was developed for the detection of CT26 cells. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Rolled-up magnetic sensor: nanomembrane architecture for in-flow detection of magnetic objects.

    Science.gov (United States)

    Mönch, Ingolf; Makarov, Denys; Koseva, Radinka; Baraban, Larysa; Karnaushenko, Daniil; Kaiser, Claudia; Arndt, Karl-Friedrich; Schmidt, Oliver G

    2011-09-27

Detection and analysis of magnetic nanoobjects is a crucial task in modern diagnostic and therapeutic techniques applied to medicine and biology. Accomplishing this task calls for the development and implementation of electronic elements directly in fluidic channels, which still remains an open and nontrivial issue. Here, we present a novel concept based on rolled-up nanotechnology for the fabrication of multifunctional devices, which can be straightforwardly integrated into existing fluidic architectures. We apply strain engineering to roll up a functional nanomembrane consisting of a magnetic sensor element based on [Py/Cu](30) multilayers, revealing giant magnetoresistance (GMR). The sensor's characteristics before and after the roll-up process are found to be similar, allowing for a reliable and predictable method of fabricating high-quality ultracompact GMR devices. The performance of the rolled-up magnetic sensor was optimized to achieve high sensitivity to weak magnetic fields. We demonstrate that the rolled-up tube itself can be efficiently used as a fluidic channel, while the integrated magnetic sensor provides an important functionality to detect and respond to a magnetic field. The performance of the rolled-up magnetic sensor for the in-flow detection of ferromagnetic CrO(2) nanoparticles embedded in a biocompatible polymeric hydrogel shell is highlighted. © 2011 American Chemical Society

  12. Avoidance Behavior against Positive Allergens Detected with a Multiple Allergen Simultaneous Test Immunoblot Assay in Patients with Urticaria: Factors Associated with Avoidance Success/Failure.

    Science.gov (United States)

    Lee, Min Kyung; Kwon, In Ho; Kim, Han Su; Kim, Heung Yeol; Cho, Eun Byul; Bae, Youin; Park, Gyeong Hun; Park, Eun Joo; Kim, Kwang Ho; Kim, Kwang Joong

    2016-02-01

Avoidance behavior against positive allergens detected by using the multiple allergen simultaneous test (MAST)-immunoblot assay in patients with urticaria has rarely been reported. We aimed to assess the avoidance behavior of patients with urticaria against positive allergens detected with a MAST. One hundred and one urticaria patients who showed positivity to at least one allergen on a MAST completed a questionnaire regarding their test results. The avoidance behavior of the patients was evaluated, and relevant determining factors of avoidance success/failure were statistically assessed. We detected 144 different data (n=51, food allergens; n=17, pollen allergens; and n=76, aeroallergens) from 101 patients with urticaria. The avoidance failure rates were 33.3% for food allergens, 70.6% for pollen allergens, and 30.3% for aeroallergens. The pollen group showed a significantly higher avoidance failure rate than the food and aeroallergen groups. Certain patient factors were significantly associated with the ability to successfully avoid allergens. These factors associated with avoidance success or failure against allergens in patients with urticaria should be considered when clinicians conduct allergen-specific immunoglobulin E tests.

  13. Feature recognition and detection for ancient architecture based on machine vision

    Science.gov (United States)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

Ancient architecture has a very high historical and artistic value. Ancient buildings feature a wide variety of textures and decorative paintings, which carry a great deal of historical meaning. Therefore, the research and statistical work on these different compositional and decorative features plays an important role in subsequent research. Until recently, however, the cataloguing of those components has mainly been done manually, which is inefficient and consumes a great deal of labor and time. At present, supported by big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface-texture images of ancient building components are manually classified to form a sample set. Then, a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.

  14. Rogue AP Detection in the Wireless LAN for Large Scale Deployment

    Directory of Open Access Journals (Sweden)

    Sang-Eon Kim

    2006-10-01

Full Text Available The wireless LAN standard, also known as WiFi, has begun to be used for commercial purposes. This paper describes an access network architecture of wireless LAN for large scale deployment to provide public service. A metro Ethernet and digital subscriber line access network can be used for a wireless LAN with access points. In this network architecture, the access point acts as the interface between wireless nodes and the network infrastructure. It is important to keep access points operating without failures or problems for public users. This paper proposes a definition of rogue access points and classifies them based on the functional problems they cause for Internet access. After that, a rogue access point detection scheme based on this classification over the wireless LAN is described. The rogue access point detector can greatly improve network availability for the network service provider of a wireless LAN.
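The classification step of such a detector can be sketched very simply, under the assumption that the operator maintains a whitelist of authorized access points keyed by BSSID. All identifiers, SSIDs, and the two classification categories below are made up for the illustration and are not from the paper.

```python
# hypothetical whitelist of authorized APs, keyed by BSSID (MAC address)
AUTHORIZED = {
    "00:1a:2b:3c:4d:5e": {"ssid": "metro-wifi", "channel": 6},
    "00:1a:2b:3c:4d:5f": {"ssid": "metro-wifi", "channel": 11},
}

def classify_ap(bssid, ssid, channel):
    """Classify an observed access point against the whitelist."""
    entry = AUTHORIZED.get(bssid)
    if entry is None:
        return "rogue"                 # unknown hardware on the network
    if entry["ssid"] != ssid or entry["channel"] != channel:
        return "misconfigured"         # known AP with a functional problem
    return "authorized"

# an AP advertising the public SSID from unknown hardware is flagged
print(classify_ap("de:ad:be:ef:00:01", "metro-wifi", 6))   # rogue
```

In a deployment, the observed (bssid, ssid, channel) tuples would come from periodic scans reported over the wired management network, so a spoofed SSID on foreign hardware is caught even though it looks legitimate over the air.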

  15. Architecture of absurd (forms, positions, apposition

    Directory of Open Access Journals (Sweden)

    Fedorov Viktor Vladimirovich

    2014-04-01

Full Text Available In everyday life we constantly face absurd things, which seem to lack common sense. The notion of the absurd acts as: a) an aesthetic category; b) an element of logic; c) a metaphysical phenomenon. It can be overcome through an understanding of the situation, faith in the existence of sense, and hope of grasping it. The architecture of the absurd should be considered a loss of sense in a part of the architectural landscape (urban environment). The ways the architecture of the absurd is organized include exaggerated forms and proportions and the unnatural position and apposition of various objects. These are usually small-scale facilities that have local spatial and temporary value. There are no large absurd architectural spaces, as the natural architectural environment dampens perturbations of the sense-sphere. The architecture of the absurd is considered a «pathology» of the environment. «Nonsense» objects, together with the hope (or even faith) of detecting sense, generate the fruitful paradox of the presence of the architecture of the absurd in the world.

  16. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    Science.gov (United States)

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies include smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraints and failure sensitivity, since without safety considerations small integrated hardware will endanger patients' lives. Therefore, some principles are required for constructing wearable systems in healthcare so that the existing concerns are dealt with. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud. The tiers of this architecture are wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is the high degree of fault tolerance made possible by the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed.
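One common building block for this kind of tiered fault tolerance is heartbeat-based failover between components. The sketch below is a hypothetical illustration, not the paper's protocols: a component counts as live while its heartbeat timestamp is fresh, and a backup takes over otherwise. The component names and timeout are invented.

```python
import time

class Module:
    """A monitored component that refreshes a heartbeat timestamp."""
    def __init__(self, name):
        self.name = name
        self.last_seen = time.monotonic()

    def heartbeat(self):
        self.last_seen = time.monotonic()

def active_module(primary, backup, timeout=1.0):
    """Return the primary while its heartbeat is fresh; otherwise fail
    over to the backup."""
    if time.monotonic() - primary.last_seen <= timeout:
        return primary
    return backup

sensor = Module("wearable-ecg")      # tier 1: wearable computer
relay = Module("phone-relay")        # tier 2: mobile computing
print(active_module(sensor, relay).name)   # primary is live
sensor.last_seen -= 5.0              # simulate a crashed wearable node
print(active_module(sensor, relay).name)   # failover to backup
```

Using `time.monotonic()` rather than wall-clock time matters here: heartbeat ages must be immune to clock adjustments on the device.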

  17. Shoreline Erosion and Slope Failure Detection over Southwest Lakeshore Michigan using Temporal Radar and Digital Elevation Model

    Science.gov (United States)

    Sataer, G.; Sultan, M.; Yellich, J. A.; Becker, R.; Emil, M. K.; Palaseanu, M.

    2017-12-01

Throughout the 20th century and into the 21st century, significant losses of residential, commercial and governmental property were reported along the shores of the Great Lakes region due to one or more of the following factors: high lake levels, wave action, groundwater discharge. A collaborative effort (Western Michigan University, University of Toledo, Michigan Geological Survey [MGS], United States Geological Survey [USGS], National Oceanographic and Atmospheric Administration [NOAA]) is underway to examine the temporal topographic variations along the shoreline and the adjacent bluff extending from the City of South Haven in the south to the City of Saugatuck in the north within Allegan County. Our objectives include two main tasks: (1) identification of the timing of, and the areas witnessing, slope failure and shoreline erosion, and (2) investigation of the factors causing the observed failures and erosion. This is being accomplished over the study area by: (1) detecting and measuring slope subsidence rates (velocities along line of sight) and failures using radar interferometric persistent scatterer (PS) techniques applied to data from ESA's European Remote Sensing (ERS) satellites ERS-1 and -2 (spatial resolution: 25 m), acquired from 1995 to 2007, (2) extracting temporal high resolution (20 cm) digital elevation models (DEM) for the study area from temporal imagery acquired by Unmanned Aerial Vehicles (UAVs), and applying change detection techniques to the extracted DEMs, (3) detecting change in elevation and slope profiles extracted from two LIDAR Coastal National Elevation Database (CoNED) DEMs (spatial resolution: 0.5 m), acquired in 2008 and 2012, and (4) spatial and temporal correlation of the detected changes in elevation with relevant data sets (e.g., lake levels, precipitation, groundwater levels) in search of causal effects.

  18. Using recurrent neural network models for early detection of heart failure onset.

    Science.gov (United States)

    Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng

    2017-03-01

    We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
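The gated recurrent unit at the heart of such a model can be written compactly. The sketch below is a generic, framework-free GRU step applied to a sequence of event embeddings; the dimensions, random initialization, and inputs are all invented for the illustration, and this is not the paper's trained model.

```python
import numpy as np

def gru_cell(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state,
    then a convex combination of old and candidate hidden state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sig(x @ Wz + h @ Uz)              # update gate
    r = sig(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 8, 16                         # event embedding -> hidden state
params = [rng.normal(0, 0.1, s) for s in [(d_in, d_h), (d_h, d_h)] * 3]

h = np.zeros(d_h)
for _ in range(12):                       # 12 time-stamped visit embeddings
    h = gru_cell(rng.normal(size=d_in), h, params)
print(h.shape)
```

The final hidden state `h` summarizes the whole event sequence and would feed a logistic output layer for the HF/no-HF prediction; because each step is a convex combination of the previous state and a tanh-bounded candidate, the state stays bounded over long observation windows.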

  19. A method for reduction of Acoustic Emission (AE) data with application in machine failure detection and diagnosis

    Science.gov (United States)

    Vicuña, Cristián Molina; Höweler, Christoph

    2017-12-01

The use of AE in machine failure diagnosis has increased over the last years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of gigabytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is irrecoverably lost. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelops the AE bursts present in the raw AE signal in a triangular shape. The constructed signal, which we call TriSignal, also permits the estimation of most scalar quantities typically used for failure detection. More importantly, it contains the information on the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples in a planetary gearbox and a low-speed rolling element bearing.
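For reference, the two scalar quantities mentioned above are straightforward to compute. The sketch below, on a synthetic Gaussian signal with one injected burst, shows why kurtosis is sensitive to impulsive AE events while the RMS barely moves; all numbers are illustrative and unrelated to the paper's data.

```python
import math
import random

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    """Standardized fourth moment: about 3 for Gaussian noise, and much
    higher when impulsive bursts are present."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / var ** 2

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(10000)]
burst = noise[:]
burst[5000] += 25.0                  # one simulated AE burst

print(rms(noise), rms(burst))        # nearly identical
print(kurtosis(noise), kurtosis(burst))
```

A single burst in 10,000 samples shifts the RMS by only a few percent but multiplies the kurtosis roughly tenfold, which is exactly why kurtosis trending is popular for burst-type failure detection, and also why, as the abstract notes, such scalars discard the burst timing that the TriSignal retains.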

  20. Clinical significance of detection of plasma natriuretic peptide in the diagnosis of patients with heart failure

    International Nuclear Information System (INIS)

    Song Chunli; Liu Haihong; Zhao Ning; Li Jie; Huang Jianmin

    2009-01-01

To explore the clinical significance of plasma natriuretic peptides in the diagnosis of patients with heart failure (HF), the plasma atrial natriuretic peptide (ANP), brain natriuretic peptide (BNP), and NT-pro brain natriuretic peptide (NT-proBNP) levels in 129 patients with heart failure and 30 healthy controls were detected by RIA and ELISA. The results showed that the plasma ANP, BNP, and NT-proBNP levels in patients with heart failure were significantly higher than in the healthy controls. As cardiac function deteriorated from NYHA I to IV, the BNP and NT-proBNP levels increased consecutively, with significant differences from each other. There was a negative correlation between the plasma ANP and NT-proBNP levels and LVEF. The determination of plasma ANP, BNP and NT-proBNP levels in patients with HF was helpful for assessing the severity and diagnosis of the disease. (authors)

  1. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    Energy Technology Data Exchange (ETDEWEB)

    Garza, J. [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States); Millwater, H., E-mail: harry.millwater@utsa.edu [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States)

    2012-04-15

A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  2. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2012-01-01

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ►The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.
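The idea of reusing one set of Monte Carlo samples to study how the POF responds to POD-curve parameters can be illustrated with a toy model. Everything below is an assumed example (the log-logistic POD form, the lognormal flaw-size distribution, the critical size), and the sensitivity is taken here by a finite difference over common random numbers rather than by the paper's derived sampling estimator.

```python
import math
import random

def pod(a, a50=2.0, slope=4.0):
    """Log-logistic probability-of-detection curve (assumed form):
    a50 is the flaw size detected with 50% probability."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log(a) - math.log(a50))))

def pof(samples, a_crit=3.0, **pod_params):
    """POF = P(flaw exceeds a_crit and was missed at inspection),
    estimated from a fixed set of flaw-size samples."""
    miss = [(a > a_crit) * (1.0 - pod(a, **pod_params)) for a in samples]
    return sum(miss) / len(miss)

random.seed(0)
flaws = [random.lognormvariate(0.5, 0.6) for _ in range(100000)]

# finite-difference sensitivity wrt the POD midpoint a50,
# reusing the same samples (common random numbers)
base = pof(flaws, a50=2.0)
bump = pof(flaws, a50=2.2)
print(base, (bump - base) / 0.2)
```

Because both POF evaluations reuse the same `flaws` array, the Monte Carlo noise largely cancels in the difference; the paper's point is that a score-function formulation removes even the second evaluation, giving the sensitivity from the original samples alone.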

  3. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

The paper investigates the topic of green architecture from an architectural point of view and not an energy point of view. The purpose of the paper is to establish a debate about the architectural language and spatial characteristics of green architecture. In this light, green becomes an adjective that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from non-green architecture. Currently, adding trees and vegetation to the building’s facade is the main architectural characteristic, and such gestures have overshadowed the architectural potential of green architecture. The paper questions how a green space should perform, look and function. Two examples are chosen to demonstrate thorough integrations between green and space. The examples are public buildings categorized as pavilions.

  4. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand the task of preparing students for present-day realities. This disciplinary knowledge is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity: a reservoir of spatial configurations. This suggests a correlation between the study of existing architectures and the training of competences to design for present-day realities.

  5. Aerobot Autonomy Architecture

    Science.gov (United States)

    Elfes, Alberto; Hall, Jeffery L.; Kulczycki, Eric A.; Cameron, Jonathan M.; Morfopoulos, Arin C.; Clouse, Daniel S.; Montgomery, James F.; Ansar, Adnan I.; Machuzak, Richard J.

    2009-01-01

    An architecture for autonomous operation of an aerobot (i.e., a robotic blimp) to be used in scientific exploration of planets and moons in the Solar system with an atmosphere (such as Titan and Venus) is undergoing development. This architecture is also applicable to autonomous airships that could be flown in the terrestrial atmosphere for scientific exploration, military reconnaissance and surveillance, and as radio-communication relay stations in disaster areas. The architecture was conceived to satisfy requirements to perform the following functions: a) Vehicle safing, that is, ensuring the integrity of the aerobot during its entire mission, including during extended communication blackouts. b) Accurate and robust autonomous flight control during operation in diverse modes, including launch, deployment of scientific instruments, long traverses, hovering or station-keeping, and maneuvers for touch-and-go surface sampling. c) Mapping and self-localization in the absence of a global positioning system. d) Advanced recognition of hazards and targets in conjunction with tracking of, and visual servoing toward, targets, all to enable the aerobot to detect and avoid atmospheric and topographic hazards and to identify, home in on, and hover over predefined terrain features or other targets of scientific interest. The architecture is an integrated combination of systems for accurate and robust vehicle and flight trajectory control; estimation of the state of the aerobot; perception-based detection and avoidance of hazards; monitoring of the integrity and functionality ("health") of the aerobot; reflexive safing actions; multi-modal localization and mapping; autonomous planning and execution of scientific observations; and long-range planning and monitoring of the mission of the aerobot. The prototype JPL aerobot (see figure) has been tested extensively in various areas in the California Mojave desert.

  6. A Novel Thermal-Mechanical Detection System for Reactor Pressure Vessel Bottom Failure Monitoring in Severe Accidents

    International Nuclear Information System (INIS)

    Bi, Daowei; Bu, Jiangtao; Xu, Dongling

    2013-06-01

    Following the Fukushima Daiichi nuclear accident in Japan, there is an increased need for enhanced capabilities in severe accident management (SAM) programs. Among others, a reliable method for detecting reactor pressure vessel (RPV) bottom failure has been judged imperative by many utility owners. Though radiation and/or temperature measurements are the traditional candidate solutions, they face limitations in functioning reliably in a severe accident such as that in Japan. To provide reliable information for assessing accident progression in a SAM program, in this paper we propose a novel thermal-mechanical detection system (TMDS) for RPV bottom failure monitoring in severe accidents. The main components of the TMDS are a thermally sensitive element, metallic cables, a tension-controlled switch, and a main control room annunciation device. With the TMDS installed, there is a reliable means of keeping SAM decision-makers informed of whether the RPV bottom has indeed failed. Such assurance enhances severe accident management performance, significantly improves nuclear safety, and thus protects society and the public. (authors)

  7. Signal detection theory, the exclusion failure paradigm and weak consciousness--evidence for the access/phenomenal distinction?

    Science.gov (United States)

    Irvine, Elizabeth

    2009-06-01

    Block [Block, N. (2005). Two neural correlates of consciousness. Trends in Cognitive Science, 9, 46-52] and Snodgrass (2006) claim that a signal detection theory (SDT) analysis of qualitative difference paradigms, in particular the exclusion failure paradigm, reveals cases of phenomenal consciousness without access consciousness. This claim is unwarranted on several grounds. First, partial cognitive access rather than a total lack of cognitive access can account for exclusion failure results. Second, Snodgrass's Objective Threshold/Strategic (OT/S) model of perception relies on a problematic 'enable' approach to perception that denies the possibility of intentional control of unconscious perception and any effect of following different task instructions on the presence/absence of phenomenal consciousness. Many of Block's purported examples of phenomenal consciousness without cognitive access also rely on this problematic approach. Third, qualitative difference paradigms may index only a subset of access consciousness. Thus, qualitative difference paradigms like exclusion failure cannot be used to isolate phenomenal consciousness, any attempt to do so still faces serious methodological problems.

  8. Fault Tolerant Control Architecture Design for Mobile Manipulation in Scientific Facilities

    Directory of Open Access Journals (Sweden)

    Mohammad M. Aref

    2015-01-01

    Full Text Available This paper describes one of the challenging issues that scientific infrastructures impose on a mobile robot cognition architecture. Toward a generally applicable cognition architecture, we study the dependencies and logical relations between several tasks and subsystems. The overall view of the software modules is described, including their relationship with a fault management module that monitors the consistency of the data flow among the modules. The fault management module is the deliberative architecture's answer to single-point failures, while the safety anchor provides a reactive response to faults through redundant equipment. In addition, a hardware architecture is proposed to ensure safe robot movement as a redundancy for the robot's cognition. The method is designed for a four-wheel steerable (4WS) mobile manipulator (iMoro) as a case study.
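
    The fault management module's data-flow monitoring can be sketched as a minimal heartbeat check. The module names, timeout value, and API below are hypothetical, invented for illustration rather than taken from the iMoro software:

```python
class FaultManager:
    """Toy monitor: each software module stamps its output; the manager flags
    modules whose data flow has gone stale (a single-point-failure symptom)."""
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def report(self, module, timestamp):
        """Called by a module every time it publishes data."""
        self.last_seen[module] = timestamp

    def stale_modules(self, now):
        """Modules whose most recent data is older than the timeout."""
        return sorted(m for m, t in self.last_seen.items()
                      if now - t > self.timeout_s)

fm = FaultManager(timeout_s=0.5)
fm.report("localization", 10.0)
fm.report("path_planner", 10.4)
faults = fm.stale_modules(now=10.6)   # localization data is 0.6 s old -> stale
```

    A real deliberative layer would react to the flagged module (restart it, switch to a degraded mode, or hand control to the safety anchor); the sketch only shows the detection side.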

  9. Unavailability of repairable components with failures detectable upon demand: Remarks on a work of Caldarola

    International Nuclear Information System (INIS)

    Souza Borges, W. de; Silva Pagetti, P. da

    1987-01-01

    In this paper an exact expression has been obtained for the asymptotic mean unavailability, in the time domain, of components whose failures are detected upon demand. The model is more general than those proposed in the literature, since it allows general distributions for component lifetimes, repair times and inter-demand times. Expressions for the special case of exponential lifetimes have also been derived. (orig.)
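
    For orientation, a standard textbook special case (exponential lifetimes with rate $\lambda$, periodic demands every $\tau$, negligible repair time; this is not the more general exact expression derived in the paper) gives the asymptotic mean unavailability as

```latex
\bar{q} \;=\; 1 - \frac{1 - e^{-\lambda\tau}}{\lambda\tau}
\;\approx\; \frac{\lambda\tau}{2} \qquad (\lambda\tau \ll 1),
```

    since a failure arising between demands remains undetected, on average, for about half the inter-demand interval.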

  10. Temperature noise analysis and sodium boiling detection in the fuel failure mockup

    International Nuclear Information System (INIS)

    Sides, W.H. Jr.; Fry, D.N.; Leavell, W.H.; Mathis, M.V.; Saxe, R.F.

    1976-01-01

    Sodium temperature noise was measured at the exit of simulated fast-reactor fuel subassemblies in the Fuel Failure Mockup (FFM) to determine the feasibility of using temperature noise monitors to detect flow blockages in fast reactors. Also, acoustic noise was measured to determine whether sodium boiling in the FFM could be detected acoustically and whether noncondensable gas entrained in the sodium coolant would affect the sensitivity of the acoustic noise detection system. Information from these studies would be applied to the design of safety systems for operating liquid-metal fast breeder reactors (LMFBRs). It was determined that the statistical properties of temperature noise are dependent on the shape of temperature profiles across the subassemblies, and that a blockage upstream of a thermocouple that increases the gradient of the profile near the blockage will also increase the temperature noise at the thermocouple. Amplitude probability analysis of temperature noise shows a skewed amplitude density function about the mean temperature that varies with the location of the thermocouple with respect to the blockage location. It was concluded that sodium boiling in the FFM could be detected acoustically. However, entrained noncondensable gas in the sodium coolant at void fractions greater than 0.4 percent attenuated the acoustic signals sufficiently that boiling was not detected. At a void fraction of 0.1 percent, boiling was indicated only by the two acoustic detectors closest to the boiling site.
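
    The amplitude probability analysis described above amounts to estimating the skewness of the temperature signal about its mean. A minimal sketch with synthetic data (the temperatures, noise levels, and the half-normal model of a blockage-side thermocouple are all invented for illustration):

```python
import random

def skewness(samples):
    """Sample skewness: third central moment over the cubed standard deviation."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m3 = sum((x - mean) ** 3 for x in samples) / n
    return m3 / m2 ** 1.5

rng = random.Random(0)
# Symmetric noise: a thermocouple far from any blockage
symmetric = [rng.gauss(400.0, 2.0) for _ in range(20_000)]
# One-sided excursions: occasional hot streaks past a partial blockage
skewed = [400.0 + abs(rng.gauss(0.0, 2.0)) for _ in range(20_000)]

s_sym = skewness(symmetric)   # near zero
s_skw = skewness(skewed)      # clearly positive
```

    A monitor built on this idea would track the sign and magnitude of the skewness per thermocouple rather than the mean temperature alone.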

  11. Analysis of Damage in Laminated Architectural Glazing Subjected to Wind Loading and Windborne Debris Impact

    Directory of Open Access Journals (Sweden)

    Daniel S. Stutts

    2013-05-01

    Full Text Available Wind loading and windborne debris (missile) impact are the two primary mechanisms that result in window glazing damage during hurricanes. Windborne debris is categorized into two types: small hard missiles, such as roof gravel, and large soft missiles representing lumber from wood-framed buildings. Laminated architectural glazing (LAG) may be used in buildings where impact resistance is needed. The glass plies in LAG undergo internal damage before total failure. The bulk of the published work on this topic deals either with the stress and dynamic analyses of undamaged LAG or with the total failure of LAG. Here, the pre-failure damage response of LAG due to the combination of wind loading and windborne debris impact is studied. A continuum damage mechanics (CDM) based constitutive model is developed and implemented via an axisymmetric finite element code to study the failure and damage behavior of laminated architectural glazing subjected to the combined loading of wind and windborne debris impact. The effect of geometric and material properties on the damage pattern is studied parametrically.
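
    A CDM-based constitutive model of this kind typically scales the stress by (1 - D) for a scalar damage variable D. A minimal one-dimensional sketch, with an invented linear damage-evolution law and illustrative threshold and failure strains (not the parameters of the paper's model):

```python
def damage(strain, eps0=0.002, eps_f=0.01):
    """Scalar damage D in [0, 1]: zero below the threshold strain eps0,
    growing linearly to full damage at the failure strain eps_f."""
    if strain <= eps0:
        return 0.0
    if strain >= eps_f:
        return 1.0
    return (strain - eps0) / (eps_f - eps0)

def stress(strain, E=70e9):
    """Effective stress of the damaged ply: (1 - D) * E * strain (Pa)."""
    return (1.0 - damage(strain)) * E * strain

undamaged = stress(0.001)   # below threshold: linear elastic response
softening = stress(0.006)   # partially damaged: D = 0.5 here
failed = stress(0.02)       # fully damaged: carries no load
```

    In the finite element setting, D is evaluated at each integration point and produces the pre-failure damage pattern the abstract refers to.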

  12. A new efficient algorithmic-based SEU tolerant system architecture

    International Nuclear Information System (INIS)

    Blaquiere, Y.; Gagne, G.; Savaria, Y.; Evequoz, C.

    1995-01-01

    A new ABFT architecture is proposed to tolerate multiple SEUs with low overheads. This architecture memorizes operands on a stack upon error detection and corrects errors by recomputing. This allows an uninterrupted input data stream to be processed without data loss.
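
    The memorize-and-recompute scheme can be sketched as follows. The residue check (casting out nines) and the bit-flip fault model are stand-ins chosen for illustration; the point is only that operands stay on a stack until the result passes the check, so a detected SEU triggers recomputation without stalling or losing the input stream:

```python
def faulty_add(a, b, fault=False):
    """Toy ALU: returns a corrupted sum when an SEU (fault) strikes."""
    result = a + b
    return result ^ 0b100 if fault else result   # flip one bit on fault

def checked_add(a, b, faults):
    """Push operands, compute, verify a residue check; recompute on error."""
    stack = [(a, b)]                 # operands memorized before the operation
    attempt = 0
    while True:
        x, y = stack[-1]
        r = faulty_add(x, y, fault=faults(attempt))
        if r % 9 == (x % 9 + y % 9) % 9:   # casting-out-nines residue check
            stack.pop()              # success: operands no longer needed
            return r
        attempt += 1                 # detected SEU: recompute from the stack

# First attempt is hit by an SEU, the second is clean
value = checked_add(1200, 34, faults=lambda k: k == 0)
```

    A residue check of this kind misses errors that happen to preserve the residue; a real ABFT design chooses checks matched to the arithmetic it protects.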

  13. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    Science.gov (United States)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of the clusters and is able to directly control and submit jobs to it even if the connection to the other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented as well.
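
    The intended degradation behaviour, in which each scheduler serves the whole grid while its peers are reachable but can still drive its own cluster alone, can be sketched roughly as follows (hypothetical classes and policies, not MetaCentrum or Torque code):

```python
class Scheduler:
    """Toy peer-to-peer scheduler: submits to the least-loaded reachable
    cluster, falling back to its local cluster when all peers are lost."""
    def __init__(self, name):
        self.name = name
        self.peers = []     # other schedulers, one per site/cluster
        self.online = True
        self.queue = []

    def load(self):
        return len(self.queue)

    def submit(self, job):
        # Prefer reachable peers; the local scheduler is always available.
        reachable = [p for p in self.peers if p.online] + [self]
        target = min(reachable, key=lambda s: s.load())
        target.queue.append(job)
        return target.name

site_a, site_b = Scheduler("A"), Scheduler("B")
site_a.peers, site_b.peers = [site_b], [site_a]

first = site_a.submit("job-1")      # goes to the idle peer B
site_b.online = False               # network partition: B unreachable
second = site_a.submit("job-2")     # falls back to the local cluster A
```

    Global policies such as fair-share or cross-site accounting would sit on top of this cooperation layer and simply degrade to local decisions during a partition.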

  14. Instrumentation Standard Architectures for Future High Availability Control Systems

    International Nuclear Information System (INIS)

    Larsen, R.S.

    2005-01-01

    Architectures for next-generation modular instrumentation standards should aim to meet a requirement of High Availability, or robustness against system failure. This is particularly important for experiments both large and small mounted on production accelerators and light sources. New standards should be based on architectures that (1) are modular in both hardware and software for ease in repair and upgrade; (2) include inherent redundancy at internal module, module assembly and system levels; (3) include modern high speed serial inter-module communications with robust noise-immune protocols; and (4) include highly intelligent diagnostics and board-management subsystems that can predict impending failure and invoke evasive strategies. The simple design principles lead to fail-soft systems that can be applied to any type of electronics system, from modular instruments to large power supplies to pulsed power modulators to entire accelerator systems. The existing standards in use are briefly reviewed and compared against a new commercial standard which suggests a powerful model for future laboratory standard developments. The past successes of undertaking such projects through inter-laboratory engineering-physics collaborations will be briefly summarized

  15. Advanced Sensor Platform to Evaluate Manloads For Exploration Suit Architectures

    Science.gov (United States)

    McFarland, Shane; Pierce, Gregory

    2016-01-01

    Space suit manloads are defined as the outer bounds of force that the human occupant of a suit is able to exert onto the suit during motion. They are defined on a suit-component basis as a unit of maximum force that the suit component in question must withstand without failure. Existing legacy manloads requirements are specific to the suit architecture of the EMU and were developed in an iterative fashion; however, future exploration needs dictate a new suit architecture with bearings, load paths, and entry capability not previously used in any flight suit. No capability currently exists to easily evaluate manloads imparted by a suited occupant, which would be required to develop requirements for a flight-rated design. However, sensor technology has now progressed to the point where an easily deployable, repeatable and flexible manloads measuring technique could be developed, leveraging recent advances in sensor technology. INNOVATION: This development positively impacts schedule, cost and safety risk associated with new suit exploration architectures. For a final flight design, a comprehensive and accurate manloads requirements set must be communicated to the contractor; failing that, a suit design which does not meet necessary manloads limits is prone to failure during testing or, worse, during an EVA, which could cause catastrophic failure of the pressure garment, posing risk to the crew. This work facilitates a viable means of developing manloads requirements using a range of human sizes and strengths. OUTCOME / RESULTS: Performed sensor market research. Highlighted three viable options (primary, secondary, and flexible packaging option). Designed/fabricated custom bracket to evaluate primary option on a single suit axial. Manned suited manloads testing completed and general approach verified.

  16. The effects of fibre architecture on fatigue life-time of composite materials

    DEFF Research Database (Denmark)

    Hansen, Jens Zangenberg; Østergaard, Rasmus

    Wind turbine rotor blades are among the largest composite structures manufactured of fibre reinforced polymer. During the service life of a wind turbine rotor blade, it is subjected to cyclic loading that potentially can lead to material failure, also known as fatigue. With reference to glass fibre reinforced composites used for the main laminate of a wind turbine rotor blade, the problem addressed in the present work is the effect of the fibre and fabric architecture on the fatigue life-time under tension-tension loading. Fatigue of composite materials has been a central research topic for the last … and analyses identify and explain the onset of tension fatigue failure. It is documented that improvements of the fibre architecture and specimen design are needed in order to provide the next generation of fatigue resistant composite materials for wind turbine rotor blades.

  17. Designing fault-tolerant real-time computer systems with diversified bus architecture for nuclear power plants

    International Nuclear Information System (INIS)

    Behera, Rajendra Prasad; Murali, N.; Satya Murty, S.A.V.

    2014-01-01

    Fault-tolerant real-time computer (FT-RTC) systems are widely used to perform safe operation of nuclear power plants (NPP) and safe shutdown in the event of any untoward situation. Such systems require high reliability, availability, computational ability for measurement via sensors, control action via actuators, data communication, and a human interface via keyboard or display. All these attributes of FT-RTC systems are required to be implemented using best-known methods, such as redundant system design using a diversified bus architecture to avoid common cause failure, fail-safe design to avoid unsafe failure, and diagnostic features to validate system operation. In this context, the system designer must select an efficient as well as highly reliable diversified bus architecture in order to realize a fault-tolerant system design. This paper presents a comparative study between the CompactPCI bus and the Versa Module Eurocard (VME) bus architecture for designing FT-RTC systems with a switch-over logic system (SOLS) for NPP. (author)

  18. Processing of Instantaneous Angular Speed Signal for Detection of a Diesel Engine Failure

    Directory of Open Access Journals (Sweden)

    Adam Charchalis

    2013-01-01

    Full Text Available Continuous monitoring of diesel engine performance during operation is critical for predicting malfunction development and subsequently detecting functional failures. Analysis of the instantaneous angular speed (IAS) of the crankshaft is considered one of the nonintrusive and effective methods for detecting combustion quality deterioration. In this paper, results of experimental verification of fuel-system malfunction detection, using an optical encoder for IAS recording, are presented. The implemented method relies on comparing measurement results recorded under healthy and faulty conditions of the engine. An elaborated dynamic model of angular speed variations enables us to build templates of engine behavior. Cylinder pressure values recorded during the experiment were taken for the approximation of the basic pressure waveform. The main task of the data processing is smoothing the raw angular speed signal. The noise is due to sensor mount vibrations, signal emitter machining, engine body vibrations, and crankshaft torsional vibrations. Smoothing of the measurement data was carried out by applying the Savitzky-Golay filter. The measured signal, after smoothing, was compared with the model IAS run.
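
    A Savitzky-Golay filter fits a low-order polynomial over a sliding window; for a quadratic fit on a five-point window the classical convolution weights are (-3, 12, 17, 12, -3)/35. A minimal pure-Python sketch on a synthetic signal (not the engine data) shows the filter's defining property: exact reproduction of polynomial trends up to the fit order.

```python
def savgol5(signal):
    """5-point quadratic Savitzky-Golay smoothing (interior points only)."""
    w = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(signal)                    # keep the edges unfiltered
    for i in range(2, len(signal) - 2):
        out[i] = sum(wk * signal[i + k - 2] for k, wk in enumerate(w)) / 35.0
    return out

# A quadratic IAS-like trend passes through the filter unchanged, which is
# what distinguishes Savitzky-Golay from a plain moving average.
trend = [0.1 * t * t for t in range(10)]
smoothed = savgol5(trend)
```

    On real IAS data the same weights suppress the high-frequency vibration noise while preserving the combustion-induced speed fluctuations that the templates are matched against.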

  19. Detection of Taenia solium taeniasis coproantigen is an early indicator of treatment failure for taeniasis.

    Science.gov (United States)

    Bustos, Javier A; Rodriguez, Silvia; Jimenez, Juan A; Moyano, Luz M; Castillo, Yesenia; Ayvar, Viterbo; Allan, James C; Craig, Philip S; Gonzalez, Armando E; Gilman, Robert H; Tsang, Victor C W; Garcia, Hector H

    2012-04-01

    Taenia solium causes taeniasis and cysticercosis, a zoonotic complex associated with a significant burden of epilepsy in most countries. Reliable diagnosis and efficacious treatment of taeniasis are needed for disease control. Currently, cure can be confirmed only after a period of at least 1 month, by negative stool microscopy. This study assessed the performance of detection by a coproantigen enzyme-linked immunosorbent assay (CoAg-ELISA) for the early evaluation of the efficacy of antiparasitic treatment of human T. solium taeniasis. We followed 69 tapeworm carriers who received niclosamide as standard treatment. Stool samples were collected on days 1, 3, 7, 15, 30, and 90 after treatment and were processed by microscopy and CoAg-ELISA. The efficacy of niclosamide was 77.9% (53/68). Thirteen patients received a second course of treatment and completed the follow-up. CoAg-ELISA was therefore evaluated for a total of 81 cases (68 treatments, 13 retreatments). In successful treatments (n = 64), the proportion of patients who became negative by CoAg-ELISA was 62.5% after 3 days, 89.1% after 7 days, 96.9% after 15 days, and 100% after 30 days. In treatment failures (n = 17), the CoAg-ELISA result was positive for 70.6% of patients after 3 days, 94.1% after 7 days, and 100% after 15 and 30 days. Only 2 of 17 samples in cases of treatment failure became positive by microscopy by day 30. The presence of one scolex, but not multiple scolices, in posttreatment stools was strongly associated with cure (odds ratio [OR], 52.5). Coproantigen detection thus serves as an early indicator of treatment failure for taeniasis. Early assessment at day 15 would detect treatment failure before patients become infective.

  20. Lab architecture

    Science.gov (United States)

    Crease, Robert P.

    2008-04-01

    There are few more dramatic illustrations of the vicissitudes of laboratory architecture than the contrast between Building 20 at the Massachusetts Institute of Technology (MIT) and its replacement, the Ray and Maria Stata Center. Building 20 was built hurriedly in 1943 as temporary housing for MIT's famous Rad Lab, the site of wartime radar research, and it remained a productive laboratory space for over half a century. A decade ago it was demolished to make way for the Stata Center, an architecturally striking building designed by Frank Gehry to house MIT's computer science and artificial intelligence labs (above). But in 2004 - just two years after the Stata Center officially opened - the building was criticized for being unsuitable for research and became the subject of still ongoing lawsuits alleging design and construction failures.

  1. Architecture of high reliable control systems using complex software

    International Nuclear Information System (INIS)

    Tallec, M.

    1990-01-01

    The problems involved in the use of complex software in control systems that must ensure a very high level of safety are examined. The first part gives a brief description of the prototype of the PROSPER system. PROSPER stands for protection system for nuclear reactors with high performance. It was installed in a French nuclear power plant at the beginning of 1987 and has been working continually since then. This prototype is realized on a multi-processor system. The processors communicate among themselves using interrupts and protected shared memories. On each processor, one or more protection algorithms are implemented. Those algorithms use data coming directly from the plant and, possibly, data computed by the other protection algorithms. Each processor makes its own acquisitions from the process and sends warning messages if an operating anomaly is detected. All algorithms are activated concurrently and asynchronously. The results are presented and the safety-related problems are detailed. - The second part concerns measurement validation. First, we describe how sensor measurements are used in a protection system. Then, a method based on artificial intelligence techniques (expert systems and neural networks) is proposed. - The last part concerns the architecture of systems comprising both hardware and software: the different types of redundancy used so far are detailed, together with a proposal for a multi-processor architecture whose operating system can manage several tasks distributed over different processors, verify the correct operation of each task and of the related processors, and keep the system running, even in a degraded manner, when a failure has been detected [fr]
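
    One common approach to the measurement validation discussed in the second part is redundant voting before values reach the protection algorithms. A minimal two-out-of-three sketch with an illustrative agreement tolerance (not the PROSPER implementation):

```python
def vote_2oo3(a, b, c, tol=1.0):
    """Return a validated value when at least two of three redundant sensors
    agree within `tol`; raise if no pair agrees (measurement invalid)."""
    pairs = [(a, b), (a, c), (b, c)]
    agreeing = [(x + y) / 2.0 for x, y in pairs if abs(x - y) <= tol]
    if not agreeing:
        raise ValueError("no two sensors agree: measurement invalid")
    return agreeing[0]

good = vote_2oo3(100.2, 100.5, 100.3)    # all three sensors healthy
masked = vote_2oo3(100.2, 100.5, 250.0)  # one failed sensor is out-voted
```

    The voter masks a single sensor failure while still signalling the degraded case where no consistent value can be produced, which fits the fail-safe philosophy described in the abstract.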

  2. Mexican Art and Architecture Databases: Needs, Achievements, Problems.

    Science.gov (United States)

    Barberena, Elsa

    At the international level, a lack of diffusion of Mexican art and architecture in indexes and abstracts has been detected. Reasons for this could be lack of continuity in publications, the use of the Spanish language, lack of interest in Mexican art and architecture, and sporadic financial resources. Nevertheless, even though conditions are not…

  3. Detection of resistance mutations and CD4 slopes in individuals experiencing sustained virological failure

    DEFF Research Database (Denmark)

    Schultze, Anna; Paredes, Roger; Sabin, Caroline

    2014-01-01

    … mutations on CD4 slopes in patients undergoing episodes of viral failure. MATERIALS AND METHODS: Patients from the EuroSIDA and UK CHIC cohorts undergoing at least one episode of virological failure (>3 consecutive RNA measurements >500 on ART) with at least three CD4 measurements and a resistance test during the episode were included. Mutations were identified using the IAS-US (2013) list, and were presumed to be present from detection until the end of an episode. Multivariable linear mixed models with a random intercept and slope, adjusted for age, baseline CD4 count, hepatitis C, drug type, RNA (log-scale), risk group and subtype, were used to estimate CD4 slopes. Individual mutations with a population prevalence of >10% were tested for their effect on the CD4 slope. RESULTS: A total of 2731 patients experiencing a median of 1 (range 1-4) episodes were included in this analysis. The prevalence of any …

  4. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    Energy Technology Data Exchange (ETDEWEB)

    N. Lybeck; B. Pham; M. Tawfik; J. B. Coble; R. M. Meyer; P. Ramuhalli; L. J. Bond

    2011-08-01

    There is an extensive body of knowledge, and there are some commercial products available, for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics, there are three key aspects of such systems that are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system.
Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and

  5. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    International Nuclear Information System (INIS)

    Lybeck, N.; Pham, B.; Tawfik, M.; Coble, J.B.; Meyer, R.M.; Ramuhalli, P.; Bond, L.J.

    2011-01-01

    There are an extensive body of knowledge and some commercial products available for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance is seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics there are three key aspects of systems that are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. 
Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and

  6. Virtually-synchronous communication based on a weak failure suspector

    Science.gov (United States)

    Schiper, Andre; Ricciardi, Aleta

    1993-01-01

    Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., one that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; on top of it, (2a) a component that defines new views; and (2b) a component that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in the recent literature.
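
    The weak FS semantics can be made concrete with a toy suspector that merely tracks heartbeats and reports an unordered, revocable set of suspects; view definition and reliable multicast would be layered on top. The API and timeout here are hypothetical:

```python
class FailureSuspector:
    """Weak FS: suspects any process whose heartbeat is stale. Suspicions may
    be wrong and may be revoked; no ordering of notifications is promised."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.beats = {}

    def heartbeat(self, proc, now):
        """Record that `proc` was heard from at time `now`."""
        self.beats[proc] = now

    def suspects(self, now):
        """Current (possibly inaccurate) set of suspected processes."""
        return {p for p, t in self.beats.items() if now - t > self.timeout}

fs = FailureSuspector(timeout=2.0)
fs.heartbeat("p1", now=0.0)
fs.heartbeat("p2", now=2.0)
suspected = fs.suspects(now=3.5)   # p1 silent for 3.5 s -> suspected
fs.heartbeat("p1", now=3.6)        # p1 was merely slow: suspicion revoked
recovered = fs.suspects(now=3.7)
```

    That revocability is exactly the weakness the paper tolerates: the view-definition layer must reach agreement on membership changes despite suspicions that arrive in no particular order and sometimes turn out to be false.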

  7. The MGS Avionics System Architecture: Exploring the Limits of Inheritance

    Science.gov (United States)

    Bunker, R.

    1994-01-01

    Mars Global Surveyor (MGS) avionics system architecture comprises much of the electronics on board the spacecraft: electrical power, attitude and articulation control, command and data handling, telecommunications, and flight software. Schedule and cost constraints dictated a mix of new and inherited designs, especially hardware upgrades based on findings of the Mars Observer failure review boards.

  8. Detection of gaseous fission products in water - a method of monitoring fuel sheathing failures

    Energy Technology Data Exchange (ETDEWEB)

    Tunnicliffe, P. R.; Whittier, A. C.

    1959-05-15

    The gaseous activities stripped from samples of effluent coolant from the NRU fuel elements tested in the central thimble of the NRX reactor (NRU loop) and from the NRX main effluent have been investigated. The activities obtained from the NRU loop can be attributed to gaseous fission products only. Design data have been obtained for a 'Gaseous Fission Product Monitor' to be installed for use with the NRU reactor. It is expected that this monitor will have high sensitivity to activity indicative of an incipient fuel element sheath failure. No qualitative determination of the various gaseous activities obtained from the NRX effluent has been made. A strong component with a half-life of 25 ± 1 seconds is not consistent with O-19. Limited information concerning sheath failures in NRX was obtained. Of six failures observed in parallel with the installed delayed neutron monitors, three gave pre-warnings, and in each case the gaseous fission product monitor showed a substantially greater sensitivity. An experiment in which small samples of uranium, inserted into the NRX reactor, could be exposed at will to a stream of water showed the behaviour of the two types of monitors to be similar. However, a number of signals were detected only by the gaseous fission product monitor. These can be attributed to its sensitivity to relatively long-lived fission products. (author)

  9. Novel elastic protection against DDF failures in an enhanced software-defined SIEPON

    Science.gov (United States)

    Pakpahan, Andrew Fernando; Hwang, I.-Shyan; Yu, Yu-Ming; Hsu, Wu-Hsiao; Liem, Andrew Tanny; Nikoukar, AliAkbar

    2017-07-01

    Ever-increasing bandwidth demands on passive optical networks (PONs) are pushing the utilization of every fiber strand to its limit, mandating comprehensive protection all the way to the end of the distribution drop fiber (DDF). Hence, it is important to provide refined protection with an advanced fault-protection architecture and recovery mechanism able to cope with various DDF failures. We propose a novel elastic protection against DDF failures that incorporates a software-defined networking (SDN) capability and a bus protection line to enhance the resiliency of the existing Service Interoperability in Ethernet Passive Optical Networks (SIEPON) system. We propose adding an integrated SDN controller and flow tables to the optical line terminal and optical network units (ONUs) in order to deliver various DDF protection scenarios. The proposed architecture enables flexible assignment of backup ONU(s) in pre/post-fault conditions depending on the PON traffic load: a transient backup ONU and multiple backup ONUs can be deployed in the pre-fault and post-fault scenarios, respectively. Extensive simulation results show that our proposed architecture provides better overall throughput and drop probability than an architecture with a fixed DDF protection mechanism, while still maintaining overall QoS performance in terms of packet delay, mean jitter, packet loss, and throughput under various fault conditions.

  10. Evaluation of the cool-down behaviour of ITER FW beryllium tiles for an early failure detection

    Directory of Open Access Journals (Sweden)

    Thomas Weber

    2016-12-01

    Full Text Available The design of the first wall in ITER foresees several hundred thousand beryllium tiles, which are bonded to the water-cooled CuCrZr supporting structure. Due to the nature of a tokamak reactor, this bonding is subjected to thermal fatigue. Since the failure of a single tile might already have a major impact on the operability of ITER, comprehensive high heat flux tests are performed on prototypes prior to the acceptance of manufacturing procedures. For a deeper understanding of the temperature curves, which were and will be measured by IR devices on these first wall prototypes, thermo-mechanical FEM simulations shall demonstrate the possibilities of early bonding failure detection. The input data are the maximum temperatures for each cycle as well as the cool-down behaviour.

  11. DC-to-AC inverter ratio failure detector

    Science.gov (United States)

    Ebersole, T. J.; Andrews, R. E.

    1975-01-01

    The failure detection technique is based upon input-output ratios and is independent of inverter loading. Since the inverter has a fixed relationship between V-in/V-out and I-in/I-out, the failure detection criterion is based on this ratio, which is simply the inverter transformer turns ratio, K, equal to primary turns divided by secondary turns.
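
The criterion lends itself to a very small check: compare the measured voltage ratio against the nominal turns ratio K. The function below is a minimal sketch; the 10% tolerance and the example values are illustrative, not taken from the report.

```python
def ratio_fault(v_in, v_out, k_nominal, tol=0.1):
    """Flag a fault when the measured input/output voltage ratio
    deviates from the transformer turns ratio K by more than the
    fractional tolerance `tol` (illustrative default)."""
    if v_out == 0:
        return True  # no output at all: treat as failed
    return abs(v_in / v_out - k_nominal) / k_nominal > tol
```

Because the criterion is a ratio, the same check holds at any load point, which is the property the abstract highlights.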

  12. A Hybrid Architecture for Vision-Based Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Güzel

    2013-01-01

    Full Text Available This paper proposes a new obstacle avoidance method, called the Hybrid Architecture, that uses a single monocular vision camera as its only sensor. This architecture integrates a high-performance appearance-based obstacle detection method into an optical flow-based navigation system. The Hybrid Architecture was designed and implemented to run both methods simultaneously and is able to combine the results of each method using a novel arbitration mechanism. The proposed strategy successfully fuses two different vision-based obstacle avoidance methods through this arbitration mechanism to yield a safer obstacle avoidance system. Accordingly, to establish the adequacy of the design, a series of experiments was conducted. The results demonstrate the characteristics of the proposed architecture and show that its performance is somewhat better than the conventional optical flow-based architecture; in particular, a robot employing the Hybrid Architecture avoids lateral obstacles more smoothly and robustly than one using the conventional optical flow-based technique.

  13. Failure analysis in ITO-free all-solution processed organic solar cells

    NARCIS (Netherlands)

    Galagan, Y.; Eggenhuisen, T.M.; Coenen, M.J.J.; Biezemans, A.F.K.V.; Verhees, W.J.H.; Veenstra, S.C.; Groen, W.A.; Andriessen, R.; Janssen, R.A.J.

    2015-01-01

    In this paper we discuss a problem-solving methodology and present guidance for troubleshooting defects in ITO-free all-solution processed organic solar cells with an inverted cell architecture. A systematic approach for identifying the main causes of failures in devices is presented. Comprehensive

  14. A geometric approach for fault detection and isolation of stator short circuit failure in a single asynchronous machine

    KAUST Repository

    Khelouat, Samir

    2012-06-01

    This paper deals with the problem of detection and isolation of stator short-circuit failure in a single asynchronous machine using a geometric approach. After recalling the basis of the geometric approach for fault detection and isolation in nonlinear systems, we study some structural properties, namely fault detectability and the existence of an isolation fault filter. We then design filters for residual generation, considering two approaches: a two-filter structure and a single-filter structure, both aiming at generating residuals that are sensitive to one fault and insensitive to the others. Numerical tests are presented to illustrate the efficiency of the method.

  15. Evaluation of a vibration diagnostic system for the detection of spur gear pitting failures

    Science.gov (United States)

    Townsend, Dennis P.; Zakrajsek, James J.

    1993-01-01

    A vibration diagnostic system was used to detect spur gear surface pitting fatigue in a closed-loop spur gear fatigue test rig. The diagnostic system, comprising a personal computer with an analog-to-digital conversion board, a diagnostic system unit, and software, uses time-synchronous averaging of the vibration signal to produce a vibration image of each tooth on any gear in a transmission. Several parameters were analyzed, including gear pair stress wave, raw baseband vibration, kurtosis, and peak ratios. The system provides limits for the various parameters and gives a warning when the limits are exceeded. Several spur gear tests were conducted with this system, with vibration data analyzed at 5-min intervals. The results presented herein show that the system is fairly effective at detecting spur gear tooth surface fatigue pitting failures.
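
Time-synchronous averaging itself is compact to express. The sketch below is an idealization of what such a system does: it assumes the signal has already been resampled so each shaft revolution spans exactly `samples_per_rev` samples (real rigs resample against a tachometer pulse), then averages revolutions so that components not synchronous with the gear cancel out.

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev):
    """Average the vibration signal over complete shaft revolutions,
    attenuating noise and components not synchronous with the gear.
    Assumes uniform resampling to `samples_per_rev` samples/rev."""
    n_rev = len(signal) // samples_per_rev
    revs = np.reshape(signal[:n_rev * samples_per_rev],
                      (n_rev, samples_per_rev))
    return revs.mean(axis=0)
```

Averaging N revolutions reduces non-synchronous noise amplitude by roughly the square root of N, which is what makes the per-tooth "vibration image" usable.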

  16. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    NARCIS (Netherlands)

    Sözer, Hasan; Tekinerdogan, B.; Aksit, Mehmet; de Lemos, Rogerio; Gacek, Cristina

    2007-01-01

    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.

  17. ALLIANCE: An architecture for fault tolerant multi-robot cooperation

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1995-02-01

    ALLIANCE is a software architecture that facilitates the fault tolerant cooperative control of teams of heterogeneous mobile robots performing missions composed of loosely coupled, largely independent subtasks. ALLIANCE allows teams of robots, each of which possesses a variety of high-level functions that it can perform during a mission, to individually select appropriate actions throughout the mission based on the requirements of the mission, the activities of other robots, the current environmental conditions, and the robot's own internal states. ALLIANCE is a fully distributed, behavior-based architecture that incorporates the use of mathematically modeled motivations (such as impatience and acquiescence) within each robot to achieve adaptive action selection. Since cooperative robotic teams usually work in dynamic and unpredictable environments, this software architecture allows the robot team members to respond robustly, reliably, flexibly, and coherently to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. The feasibility of this architecture is demonstrated in an implementation on a team of mobile robots performing a laboratory version of hazardous waste cleanup.
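
The motivational mechanism can be illustrated with a toy model. This is our own simplification of impatience and acquiescence, not Parker's exact formulation: motivation for an unclaimed task grows at an impatience rate, resets when another robot handles the task, and activates the behavior once a threshold is crossed.

```python
class Motivation:
    """Toy ALLIANCE-style motivation for one behavior set
    (an illustrative simplification, not the published model)."""
    def __init__(self, impatience_rate, threshold):
        self.impatience_rate = impatience_rate
        self.threshold = threshold
        self.level = 0.0

    def step(self, task_claimed_by_other):
        if task_claimed_by_other:
            self.level = 0.0  # acquiesce: another robot is doing it
        else:
            self.level += self.impatience_rate  # grow impatient
        return self.level >= self.threshold  # True -> activate behavior
```

Because each robot runs this locally from broadcast activity information, action selection stays fully distributed, which is the property the abstract emphasizes.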

  19. A multivariate statistical methodology for detection of degradation and failure trends using nuclear power plant operational data

    International Nuclear Information System (INIS)

    Samanta, P.K.; Teichmann, T.

    1990-01-01

    In this paper, a multivariate statistical method is presented and demonstrated as a means for analyzing nuclear power plant transients (or events) and safety system performance for detection of malfunctions and degradations within the course of the event based on operational data. The study provides the methodology and illustrative examples based on data gathered from simulation of nuclear power plant transients (due to the lack of easily accessible operational data). Such an approach, once fully developed, can be used to detect failure trends and patterns and so can lead to the prevention of conditions with serious safety implications.

  20. Failure analysis on a ruptured petrochemical pipe

    Energy Technology Data Exchange (ETDEWEB)

    Harun, Mohd [Industrial Technology Division, Malaysian Nuclear Agency, Ministry of Science, Technology and Innovation Malaysia, Bangi, Kajang, Selangor (Malaysia); Shamsudin, Shaiful Rizam; Kamardin, A. [Univ. Malaysia Perlis, Jejawi, Arau (Malaysia). School of Materials Engineering

    2010-08-15

    The failure took place on a welded elbow pipe, which exhibited a catastrophic transverse rupture. The failure was located in the welding HAZ region, parallel to the welding path. Branching cracks were detected at the edge of the rupture area, and deposits of corrosion products were also spotted. Optical microscope analysis showed the presence of transgranular failures related to stress corrosion cracking (SCC), predominantly caused by the welding residual stress. The significant difference in hardness between the welded area and the pipe confirmed the findings. Moreover, the failure was also promoted by the low Mo content of the stainless steel pipe, which was detected by means of a spark emission spectrometer. (orig.)

  1. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    Full Text Available A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO, and the feature extraction is carried out by the generalized Hebbian algorithm (GHA. To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
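
The NEO pre-detector at the heart of this design is simple to state: psi[n] = x[n]^2 - x[n-1]*x[n+1], which grows when both amplitude and instantaneous frequency are high. The sketch below shows the operator and a fixed-threshold detector; deployed designs typically derive the threshold from a scaled running mean of the operator output rather than passing it in, as done here for illustration.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1];
    boundary samples are left at zero."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, threshold):
    """Indices where the NEO output exceeds a fixed threshold
    (threshold selection is left to the caller in this sketch)."""
    return np.flatnonzero(neo(x) > threshold)
```

In the VLSI setting, the appeal of NEO is that it needs only two multiplies and a subtract per sample, which is why a single shared core can serve many channels.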

  4. A source of energy : sustainable architecture and urbanism

    Energy Technology Data Exchange (ETDEWEB)

    Roestvik, Harald N.

    2011-07-01

    An update on the environmental challenges, meant to inspire and be a source of energy, tearing down myths and floodlighting paradoxes. Particularly relevant for students of architecture, architects and concerned citizens. Training tasks and recommendations for further source books and web sites are included. From the contents: Climate change and consensus, Population growth, Food production, The sustainable city, Transportation myths and facts, A mini history of environmental architecture, Architects' approach to sustainable design, The failure of western architects: a case study: China, The passive, ZEB and plus-energy building, Natural ventilation, Sustainable materials, Plastics in building, Nuclear energy, Solar energy, The grid of the future, Indoor climate and health, The sick building syndrome, Radon, Universal design, Paradoxes, Bullying techniques, Trust yourself, Timing, Which gateway will you choose?, On transience. (au)

  5. Clinical utility of contrast-enhanced spectral mammography as an adjunct for tomosynthesis-detected architectural distortion.

    Science.gov (United States)

    Patel, Bhavika K; Naylor, Michelle E; Kosiorek, Heidi E; Lopez-Alvarez, Yania M; Miller, Adrian M; Pizzitola, Victor J; Pockaj, Barbara A

    The aim was to supplement tomosynthesis-detected architectural distortion (AD) with CESM to better characterize malignant vs. benign lesions, via retrospective review of CESM performed prior to biopsy of AD. Pathology was classified as benign, radial scar, or malignant. Of 49 lesions (45 patients), there were 29 invasive cancers and 1 DCIS (range, 0.4-4.7 cm); 9 radial scars; and 10 benign lesions. 37 (75.5%) ADs had associated enhancement. PPV was 78.4% (29/37); sensitivity, 96.7% (29/30); specificity, 57.9% (11/19); and NPV, 91.7% (11/12). The false-positive rate was 21.6% (8/37), the false-negative rate 8.3% (1/12), and accuracy 81.6% (40/49). The high sensitivity and NPV of CESM in patients with AD are promising for an adjunct tool in diagnosing malignancy and avoiding unnecessary biopsy, respectively.
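
The reported percentages follow directly from the stated counts (29 true positives, 8 false positives, 11 true negatives, 1 false negative among the enhancing/non-enhancing ADs). The helper below is a generic confusion-matrix calculation, not code from the study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy metrics from confusion-matrix
    counts; all values are fractions in [0, 1]."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

Plugging in the abstract's counts (tp=29, fp=8, tn=11, fn=1) reproduces the reported 96.7% sensitivity, 57.9% specificity, 78.4% PPV, 91.7% NPV, and 81.6% accuracy.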

  6. Operation experiences of JOYO fuel failure detection system

    International Nuclear Information System (INIS)

    Tamura, Seiji; Hikichi, Takayoshi; Rindo, Hiroshi.

    1982-01-01

    Monitoring of fuel failure in the experimental fast reactor JOYO is provided by two different methods: cover gas monitoring (FFDCGM) by means of a precipitator, and delayed neutron monitoring (FFDDNM) by means of neutron detectors. The signals obtained during reactor operation for performance testing were interpreted. The count rate of the CGM is approximately 120 cps at 75 MW operation, due to Ne-23, Ar-41, and Na-24, and the count rate of the DNM is approximately 2300 cps at 75 MW operation, mainly due to leakage neutrons from the core. Given these backgrounds, the alarm level for each monitor was set at several times its background level. The reactor has been operated for 5 years, and the burn-up of the fuel is at most 40,000 MWD/T. No trace of any fuel failure has been observed, a fact also confirmed by the results of cover gas and sodium sampling analysis. In order to evaluate the sensitivity of the FFD systems, a preliminary simulation study was performed. According to the results, the signal from a single pin failure with a 0.5 mm² hole may exceed the alarm level of the FFDCGM system. (author)

  7. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using terrestrial laser scanners is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework that produces the full geometry of a building based on a novel combination of facade structure detection, detailed window propagation, and hierarchical model consolidation. Our method highlights the automatic generation of geometric models that fit the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region, extracted by projecting the points into its corresponding two-dimensional plane, is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling scanned real-life buildings demonstrate the advantages of our method: the reconstructed models not only correspond accurately to the architectural design information, but also satisfy the requirements for visualization and analysis.

  8. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive, and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.

  9. Understanding the Value of Enterprise Architecture for Organizations: A Grounded Theory Approach

    Science.gov (United States)

    Nassiff, Edwin

    2012-01-01

    There is a high rate of information system implementation failures attributed to the lack of alignment between business and information technology strategy. Although enterprise architecture (EA) is a means to correct alignment problems and executives highly rate the importance of EA, it is still not used in most organizations today. Current…

  10. Advanced Ground Systems Maintenance Enterprise Architecture Project

    Science.gov (United States)

    Perotti, Jose M. (Compiler)

    2015-01-01

    The project implements an architecture for delivery of integrated health management capabilities for the 21st-century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics, and physics-based diagnostics.

  11. SUSTAINABLE ARCHITECTURE : WHAT ARCHITECTURE STUDENTS THINK

    OpenAIRE

    SATWIKO, PRASASTO

    2013-01-01

    Sustainable architecture has become a hot issue lately as the impacts of climate change become more intense. Architecture educations have responded by integrating knowledge of sustainable design in their curriculum. However, in the real life, new buildings keep coming with designs that completely ignore sustainable principles. This paper discusses the results of two national competitions on sustainable architecture targeted for architecture students (conducted in 2012 and 2013). The results a...

  12. Analysis of failure dependent test, repair and shutdown strategies for redundant trains

    International Nuclear Information System (INIS)

    Uryasev, S.; Samanta, P.

    1994-09-01

    Failure-dependent testing implies a test of redundant components (or trains) when the failure of one component has been detected. The purpose of such testing is to detect any common cause failures (CCFs) of multiple components so that a corrective action, such as repair or plant shutdown, can be taken to reduce the residence time of multiple failures once a failure has been detected. This type of testing focuses on reducing the conditional risk of CCFs. Formulas for calculating the conditional failure probability of a two-train system with different test, repair and shutdown strategies are developed. A methodology is presented with an example calculation showing the risk-effectiveness of failure-dependent strategies for emergency diesel generators (EDGs) in nuclear power plants (NPPs).
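
As a purely illustrative companion to the report's formulas (which are not reproduced here), a simple beta-factor CCF model shows why the conditional risk after one detected failure is dominated by the common-cause share:

```python
def conditional_second_train_failure(q, beta):
    """Illustrative beta-factor model (our assumption, not the
    report's derivation): given that train A has been found failed,
    the probability that train B is also failed is the common-cause
    fraction beta plus the independent-failure chance q on the
    remaining (1 - beta) share."""
    return beta + (1.0 - beta) * q
```

For example, with an independent unavailability q = 0.05 and beta = 0.1, the conditional probability of the second train being failed is 0.145, roughly three times q alone, which is why testing the redundant train promptly after a detected failure is risk-effective.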

  13. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  14. Landslide Frequency and Failure Mechanisms at NE Gela Basin (Strait of Sicily)

    Science.gov (United States)

    Kuhlmann, J.; Asioli, A.; Trincardi, F.; Klügel, A.; Huhn, K.

    2017-11-01

    Despite intense research by both academia and industry, the parameters controlling slope stability at continental margins are often speculated upon. Lack of core recovery and age control on failed sediments prevents the assessment of failure timing/frequency and the role of prefailure architecture as shaped by paleoenvironmental changes. This study uses an integrated chronological framework from two boreholes and complementary ultrahigh-resolution acoustic profiling in order to assess (1) the frequency of submarine landsliding at the continental margin of NE Gela Basin and (2) the associated mechanisms of failure. Accurate age control was achieved through absolute radiocarbon dating and indirect dating relying on isotope stratigraphic and micropaleontological reconstructions. A total of nine major slope failure events have been recognized that occurred within the last 87 kyr (a return frequency of roughly 10 kyr), though there is evidence for additional syndepositional, small-scale transport processes of lower volume. Preferential failure involves translational movement of mudflows along subhorizontal surfaces that are induced by sedimentological changes relating to prefailure stratal architecture. Along with sequence-stratigraphic boundaries reflecting paleoenvironmental fluctuations, recovered core material suggests that intercalated volcaniclastic layers are key to the basal confinement and lateral movement of these events in the study area. Another major predisposing factor is rapid loading of fine-grained homogeneous strata and successive generation of excess pore pressure, as expressed by several fluid escape structures. Recurrent failure, however, requires repeated generation of favorable conditions, and seismic activity, though low compared to many other Mediterranean settings, is shown to represent a legitimate trigger mechanism.

  15. Communication architecture of an early warning system

    Directory of Open Access Journals (Sweden)

    M. Angermann

    2010-11-01

    Full Text Available This article discusses aspects of communication architecture for early warning systems (EWS) in general and gives details of the specific communication architecture of an early warning system against tsunamis. While its sensors are the "eyes and ears" of a warning system, enabling the system to sense physical effects, its communication links and terminals are its "nerves and mouth", which transport measurements and estimates within the system and eventually carry warnings towards the affected population. Designing the communication architecture of an EWS against tsunamis is particularly challenging. Its sensors are typically very heterogeneous and spread several thousand kilometers apart. They are often located in remote areas and belong to different organizations. Similarly, the geographic spread of the potentially affected population is wide. Moreover, a failure to deliver a warning has fatal consequences. Yet, the communication infrastructure is likely to be affected by the disaster itself. Based on an analysis of the criticality, vulnerability and availability of communication means, we describe the design and implementation of a communication system that employs both terrestrial and satellite communication links. We believe that many of the issues we encountered during our work in the GITEWS project (German Indonesian Tsunami Early Warning System, Rudloff et al., 2009) on the design and implementation of the communication architecture are also relevant for other types of warning systems. With this article, we intend to share our insights and lessons learned.

  16. Modeling Architectural Patterns Using Architectural Primitives

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2005-01-01

    Architectural patterns are a key point in architectural documentation. Regrettably, there is poor support for modeling architectural patterns, because the pattern elements are not directly matched by elements in modeling languages, and, at the same time, patterns support an inherent variability that

  17. APHRODITE: an Anomaly-based Architecture for False Positive Reduction

    NARCIS (Netherlands)

    Bolzoni, D.; Etalle, Sandro

    We present APHRODITE, an architecture designed to reduce false positives in network intrusion detection systems. APHRODITE works by detecting anomalies in the output traffic, and by correlating them with the alerts raised by the NIDS working on the input traffic. Benchmarks show a substantial

  18. Development of a system for automatic detection of pellet failures; Desarrollo de un sistema para deteccion automatica de fallas en pastillas

    Energy Technology Data Exchange (ETDEWEB)

    Lavagnino, C E [Comision Nacional de Energia Atomica, San Martin (Argentina). Unidad de Actividad Combustibles Nucleares

    1997-12-31

    Nowadays, the failure controls on UO{sub 2} pellets for the Atucha and Embalse reactors are performed visually. This work presents the first stage of the development of a system that automates the task. For this purpose, the problem has been subdivided into three jobs: choosing the illumination environment, finding an algorithm that detects failures within a user-defined tolerance, and engineering the mechanical system that supports the required manipulation of the pellets. This paper develops the first two: a) finding the illumination conditions that allow distinguishing a failure from the normal element surface, considering, in the first place, the cylindrical shape of the pellet and the consequent differences in light reflection direction and, in the second place, the texture differences related to the rectification type of the pellet; b) writing a fast and simple algorithm that identifies failures according to the production specifications. Examples of the developed algorithm are shown. (author). 4 refs.

  19. The composition of engineered cartilage at the time of implantation determines the likelihood of regenerating tissue with a normal collagen architecture.

    Science.gov (United States)

    Nagel, Thomas; Kelly, Daniel J

    2013-04-01

    The biomechanical functionality of articular cartilage is derived from both its biochemical composition and the architecture of the collagen network. Failure to replicate this normal Benninghoff architecture in regenerating articular cartilage may in turn predispose the tissue to failure. In this article, the influence of the maturity (or functionality) of a tissue-engineered construct at the time of implantation into a tibial chondral defect on the likelihood of recapitulating a normal Benninghoff architecture was investigated using a computational model featuring a collagen remodeling algorithm. Such a normal tissue architecture was predicted to form in the intact tibial plateau due to the interplay between the depth-dependent extracellular matrix properties, foremost swelling pressures, and external mechanical loading. In the presence of even small empty defects in the articular surface, the collagen architecture in the surrounding cartilage was predicted to deviate significantly from the native state, indicating a possible predisposition for osteoarthritic changes. These negative alterations were alleviated by the implantation of tissue-engineered cartilage, where a mature implant was predicted to result in the formation of a more native-like collagen architecture than immature implants. The results of this study highlight the importance of cartilage graft functionality to maintain and/or re-establish joint function and suggest that engineering a tissue with a native depth-dependent composition may facilitate the establishment of a normal Benninghoff collagen architecture after implantation into load-bearing defects.

  20. Using field feedback to estimate failure rates of safety-related systems

    International Nuclear Information System (INIS)

    Brissaud, Florent

    2017-01-01

    The IEC 61508 and IEC 61511 functional safety standards encourage the use of field feedback to estimate the failure rates of safety-related systems, which is preferred over generic data. In some cases (if "Route 2_H" is adopted for the "hardware safety integrity constraints"), this is even a requirement. This paper presents how to estimate failure rates from field feedback with confidence intervals, depending on whether the failures are detected on-line (called "detected failures", e.g. by automatic diagnostic tests) or only revealed by proof tests (called "undetected failures"). Examples show that for the same duration and number of failures observed, the estimated failure rates are basically higher for "undetected failures" because, in this case, the duration observed includes intervals of time where it is unknown that the elements have failed. This points out the need for a proper approach to failure rate estimation, especially for failures that are not detected on-line. This paper then proposes an approach to use the estimated failure rates, with their uncertainties, for PFDavg and PFH assessment with upper confidence bounds, in accordance with IEC 61508 and IEC 61511 requirements. Examples finally show that the highest SIL that can be claimed for a safety function can be limited by the 90% upper confidence bound of PFDavg or PFH. The requirements of IEC 61508 and IEC 61511 relating to data collection and analysis should therefore be properly considered in the study of all safety-related systems. - Highlights: • This paper deals with requirements of IEC 61508 and IEC 61511 for using field feedback to estimate failure rates of safety-related systems. • This paper presents how to estimate failure rates from field feedback with confidence intervals for failures that are detected on-line. • This paper presents how to estimate failure rates from field feedback with confidence intervals for failures that are only revealed by
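The abstract's observation that undetected failures yield higher estimated rates can be illustrated with simple point estimates. These are not the paper's exact estimators; the correction of the exposure time by n·τ/2 (each undetected failure sits unrevealed for half a proof-test interval on average) is a common textbook approximation, and all numbers are invented.

```python
# Illustrative sketch (not the paper's estimators): point estimates of the
# failure rate from field feedback, distinguishing failures detected on-line
# from failures only revealed by proof tests.

def rate_detected(n_failures, hours_observed):
    """Failures detected on-line: the whole observed duration is exposure."""
    return n_failures / hours_observed

def rate_undetected(n_failures, hours_observed, proof_test_interval):
    """Failures only revealed by proof tests: the observed duration includes
    intervals where the item had already failed, on average tau/2 per
    failure, so the effective exposure time is shorter."""
    effective_time = hours_observed - n_failures * proof_test_interval / 2
    return n_failures / effective_time

# Same observations either way: 3 failures over 1e6 h, proof tests every 8760 h.
ld = rate_detected(3, 1e6)
lu = rate_undetected(3, 1e6, 8760)
print(f"{ld:.3e} /h (detected), {lu:.3e} /h (undetected)")
assert lu > ld  # matches the paper's point: undetected rates come out higher
```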

  1. Performance anomaly detection in microservice architectures under continuous change

    OpenAIRE

    Düllmann, Thomas F.

    2017-01-01

    The ideas of DevOps, agile approaches like Continuous Integration (CI), and microservice architectures are becoming more and more popular as the demand for flexible and scalable solutions increases. By raising the degree of automation and distribution, new challenges in terms of application performance monitoring arise, because microservices are possibly short-lived and may be replaced within seconds. The fact that microservices are added and removed on a regular basis brings new requireme...

  2. Developing architecture for upgrading I and C systems of an operating nuclear power plant using a quality attribute-driven design method

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Yong Suk; Keum, Jong Yong [SMART Technology Validation Division, Korea Atomic Energy Research Institute, 150-1 Dukjin-dong, Yuseong-gu, Daejon (Korea, Republic of); Kim, Hyeon Soo, E-mail: hskim401@cnu.ac.kr [Department of Computer Science and Engineering, Chungnam Nat' l Univ., 220 Gung-dong, Yuseong-gu, Daejon (Korea, Republic of)

    2011-12-15

    This paper presents the architecture for upgrading the instrumentation and control (I and C) systems of a Korean standard nuclear power plant (KSNP) as an operating nuclear power plant. This paper uses the analysis results of the KSNP's I and C systems performed in a previous study. This paper proposes a Preparation-Decision-Design-Assessment (PDDA) process, a cyclical process focused on quality-oriented development, to develop the architecture. The PDDA was motivated by the practice of architecture-based development used in software engineering fields. In the preparation step of the PDDA, the architecture of digital-based I and C systems was set up as the architectural goal. The single failure criterion and determinism were set up as architectural drivers. In the decision step, defense-in-depth, diversity, redundancy, and independence were determined as architectural tactics to satisfy the single failure criterion, and sequential execution was determined as a tactic to satisfy determinism. After determining the tactics, the primitive digital-based I and C architecture was determined. In the design step, 17 systems were selected from the KSNP's I and C systems for the upgrade and functionally grouped based on the primitive architecture. The overall architecture was developed to show the deployment of the systems. The detailed architecture of the safety systems was developed by applying a 2-out-of-3 voting logic, and the detailed architecture of the non-safety systems by hot-standby redundancy. While developing the detailed architecture, three means of signal transmission were determined with proper rationales: hardwire, datalink, and network. In the assessment step, the required network performance was calculated considering the worst case of data transmission: the datalink required 120 kbps, the safety network 5 Mbps, and the non-safety network 60 Mbps. The architecture covered 17 systems out of 22 KSNP's I and C

  3. Developing architecture for upgrading I and C systems of an operating nuclear power plant using a quality attribute-driven design method

    International Nuclear Information System (INIS)

    Suh, Yong Suk; Keum, Jong Yong; Kim, Hyeon Soo

    2011-01-01

    This paper presents the architecture for upgrading the instrumentation and control (I and C) systems of a Korean standard nuclear power plant (KSNP) as an operating nuclear power plant. This paper uses the analysis results of the KSNP's I and C systems performed in a previous study. This paper proposes a Preparation–Decision–Design–Assessment (PDDA) process, a cyclical process focused on quality-oriented development, to develop the architecture. The PDDA was motivated by the practice of architecture-based development used in software engineering fields. In the preparation step of the PDDA, the architecture of digital-based I and C systems was set up as the architectural goal. The single failure criterion and determinism were set up as architectural drivers. In the decision step, defense-in-depth, diversity, redundancy, and independence were determined as architectural tactics to satisfy the single failure criterion, and sequential execution was determined as a tactic to satisfy determinism. After determining the tactics, the primitive digital-based I and C architecture was determined. In the design step, 17 systems were selected from the KSNP's I and C systems for the upgrade and functionally grouped based on the primitive architecture. The overall architecture was developed to show the deployment of the systems. The detailed architecture of the safety systems was developed by applying a 2-out-of-3 voting logic, and the detailed architecture of the non-safety systems by hot-standby redundancy. While developing the detailed architecture, three means of signal transmission were determined with proper rationales: hardwire, datalink, and network. In the assessment step, the required network performance was calculated considering the worst case of data transmission: the datalink required 120 kbps, the safety network 5 Mbps, and the non-safety network 60 Mbps. The architecture covered 17 systems out of 22 KSNP's I and C systems. The
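The assessment step's worst-case bandwidth figures can be reproduced with a back-of-the-envelope check. The message sizes and cycle times below are invented for illustration; only the required-rate targets (120 kbps datalink, 5 Mbps safety network, 60 Mbps non-safety network) come from the abstract.

```python
# Back-of-the-envelope sketch of the worst-case network performance check
# described in the PDDA assessment step. Payloads and cycle times are
# hypothetical; the budgets are those quoted in the abstract.

def required_bps(bytes_per_cycle, cycle_seconds):
    """Required bit rate if all traffic arrives in the worst case each cycle."""
    return bytes_per_cycle * 8 / cycle_seconds

# e.g. a hypothetical datalink carrying 1500 bytes every 100 ms:
rate = required_bps(1500, 0.100)
print(rate)           # about 120000 bits/s
assert rate <= 120e3  # fits the 120 kbps datalink budget from the paper
```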

  4. Comparative assessment of instrumentation and control (I and C) system architectures for research reactors

    International Nuclear Information System (INIS)

    Khalil, Rah Man; Heo, Gyun Young; Son, Han Seong; Kim, Young Ki; Park, Jae Kwan

    2012-01-01

    Application of digital I and C has increased in the nuclear industry over the last two decades, but a lack of experience, the innovative and naive nature of the technology, and insufficient failure information have raised questions about its use. These issues have been highlighted because digital I and C introduces concerns that were not relevant to analog systems: the potential weakness of digital systems to common cause failure, threats to system security and reliability due to inter-channel communication, the need for a highly integrated control room, and the difficulty of assessing digital I and C reliability. In the existing scenario, HANARO and JRTR have hybrid I and C systems (digital plus analog), whereas OPAL is fully digitalized. In order to validate the choice of a fully digital I and C architecture for a research reactor, an assessment is required from the risk point of view, cyber security, and other issues. The architecture assessment method and its restrictions are discussed in the next part of the article.

  5. Comparative assessment of instrumentation and control (I and C) system architectures for research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Khalil, Rah Man; Heo, Gyun Young [Kyung Hee Univ., Seoul (Korea, Republic of); Son, Han Seong [Joongbu Univ., Chungnam (Korea, Republic of); Kim, Young Ki; Park, Jae Kwan [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    Application of digital I and C has increased in the nuclear industry over the last two decades, but a lack of experience, the innovative and naive nature of the technology, and insufficient failure information have raised questions about its use. These issues have been highlighted because digital I and C introduces concerns that were not relevant to analog systems: the potential weakness of digital systems to common cause failure, threats to system security and reliability due to inter-channel communication, the need for a highly integrated control room, and the difficulty of assessing digital I and C reliability. In the existing scenario, HANARO and JRTR have hybrid I and C systems (digital plus analog), whereas OPAL is fully digitalized. In order to validate the choice of a fully digital I and C architecture for a research reactor, an assessment is required from the risk point of view, cyber security, and other issues. The architecture assessment method and its restrictions are discussed in the next part of the article.

  6. Mitigating Software Failures with Distributed and Recovery-Oriented Flight System Architectures, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The primary focus of Integrated Vehicle Health Management (IVHM) has been on faults due to hardware failures. Yet software is growing in complexity, controls...

  7. Trends of Sustainable Residential Architecture

    OpenAIRE

    Narvydas, A

    2014-01-01

    The article is based on Master’s research conducted during Scottish Housing Expo 2010. The aim of the research was to determine the prevailing trends in sustainable residential architecture. Each trend can be described by features detected during visual and technical observation of project data. Based on that architects may predict possible problems related to a specific trend.

  8. Novel fault tolerant modular system architecture for I and C applications

    International Nuclear Information System (INIS)

    Kumar, Ankit; Venkatesan, A.; Madhusoodanan, K.

    2013-01-01

    A novel fault-tolerant 3U modular system architecture has been developed for the safety-related and safety-critical I and C systems of the reactor. The design innovatively utilizes the simple multi-drop Inter-Integrated Circuit (I2C) serial bus for system operation with simplicity, fault tolerance and online maintainability (hot swap). An I2C bus failure mode analysis was done and the system design was hardened against the possible failure modes. The system backplane uses only passive components, dual redundant I2C buses, data consistency checks and a geographical addressing scheme to tackle bus lock-ups/stuck buses and bit flips in data transactions. A dual-CPU active/standby redundancy architecture with hot swap implements tolerance for CPU software stuck-up conditions and hardware faults. The system cards implement hot swap for online maintainability, power supply fault containment, communication bus fault containment, and I/O channel-to-channel isolation and independence. Typical applications are also discussed: a purely hardwired (without real-time software) Core Temperature Monitoring System for FBRs, a Universal Signal Conditioning System for safety-related I and C systems, and a complete control system for non-nuclear-safety systems. (author)
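The combination of dual redundant buses and data consistency checks described above can be sketched as a voted read: a value is accepted only when both buses return the same data, which masks a transient bit flip or a single stuck bus. This is a hypothetical illustration, not the paper's implementation; `read_a`/`read_b` stand in for transactions on the two I2C buses.

```python
# Hypothetical sketch of a dual-redundant bus read with a data consistency
# check, in the spirit of the architecture above. Retry and error handling
# policies are illustrative assumptions.

def consistent_read(read_a, read_b, retries=3):
    """Read the same register over both buses; accept the value only when
    the buses agree, retrying to ride out a transient bit flip."""
    for _ in range(retries):
        try:
            a, b = read_a(), read_b()
        except IOError:
            continue  # one bus locked up; retry (a real system would reset it)
        if a == b:
            return a
    raise RuntimeError("buses disagree: possible bus fault")

# Usage: bus B glitches once (bit flip), then both buses agree.
values_b = iter([0x41, 0x42])
print(consistent_read(lambda: 0x42, lambda: next(values_b)))  # -> 66 (0x42)
```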

  9. Collision detection of convex polyhedra on the NVIDIA GPU architecture for the discrete element method

    CSIR Research Space (South Africa)

    Govender, Nicolin

    2015-09-01

    Full Text Available consideration due to the architectural differences between CPU and GPU platforms. This paper describes the DEM algorithms and heuristics that are optimized for the parallel NVIDIA Kepler GPU architecture in detail. This includes a GPU optimized collision...

  10. Information System Architecture Design Using the Enterprise Architecture Planning Method (Case Study: Universitas Purwakarta, Purwakarta)

    Directory of Open Access Journals (Sweden)

    Beki Subaeki

    2016-03-01

    organization is necessary; it is required especially by large organizations, because the development of a university information system will be more complex and will affect ongoing business development. In practice, the construction and development of information systems often fails because it is incompatible with the institution's goals and academic needs. To achieve organizational goals, several methodologies can be used, such as Enterprise Architecture Planning: a modern approach to planning for data quality and achieving the mission of information systems, carried out to define a set of architectures consisting of a data architecture, an application architecture, and a technology architecture, together with an implementation plan. EAP describes the data, application and technology architectures needed to support the organization's business. Many IS development failures occur because development is based on specific needs without any advance planning by management for implementing an integrated information system. The University of Purwakarta, as an education provider, likewise needs to define its business requirements and information architecture so that the organization's development policy and strategy can be properly planned. The information architecture modeling at the University of Purwakarta includes defining the data architecture, application architecture, and technology architecture, and creating a roadmap/implementation plan. The process of defining the information architecture refers to processes common in educational business administration, especially at the University of Purwakarta, while the scope of the discussion covers the academic and financial administration fields. Keywords: Enterprise Architecture Planning, Data Architecture, Application Architecture and Technology Architecture, Information System.

  11. Inversion Method for Early Detection of ARES-1 Case Breach Failure

    Science.gov (United States)

    Mackey, Ryan M.; Kulikov, Igor K.; Bajwa, Anupa; Berg, Peter; Smelyanskiy, Vadim

    2010-01-01

    A document describes research into the problem of detecting case breach formation at an early stage of a rocket flight. An inversion algorithm for case breach allocation is proposed and analyzed. It is shown how the case breach can be located at an early stage of its development by using the rocket sensor data and the output data from the control block of the rocket navigation system. The results are simulated with MATLAB/Simulink software. The efficiency of an inversion algorithm for case breach location is discussed. The research was devoted to the analysis of the ARES-1 flight during the first 120 seconds after launch and early prediction of case breach failure. During this time, the rocket is propelled by its first-stage Solid Rocket Booster (SRB). If a breach appears in the SRB case, the gases escaping through it will produce a (side) thrust directed perpendicular to the rocket axis. The side thrust creates a torque influencing the rocket attitude. The ARES-1 control system will compensate for the side thrust until it reaches some critical value, after which the flight will be uncontrollable. The objective of this work was to obtain the start time of case breach development and its location using the rocket inertial navigation sensors and GNC data. The algorithm was effective for the detection and location of a breach in an SRB field joint at an early stage of its development.
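Since the control system compensates for the side thrust, the breach betrays itself as a persistent bias in the commanded control torque. A much-simplified illustration of this detection idea (not the paper's inversion algorithm) is a persistence-checked threshold on the torque command; the threshold, persistence count, and data below are all invented.

```python
# Simplified illustration of breach onset detection: a case breach produces
# a side thrust that the control system must counteract, so a sustained bias
# in the commanded control torque indicates a breach. Threshold and data are
# hypothetical; the paper's actual inversion algorithm is more involved.

def detect_breach(control_torque, threshold, persistence=3):
    """Return the index at which the commanded torque first exceeded the
    threshold for `persistence` consecutive samples, or None."""
    run = 0
    for i, torque in enumerate(control_torque):
        run = run + 1 if abs(torque) > threshold else 0
        if run >= persistence:
            return i - persistence + 1
    return None

# Nominal noise, then a growing bias starting at sample 5 (breach onset).
torques = [0.1, -0.2, 0.1, 0.0, -0.1, 0.9, 1.4, 1.8, 2.3]
print(detect_breach(torques, threshold=0.5))  # -> 5
```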

  12. Corrosion in electronics: Overview of failures and countermeasures

    DEFF Research Database (Denmark)

    Jellesen, Morten Stendahl; Verdingovas, Vadimas; Conseil, Helene

    2014-01-01

    Many field failure returns of electronics are marked as "no failure found", yet many of these failures are likely due to corrosion, since corrosion-related failures are not easily detected during subsequent failure analysis. In some cases failures are intermittent and occur because of service life conditions (humidity and contamination) where water film formation on the printed circuit board assembly (PCBA) leads to leakage currents resulting in a wrong output signal of the electronic device. If the leakage current itself does not result in malfunctioning of the electronics, the formed water

  13. Failure in the detection of the sentinel lymph node with a combined technique of radioactive tracer and blue dye in a patient with cancer of the vulva and a single positive lymph node

    NARCIS (Netherlands)

    Fons, G.; ter Rahe, B.; Sloof, G.; de Hullu, J.; van der Velden, J.

    2004-01-01

    Background. In early stage vulvar cancer, the sentinel lymph node procedure with a radioactive tracer appears to be a promising new diagnostic tool to predict lymph node status. No detection failures have been published so far in vulvar cancer. We recently experienced failure in the detection of the

  14. Failure in the detection of the sentinel lymph node with a combined technique of radioactive tracer and blue dye in a patient with cancer of the vulva and a single positive lymph node

    NARCIS (Netherlands)

    Fons, G; ter Rahe, B; Sloof, G; de Hullu, J; van der Velden, J

    Background. In early stage vulvar cancer, the sentinel lymph node procedure with a radioactive tracer appears to be a promising new diagnostic tool to predict lymph node status. No detection failures have been published so far in vulvar cancer. We recently experienced failure in the detection of the

  15. Inverter ratio failure detector

    Science.gov (United States)

    Wagner, A. P.; Ebersole, T. J.; Andrews, R. E. (Inventor)

    1974-01-01

    A failure detector which detects the failure of a dc-to-ac inverter is disclosed. The inverter under failureless conditions is characterized by a known linear relationship between its input and output voltages and by a known linear relationship between its input and output currents. The detector includes circuitry which is responsive to the inverter's input and output voltages and which provides a failure-indicating signal only when the monitored output voltage is less, by a selected factor, than the expected output voltage for the monitored input voltage, based on the known voltage relationship. Similarly, the detector includes circuitry which is responsive to the input and output currents and provides a failure-indicating signal only when the input current exceeds, by a selected factor, the expected input current for the monitored output current, based on the known current relationship.
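The two ratio checks in the abstract can be written down directly. The linear gains and the "selected factors" below are invented for illustration; only the structure of the test (flag a failure when the measurement departs from the expected linear value by more than the factor) comes from the abstract.

```python
# Minimal sketch of the inverter ratio failure checks described above.
# Gains and factors are illustrative assumptions, not values from the patent.

V_GAIN = 10.0    # assumed relationship: expected Vout = V_GAIN * Vin
V_FACTOR = 0.8   # flag if Vout falls below 80% of the expected output

def voltage_failure(v_in, v_out):
    """Failure-indicating signal when the output voltage is too low
    relative to the expected output for the monitored input."""
    return v_out < V_FACTOR * V_GAIN * v_in

I_GAIN = 0.12    # assumed relationship: expected Iin = I_GAIN * Iout
I_FACTOR = 1.25  # flag if Iin exceeds the expected input current by 25%

def current_failure(i_in, i_out):
    """Failure-indicating signal when the input current is too high
    relative to the expected input for the monitored output."""
    return i_in > I_FACTOR * I_GAIN * i_out

print(voltage_failure(28.0, 275.0))  # False: 275 V is within 80% of 280 V
print(voltage_failure(28.0, 180.0))  # True: well below the expected output
print(current_failure(1.0, 10.0))    # False: 1.0 A does not exceed 1.5 A
```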

  16. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  17. The architecture of a reliable software monitoring system for embedded software systems

    International Nuclear Information System (INIS)

    Munson, J.; Krings, A.; Hiromoto, R.

    2006-01-01

    We develop the notion of a measurement-based methodology for embedded software systems to ensure properties of reliability, survivability and security, not only under benign faults but under malicious and hazardous conditions as well. The driving force is the need to develop a dynamic run-time monitoring system for use in these embedded mission-critical systems. These systems must run reliably, must be secure and they must fail gracefully. That is, they must continue operating in the face of departures from their nominal operating scenarios, the failure of one or more system components due to normal hardware and software faults, as well as malicious acts. To ensure the integrity of embedded software systems, the activity of these systems must be monitored as they operate. For each of these systems, it is possible to establish a very succinct representation of nominal system activity. Furthermore, it is possible to detect departures from the nominal operating scenario in a timely fashion. Such departures may be due to various circumstances, e.g., an assault from an outside agent, thus forcing the system to operate in an off-nominal environment for which it was neither tested nor certified, or a hardware/software component that has ceased to operate in a nominal fashion. A well-designed system will have the property of graceful degradation. It must continue to run even though some of the functionality may have been lost. This involves the intelligent re-mapping of system functions. Those functions that are impacted by the failure of a system component must be identified and isolated. Thus, a system must be designed so that its basic operations may be re-mapped onto system components still operational. That is, the mission objectives of the software must be reassessed in terms of the current operational capabilities of the software system. By integrating the mechanisms to support observation and detection directly into the design methodology, we propose to shift
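The "succinct representation of nominal system activity" can be illustrated with a toy profile of module execution frequencies: the monitor flags a departure when the observed profile drifts too far from the nominal one. The event names, the L1 distance measure, and the thresholds are all illustrative assumptions, not the authors' method.

```python
# Toy sketch of measurement-based run-time monitoring: build a nominal
# activity profile (relative execution frequencies of program modules) and
# flag a departure when the observed profile drifts away from it.

def profile(events):
    """Relative frequency of each event in an execution trace."""
    total = len(events)
    counts = {}
    for e in events:
        counts[e] = counts.get(e, 0) + 1
    return {e: c / total for e, c in counts.items()}

def departure(nominal, observed):
    """L1 distance between two frequency profiles (0 = identical)."""
    keys = set(nominal) | set(observed)
    return sum(abs(nominal.get(k, 0) - observed.get(k, 0)) for k in keys)

nominal = profile(["read", "filter", "log"] * 100)
ok      = profile(["read", "filter", "log"] * 90 + ["read"] * 3)
attack  = profile(["read", "exec", "exec", "exec"] * 50)

print(departure(nominal, ok) < 0.1)      # True: near-nominal activity
print(departure(nominal, attack) > 0.5)  # True: off-nominal, raise alarm
```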

  18. Architectural communication: Intra and extra activity of architecture

    Directory of Open Access Journals (Sweden)

    Stamatović-Vučković Slavica

    2013-01-01

    Full Text Available Apart from a brief overview of architectural communication viewed from the standpoint of information theory and semiotics, this paper contains two forms of dualistically viewed architectural communication. The duality denotation/connotation ("primary" and "secondary" architectural communication) is one of the semiotic postulates taken from Umberto Eco, who viewed architectural communication as a semiotic phenomenon. In addition, architectural communication can be viewed as an intra and an extra activity of architecture, where the overall activity of the edifice performed through its spatial manifestation may be understood as an act of communication. In that respect, the activity may be perceived as the "behavior of architecture", which corresponds to Lefebvre's production of space.

  19. Analyzing dynamic fault trees derived from model-based system architectures

    International Nuclear Information System (INIS)

    Dehlinger, Josh; Dugan, Joanne Bechta

    2008-01-01

    Dependability-critical systems, such as digital instrumentation and control systems in nuclear power plants, necessitate engineering techniques and tools to provide assurances of their safety and reliability. Determining system reliability at the architectural design phase is important since it may guide design decisions and provide crucial information for trade-off analysis and estimating system cost. Despite this, reliability and system engineering remain separate disciplines and engineering processes, so dependability analysis results may not represent the designed system. In this article we provide an overview and application of our approach to build architecture-based, dynamic system models for dependability-critical systems and then automatically generate Dynamic Fault Trees (DFT) for comprehensive, tool-supported reliability analysis. Specifically, we use the Architectural Analysis and Design Language (AADL) to model the structural, behavioral and failure aspects of the system in a composite architecture model. From the AADL model, we seek to derive the DFT(s) and use Galileo's automated reliability analyses to estimate system reliability. This approach alleviates the knowledge gap between dependability engineering and systems engineering, integrates the dependability and system engineering design and development processes, and enables a more formal, automated and consistent DFT construction. We illustrate this work using an example based on a dynamic digital feed-water control system for a nuclear reactor

  20. A preliminary evaluation of the generalized likelihood ratio for detecting and identifying control element failures in a transport aircraft

    Science.gov (United States)

    Bundick, W. T.

    1985-01-01

    The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
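A jammed control element typically shows up as a step bias in the Kalman filter innovations, and the Generalized Likelihood Ratio test maximizes a likelihood-ratio statistic over the unknown onset time. The sketch below implements that idea for a scalar residual sequence with assumed Gaussian noise; the noise level, threshold, and data are invented, and the paper's B-737 application is far more detailed.

```python
# Hedged sketch of a GLR test for a constant bias appearing in zero-mean
# Gaussian residuals. For a hypothesized onset k, the GLR statistic for a
# step bias reduces to (sum of the tail residuals)^2 / (sigma^2 * tail
# length); maximizing over k both detects and localizes the failure.

def glr_bias(residuals, sigma):
    """Return (max GLR statistic, most likely onset index)."""
    best, onset = 0.0, None
    n = len(residuals)
    for k in range(n):
        tail = residuals[k:]
        stat = sum(tail) ** 2 / (sigma ** 2 * len(tail))
        if stat > best:
            best, onset = stat, k
    return best, onset

clean  = [0.1, -0.2, 0.0, 0.1, -0.1, 0.2, -0.1, 0.0]
failed = clean[:4] + [r + 1.0 for r in clean[4:]]  # bias starts at sample 4

stat, onset = glr_bias(failed, sigma=0.15)
print(stat > 30, onset)  # -> True 4: detection fires and localizes the onset
```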

  1. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... proportions, to organize the process on site choosing either one-room wall components or several-room wall components, either horizontally or vertically. Combined with the seamless joint, playing with these possibilities means the new industrialized architecture can deliver variations in choice of solutions...... for retrofit design. If we add the question of the installations, e.g. ventilation, to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system-based thinking behind architectural expression....

  2. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... expression in the specific housing area. It is the aim of this article to expand the different design strategies which architects can use, to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose different...... for retrofit design. If we add the question of the installations, e.g. ventilation, to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system-based thinking behind architectural expression....

  3. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Architectural freedom and industrialized architecture. Inge Vestergaard, Associate Professor, Cand. Arch., Aarhus School of Architecture, Denmark, Noerreport 20, 8000 Aarhus C. Telephone +45 89 36 0000, E-mail inge.vestergaard@aarch.dk. Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing on a more sustainable and low-energy building technique, which also includes...... to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to broaden the different design...

  4. Analysis of fault tolerance and reliability in distributed real-time system architectures

    International Nuclear Information System (INIS)

    Philippi, Stephan

    2003-01-01

    Safety critical real-time systems are becoming ubiquitous in many areas of our everyday life. Failures of such systems potentially have catastrophic consequences on different scales, in the worst case even the loss of human life. Therefore, safety critical systems have to meet maximum fault tolerance and reliability requirements. As the design of such systems is far from being trivial, this article focuses on concepts to specifically support the early architectural design. Specifically, a simulation-based approach for the analysis of fault tolerance and reliability in distributed real-time system architectures is presented. With this approach, safety related features can be evaluated in the early development stages and thus prevent costly redesigns in later ones.

  5. eHealth integration and interoperability issues: towards a solution through enterprise architecture.

    Science.gov (United States)

    Adenuga, Olugbenga A; Kekwaletswe, Ray M; Coleman, Alfred

    2015-01-01

    Investments in healthcare information and communication technology (ICT) and health information systems (HIS) continue to increase. This creates immense pressure on healthcare ICT and HIS to deliver on, and show the significance of, such investments in technology. This study finds that integration and interoperability issues contribute largely to the failure of ICT and HIS investments in healthcare, pointing to the need for a healthcare architecture for eHealth. The study proposes an eHealth architectural model that accommodates requirements based on healthcare needs and on system, implementer, and hardware requirements. The model is adaptable and reflects the developers' and users' views that such systems have the potential to change traditional organizational design, intelligence, and decision-making.

  6. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale

    International Nuclear Information System (INIS)

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Suthakar, U; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform, designed by the Support for Distributed Computing (SDC) group at the CERN IT department, which uses a variety of technologies, each targeting specific aspects of big-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file formats (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof-of-concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures. (paper)

  7. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    Science.gov (United States)

    Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.

    2015-12-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform, designed by the Support for Distributed Computing (SDC) group at the CERN IT department, which uses a variety of technologies, each targeting specific aspects of big-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file formats (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof-of-concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures.
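
    The batch/speed/serving split described above can be sketched in miniature. This is a hedged illustration only: the record shape, site names and byte counts are invented, and plain Python dictionaries stand in for Hadoop (batch layer), Spark/Esper (speed layer) and Elasticsearch (serving layer).

```python
from collections import Counter

# Immutable master dataset: one record per transfer-log entry (invented data).
batch_logs = [
    {"site": "CERN", "bytes": 100},
    {"site": "FNAL", "bytes": 50},
    {"site": "CERN", "bytes": 200},
]

def batch_view(logs):
    """Batch layer: full recomputation over the master dataset
    (the role Hadoop MapReduce plays in the paper)."""
    view = Counter()
    for rec in logs:
        view[rec["site"]] += rec["bytes"]
    return view

class SpeedLayer:
    """Speed layer: incremental view over events that arrived after
    the last batch run (the role of Spark/Esper in the paper)."""
    def __init__(self):
        self.view = Counter()
    def ingest(self, rec):
        self.view[rec["site"]] += rec["bytes"]

def serve(batch, speed, site):
    """Serving layer: merge batch and real-time views at query time."""
    return batch[site] + speed.view[site]

batch = batch_view(batch_logs)
speed = SpeedLayer()
speed.ingest({"site": "CERN", "bytes": 25})   # arrived after the batch run
print(serve(batch, speed, "CERN"))            # 300 from batch + 25 real-time
```

    A query served this way stays correct even while the batch layer lags: the speed layer covers exactly the events the last batch run has not yet absorbed.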

  8. Ultrasonographic Findings of Mammographic Architectural Distortion

    International Nuclear Information System (INIS)

    Ma, Jeong Hyun; Kang, Bong Joo; Cha, Eun Suk; Hwangbo, Seol; Kim, Hyeon Sook; Park, Chang Suk; Kim, Sung Hun; Choi, Jae Jeong; Chung, Yong An

    2008-01-01

    To review the sonographic findings of various diseases showing architectural distortion on mammography. We collected and reviewed architectural distortions observed on mammography at our institution between 1 March 2004 and 28 February 2007. We collected 23 cases of sonographically detected mammographic architectural distortions with lesions confirmed after surgical resection. The sonographic findings of mammographic architectural distortion were analyzed using the BI-RADS lexicon for shape, margin, lesion boundary, echo pattern, posterior acoustic feature and orientation. Various diseases showed architectural distortion on mammography. Fibrocystic disease was the most common presentation (n = 6), followed by adenosis (n = 2), stromal fibrosis (n = 2), radial scar (n = 3), usual ductal hyperplasia (n = 1), atypical ductal hyperplasia (n = 1) and mild fibrosis with microcalcification (n = 1). Malignant lesions such as ductal carcinoma in situ (DCIS) (n = 2), lobular carcinoma in situ (LCIS) (n = 2), invasive ductal carcinoma (n = 2) and invasive lobular carcinoma (n = 1) were also observed. On sonography, shape was classified as irregular (n = 22) or round (n = 1). Margin was classified as circumscribed (n = 1), indistinct (n = 7), angular (n = 1), microlobulated (n = 1) or spiculated (n = 13). Lesion boundary was classified as abrupt interface (n = 11) or echogenic halo (n = 12). Echo pattern was classified as hypoechoic (n = 20), anechoic (n = 1), hyperechoic (n = 1) or isoechoic (n = 1). Posterior acoustic feature was classified as posterior acoustic enhancement (n = 7), posterior acoustic shadow (n = 15) or complex posterior acoustic feature (n = 1). Orientation was classified as parallel (n = 12) or not parallel (n = 11). There were no sonographic findings that differentiated benign from malignant lesions. This study presented various sonographic findings of mammographic architectural distortion and that it is

  9. Enterprise architecture evaluation using architecture framework and UML stereotypes

    Directory of Open Access Journals (Sweden)

    Narges Shahi

    2014-08-01

    Full Text Available There is an increasing need for enterprise architecture in numerous organizations with complicated systems and various processes, and the need for information technology support in organizational units whose elements maintain complex relationships is also increasing. Enterprise architecture is so effective that its non-use in organizations is regarded as institutional inability to manage information technology efficiently. The enterprise architecture process generally consists of three phases: strategic programming of information technology, enterprise architecture programming, and enterprise architecture implementation. Each phase must be implemented sequentially, and a single flaw in any phase may result in a flaw in the whole architecture and, consequently, in extra costs and time. If a model of the architecture is mapped and evaluated before enterprise architecture implementation in the second phase, possible flaws in the implementation process can be prevented. In this study, the processes of enterprise architecture are illustrated through UML diagrams, and the architecture is evaluated in the programming phase by transforming the UML diagrams into Petri nets. The results indicate that the high costs of the implementation phase will be reduced.

  10. Connecting a cognitive architecture to robotic perception

    Science.gov (United States)

    Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial

    2012-06-01

    We present an integrated architecture in which perception and cognition interact and provide information to each other, leading to improved performance in real-world situations. Our system integrates the Felzenszwalb et al. object-detection algorithm with the ACT-R cognitive architecture. The targeted task is to predict and classify pedestrian behavior in a checkpoint scenario, specifically to discriminate between normal and checkpoint-avoiding behavior. The Felzenszwalb algorithm is a learning-based algorithm for detecting and localizing objects in images. ACT-R is a cognitive architecture that has been successfully used to model human cognition with a high degree of fidelity on tasks ranging from basic decision-making to the control of complex systems such as driving or air traffic control. The Felzenszwalb algorithm detects pedestrians in the image and provides ACT-R with a set of features based primarily on their locations. ACT-R uses its pattern-matching capabilities, specifically its partial-matching and blending mechanisms, to track objects across multiple images and classify their behavior based on the sequence of observed features. ACT-R also provides feedback to the Felzenszwalb algorithm in the form of expected object locations, which allow the algorithm to eliminate false positives and improve its overall performance. This capability is an instance of the benefits pursued in developing a richer interaction between bottom-up perceptual processes and top-down goal-directed cognition. We trained the system on individual behaviors (only one person in the scene) and evaluated its performance across single and multiple behavior sets.

  11. Motion camera based on a custom vision sensor and an FPGA architecture

    Science.gov (United States)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC computer, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.

  12. An energy-efficient failure detector for vehicular cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excess expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to reduce the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides an acceptable failure detection service, but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
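
    The abstract does not describe 2E-FD's internals, so the sketch below shows only the generic heartbeat scheme that failure detectors of this kind build on; the class name, node identifiers and timeout value are all illustrative, not taken from the paper.

```python
import time

class HeartbeatFailureDetector:
    """Suspects a node as failed when no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}            # node id -> timestamp of last heartbeat

    def heartbeat(self, node, now=None):
        """Record a heartbeat; `now` can be injected for deterministic testing."""
        self.last_seen[node] = time.monotonic() if now is None else now

    def suspected(self, now=None):
        """Return the set of nodes whose heartbeat is overdue."""
        now = time.monotonic() if now is None else now
        return {n for n, t in self.last_seen.items() if now - t > self.timeout}

# Simulated timeline: "rsu-2" goes silent after its first heartbeat.
fd = HeartbeatFailureDetector(timeout=3.0)
fd.heartbeat("rsu-1", now=0.0)
fd.heartbeat("rsu-2", now=0.0)
fd.heartbeat("rsu-1", now=2.5)         # rsu-1 keeps reporting
print(fd.suspected(now=4.0))           # -> {'rsu-2'}
```

    An energy-aware detector such as 2E-FD would additionally tune heartbeat frequency and timeout to trade detection speed against the RSU's battery budget.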

  13. Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds

    Science.gov (United States)

    Johnson, C. E.

    2017-12-01

    Modern seismic networks present a number of challenges, but perhaps most notable are those related to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work still remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure and the addition or removal of stations, and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.

  14. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templates.

  15. Lightweight enterprise architectures

    CERN Document Server

    Theuerkorn, Fenix

    2004-01-01

    Contents: State of Architecture; Architectural Chaos; Relation of Technology and Architecture; The Many Faces of Architecture; The Scope of Enterprise Architecture; The Need for Enterprise Architecture; The History of Architecture; The Current Environment; Standardization Barriers; The Need for Lightweight Architecture in the Enterprise; The Cost of Technology; The Benefits of Enterprise Architecture; The Domains of Architecture; The Gap between Business and IT; Where Does LEA Fit?; LEA's Framework; Frameworks, Methodologies, and Approaches; The Framework of LEA; Types of Methodologies; Types of Approaches; Actual System Environmen

  16. Software architecture 1

    CERN Document Server

    Oussalah, Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templates.

  17. Cardiac architecture: Gothic versus Romanesque. A cardiologist's view.

    Science.gov (United States)

    Coghlan, H C; Coghlan, L

    2001-10-01

    The healthy left ventricle, with remarkable mechanical efficiency, has a Gothic architecture, which results from the disposition of the myocardial fibers supported and maintained by a normal collagen matrix scaffold. This conclusion, arising from the analysis of Romanesque and Gothic buildings and from the comparative biology of the left ventricles of different species, has been substantiated by the study of three-dimensional images obtained by MRI and analyzed with mathematical methods for measuring the curvature and thickness of the ventricular walls. The assessment of left ventricular functional reserve based on architecture has been very important in making therapeutic and surgical decisions in our patients, and it has important implications for the design of surgical strategies intended to improve ventricular function by restoring an architecture that allows more efficient ventricular mechanics. The structural approach, combined with important advances in the knowledge of membrane channels, signaling pathways, cytokines, growth factors, neuroregulation, and targeted pharmacology, and with advances in methods for reducing hemodynamic load and its cellular and structural consequences, is certain to bring about a dramatic change in the very serious and highly prevalent congestive failure associated with the Romanesque transformation of the diseased left ventricle. Copyright 2001 by W.B. Saunders Company

  18. Indigenous architecture as a context-oriented architecture, a look at ...

    African Journals Online (AJOL)

    What has become problematic as the achievement of international style and globalization of architecture during the time has been the purely technological look at architecture, and the architecture without belonging to a place. In recent decades, the topic of sustainable architecture and reconsidering indigenous architecture ...

  19. Analyses Of Techniques On Structural Fatigue Failure Detection ...

    African Journals Online (AJOL)

    Machines and structures are subjected to variable loading conditions where the stress cycle does not remain the same during the operation of the machine. Fatigue is undoubtedly one of the most serious of all causes of breakdowns of machines and structures which results in sudden failures. The use of the time domain ...

  20. Architecture in the Islamic Civilization: Muslim Building or Islamic Architecture

    OpenAIRE

    Yassin, Ayat Ali; Utaberta, Dr. Nangkula

    2012-01-01

    The main problem of theory in the arena of Islamic architecture is that it is affected by Western thought and by stereotyping of Islamic architecture according to Western thought; this leads to the breakdown of the foundations of Islamic architecture. It is a myth that Islamic architecture is subject to the influence of foreign architectures. This paper will highlight the dialectical concept of Islamic architecture or Muslim buildings and the areas of recognition in Islamic architec...

  1. Reasons for Implementing Movement in Kinetic Architecture

    Science.gov (United States)

    Cudzik, Jan; Nyka, Lucyna

    2017-10-01

    The paper gives insights into different forms of movement in contemporary architecture and examines them based on the reasons for their implementation. The main objective of the paper is to determine the degree to which the complexity of kinematic architecture results from functional and spatial needs, and what other motivations there are. The method adopted to investigate these questions involves theoretical studies and comparative analyses of architectural objects with different forms of movement embedded in their structure. Using both methods allowed delving into the reasons that lie behind the implementation of movement in contemporary kinetic architecture. As the research shows, there is a constantly growing range of applications with kinematic solutions inserted in buildings’ structures. The reasons for their implementation are manifold and encompass pursuits of functional qualities, environmental performance, spatial effects, social interactions and new aesthetics. In early projects based on simple mechanisms, the main motives were focused on functional values, and in later experiments on improving buildings’ environmental performance. Additionally, in recent proposals a significant trend can be detected toward kinematic solutions focused on factors related to alternative aesthetics and innovative spatial effects. The research reveals that the more complicated the form of movement, the more often the reason for its implementation goes beyond the traditionally understood “function”. However, the research also shows that the effects resulting from investigations of the spatial qualities of architecture and new aesthetics often appear to provide creative insights into new functionalities in architecture.

  2. Cooperative distributed architecture for mashups

    Science.gov (United States)

    Al-Haj Hassan, Osama Mohammad; Ramaswamy, Lakshmish; Hamad, Fadi; Abu Taleb, Anas

    2014-05-01

    Since the advent of Web 2.0, personalised applications such as mashups have become widely popular. Mashups enable end-users to fetch data from distributed data sources and refine it based on their personal needs. This high degree of personalisation that mashups offer comes at the expense of performance and scalability. These scalability challenges are exacerbated by the centralised architectures of current mashup platforms. In this paper, we address the performance and scalability issues by designing CoMaP - a distributed mashup platform. CoMaP's architecture comprises several cooperative mashup processing nodes distributed over the Internet, upon which mashups can be fully or partially executed. CoMaP incorporates a dynamic and efficient scheme for deploying mashups on the processing nodes. Our scheme considers a number of parameters such as variations in link delays and bandwidths, and loads on mashup processing nodes. CoMaP includes effective and low-cost mechanisms for balancing loads on the processing nodes as well as for handling node failures. Furthermore, we propose novel techniques that leverage keyword synonyms, ontologies and caching to enhance end-user experience. This paper reports several experiments to comprehensively study CoMaP's performance. The results demonstrate CoMaP's benefits as a scalable distributed mashup platform.

  3. Fuzzy logic prioritization of failures in a system failure mode, effects and criticality analysis

    International Nuclear Information System (INIS)

    Bowles, John B.; Pelaez, C.E.

    1995-01-01

    This paper describes a new technique, based on fuzzy logic, for prioritizing failures for corrective actions in a Failure Mode, Effects and Criticality Analysis (FMECA). As in a traditional criticality analysis, the assessment is based on the severity, frequency of occurrence, and detectability of an item failure. However, these parameters are here represented as members of a fuzzy set, combined by matching them against rules in a rule base, evaluated with min-max inferencing, and then defuzzified to assess the riskiness of the failure. This approach resolves some of the problems in traditional methods of evaluation and has several advantages compared to strictly numerical methods: 1) it allows the analyst to evaluate the risk associated with item failure modes directly, using the linguistic terms that are employed in making the criticality assessment; 2) ambiguous, qualitative, or imprecise information, as well as quantitative data, can be used in the assessment and is handled in a consistent manner; and 3) it gives a more flexible structure for combining the severity, occurrence, and detectability parameters. Two fuzzy logic based approaches for assessing criticality are presented. The first is based on the numerical rankings used in a conventional Risk Priority Number (RPN) calculation and uses crisp inputs gathered from the user or extracted from a reliability analysis. The second, which can be used early in the design process when less detailed information is available, allows fuzzy inputs and also illustrates the direct use of the linguistic rankings defined for the RPN calculations.
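
    The min-max inference pipeline the authors describe (fuzzify the three inputs, fire rules with min, aggregate with max, then defuzzify) can be sketched as follows. The membership partitions, the tiny worst-of-the-three rule base and the representative risk values are invented for illustration and are not the paper's.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets for severity, occurrence and detectability on a 1-10 scale
# (illustrative partitions, not the paper's).
LOW, MED, HIGH = "low", "med", "high"
sets = {LOW: (0, 1, 5), MED: (2, 5, 8), HIGH: (5, 10, 11)}

def fuzzify(x):
    return {name: tri(x, *abc) for name, abc in sets.items()}

def rules(sev, occ, det):
    """Toy rule base: risk follows the worst of the three inputs.
    Firing strength uses min inference; outputs aggregate with max."""
    strengths = {LOW: 0.0, MED: 0.0, HIGH: 0.0}
    for s in sets:
        for o in sets:
            for d in sets:
                firing = min(sev[s], occ[o], det[d])           # min inference
                out = max((s, o, d), key=[LOW, MED, HIGH].index)
                strengths[out] = max(strengths[out], firing)   # max aggregation
    return strengths

def defuzzify(strengths):
    """Weighted average of representative risk values (centroid-like)."""
    rep = {LOW: 2.0, MED: 5.0, HIGH: 9.0}
    num = sum(strengths[k] * rep[k] for k in strengths)
    den = sum(strengths.values())
    return num / den if den else 0.0

# Severity 6, occurrence 3, detectability 2 (crisp inputs, as in approach 1).
risk = defuzzify(rules(fuzzify(6), fuzzify(3), fuzzify(2)))
print(round(risk, 2))
```

    With these inputs the "medium" rules dominate while a weak "high" rule also fires, so the defuzzified risk lands between the medium and high representative values rather than at a single crisp RPN.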

  4. Failure Mode and Effect Analysis for Wind Turbine Systems in China

    DEFF Research Database (Denmark)

    Zhu, Jiangsheng; Ma, Kuichao; N. Soltani, Mohsen

    2017-01-01

    This paper discusses a cost-based Failure Mode and Effect Analysis (FMEA) approach for Wind Turbines (WT) with condition monitoring systems in China. Traditional FMEA uses the Risk Priority Number (RPN) to rank failure modes, but the RPN can change when Condition Monitoring Systems (CMS) are installed, due to the change in the detection score. The cost of each failure mode should also be considered, because faults can be detected at an incipient level and condition-based maintenance can be scheduled. The results show that the proposed failure mode priorities considering their cost consequences......

  5. On experiments to detect possible failures on relativity theory

    International Nuclear Information System (INIS)

    Rodrigues Junior, W.A.; Tiomno, J.

    1982-01-01

    Conditions under which a failure of Einstein's relativity might be expected are analysed. A complete analysis of an experiment recently proposed by Kolen and Torr is also given, showing that it must yield a negative result. (Author) [pt

  6. Sensor Fault Detection and Diagnosis for autonomous vehicles

    Directory of Open Access Journals (Sweden)

    Realpe Miguel

    2015-01-01

    Full Text Available In recent years testing autonomous vehicles on public roads has become a reality. However, before having autonomous vehicles completely accepted on the roads, they have to demonstrate safe operation and reliable interaction with other traffic participants. Furthermore, in real situations and long term operation, there is always the possibility that diverse components may fail. This paper deals with possible sensor faults by defining a federated sensor data fusion architecture. The proposed architecture is designed to detect obstacles in an autonomous vehicle’s environment while detecting a faulty sensor using SVM models for fault detection and diagnosis. Experimental results using sensor information from the KITTI dataset confirm the feasibility of the proposed architecture to detect soft and hard faults from a particular sensor.
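
    A minimal sketch of the federated idea follows: each sensor's reading is checked against the consensus of its peers, suspected-faulty sensors are excluded, and the rest are fused. A simple residual-against-consensus test stands in for the paper's trained SVM fault models, and the sensor names, readings and threshold are invented.

```python
from statistics import median

def detect_faulty(readings, threshold):
    """Flag sensors whose reading deviates from the consensus (median)
    by more than `threshold`.  The paper uses trained SVM models for
    this step; a residual test stands in for them here."""
    consensus = median(readings.values())
    return {s for s, r in readings.items() if abs(r - consensus) > threshold}

def fuse(readings, threshold=2.0):
    """Federated fusion: drop suspected-faulty sensors, average the rest."""
    faulty = detect_faulty(readings, threshold)
    good = [r for s, r in readings.items() if s not in faulty]
    return sum(good) / len(good), faulty

# Lidar and both cameras agree on an obstacle ~10 m away; the radar is stuck.
readings = {"lidar": 10.1, "camera_l": 9.8, "camera_r": 10.3, "radar": 55.0}
distance, faulty = fuse(readings)
print(round(distance, 2), faulty)   # fused distance plus the flagged sensor
```

    The federated structure matters here: the fault decision is made per sensor before fusion, so one stuck sensor cannot drag the fused obstacle estimate away from the agreeing majority.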

  7. Detection of intra-cardiac thrombi and congestive heart failure in cats using computed tomographic angiography.

    Science.gov (United States)

    Vititoe, Kyle P; Fries, Ryan C; Joslyn, Stephen; Selmic, Laura E; Howes, Mark; Vitt, Jordan P; O'Brien, Robert T

    2018-04-16

    Arterial thromboembolism is a life-threatening condition in cats most commonly secondary to cardiac disease. Echocardiography is the reference standard to evaluate for presence of a thrombus. In humans, computed tomographic (CT) angiography is becoming widely used to detect left atrial thrombi precluding the use of sedation. The purpose of this prospective, controlled, methods comparison pilot study was threefold: (1) describe new CT angiography protocol used in awake cats with cardiac disease and congestive heart failure; (2) determine accuracy of continuous and dynamic acquisition CT angiography to identify and characterize cardiac thrombi from spontaneous echocardiographic contrast using transthoracic echocardiography as our reference standard; (3) identify known negative prognostic factors and comorbidities of the thorax that CT angiography may provide that complement or supersede echocardiographic examination. Fourteen cats with heart disease were recruited; 7 with thrombi and 7 with spontaneous echocardiographic contrast. Echocardiography and awake CT angiography were performed using a microdose of contrast. Six of 7 thrombi were identified on CT angiography as filling defects by at least one reviewer within the left auricle (n = 6) and right heart (n = 1). Highest sensitivity (71.4%) was in continuous phase and highest specificity (85.7%) was in dynamic studies with fair to moderate interobserver agreement (0.38 and 0.44). CT angiography identified prognostic cardiac information (left atrial enlargement, congestive heart failure, arterial thromboembolism) and comorbidities (suspected idiopathic pulmonary fibrosis, asthma). This study indicates CT angiography can readily identify cardiac thrombi, important prognostic information and comorbidities, and can be safely performed in cats with cardiac disease and congestive heart failure. © 2018 American College of Veterinary Radiology.

  8. Novel Detection Method for Consecutive DC Commutation Failure Based on Daubechies Wavelet with 2nd-Order Vanishing Moments

    Directory of Open Access Journals (Sweden)

    Tao Lin

    2018-01-01

    Full Text Available Accurate detection and an effective control strategy for commutation failure (CF) of high voltage direct current (HVDC) systems are of great significance for the safe and stable operation of the hybrid power grid. First, a novel detection method for consecutive CF is proposed: by analyzing the physical characteristics of the converter station's direct current waveform during CF, the 2nd- and higher-order derivative values of the direct current are identified as the core criterion for judging CF. Then, the Daubechies wavelet coefficient that represents the 2nd- and higher-order derivative values of the direct current is derived. Once the wavelet coefficients of the sampling points are detected to exceed a threshold, the occurrence of CF is confirmed. Furthermore, an additional emergency control strategy that prevents subsequent CF by instantly increasing the advance firing angle β on the inverter side is proposed. Finally, simulations on the benchmark model verify the effectiveness and superiority, in accuracy and rapidity, of the proposed detection method and additional control strategy.
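
    The link between Daubechies wavelets with two vanishing moments and higher-order variation can be sketched directly: the db2 detail filter annihilates constant and linear segments of a waveform and responds only where the signal changes abruptly. The synthetic current waveform and threshold below are illustrative, not the paper's benchmark model, and for simplicity the convolution is not downsampled as a true DWT would be.

```python
import math

# Daubechies db2 low-pass coefficients; the matching high-pass (detail)
# filter has two vanishing moments, so its output is zero on constant
# and linear trends and spikes on abrupt changes.
s3, s = math.sqrt(3), 4 * math.sqrt(2)
h = [(1 + s3) / s, (3 + s3) / s, (3 - s3) / s, (1 - s3) / s]
g = [h[3], -h[2], h[1], -h[0]]          # quadrature mirror relation

def detail_coeffs(x):
    """Level-1 db2 detail coefficients (valid convolution, no downsampling)."""
    return [sum(g[k] * x[n + k] for k in range(4)) for n in range(len(x) - 3)]

def detect_cf(current, threshold):
    """Report sample indices where |detail coefficient| exceeds the threshold
    -- the abstract's criterion for declaring a commutation failure."""
    return [n for n, d in enumerate(detail_coeffs(current)) if abs(d) > threshold]

# Synthetic DC current: steady 1.0 p.u., then a sudden collapse at sample 20
# (the kind of abrupt change a commutation failure produces).
current = [1.0] * 20 + [0.2] * 20
print(detect_cf(current, threshold=0.1))   # -> [17, 18, 19]
```

    Because the filter's two vanishing moments zero out constants and linear ramps, only the windows straddling the collapse produce large coefficients, which is why a simple threshold on them localizes the fault.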

  9. Open architecture design and approach for the Integrated Sensor Architecture (ISA)

    Science.gov (United States)

    Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael

    2015-05-01

    Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.

  10. Analysis of Architecture Pattern Usage in Legacy System Architecture Documentation

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris

    2008-01-01

    Architecture patterns are an important tool in architectural design. However, while many architecture patterns have been identified, there is little in-depth understanding of their actual use in software architectures. For instance, there is no overview of how many patterns are used per system or

  11. On the Architectural Engineering Competences in Architectural Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2007-01-01

    In 1997 a new education in Architecture & Design at Department of Architecture and Design, Aalborg University was started with 50 students. During the recent years this number has increased to approximately 100 new students each year, i.e. approximately 500 students are following the 3-year bachelor (BSc) and the 2-year master (MSc) programme. The first 5 semesters are common for all students followed by 5 semesters with specialization into Architectural Design, Urban Design, Industrial Design or Digital Design. The present paper gives a short summary of the architectural engineering...

  12. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  13. A CMOS self-powered front-end architecture for subcutaneous event-detector devices

    CERN Document Server

    Colomer-Farrarons, Jordi

    2011-01-01

    A CMOS Self-Powered Front-End Architecture for Subcutaneous Event-Detector Devices presents the conception and prototype realization of a Self-Powered architecture for subcutaneous detector devices. The architecture is designed to work as a true/false (event detector) or threshold level alarm of some substances, ions, etc. that are detected through a three-electrode amperometric BioSensor approach. The device is conceived as a Low-Power subcutaneous implantable application powered by an inductive link, with one emitter antenna at the external side of the skin and the receiver antenna under the skin.

  14. From Requirements to code: an Architecture-centric Approach for producing Quality Systems

    OpenAIRE

    Bucchiarone, Antonio; Di Ruscio, Davide; Muccini, Henry; Pelliccione, Patrizio

    2009-01-01

    When engineering complex and distributed software and hardware systems (increasingly used in many sectors, such as manufacturing, aerospace, transportation, communication, energy, and health-care), quality has become a big issue, since failures can have economic consequences and can also endanger human life. Model-based specifications of a component-based system permit explicit modelling of the structure and behaviour of components and their integration. In particular Software Architectures (S...

  15. A modular microfluidic architecture for integrated biochemical analysis.

    Science.gov (United States)

    Shaikh, Kashan A; Ryu, Kee Suk; Goluch, Edgar D; Nam, Jwa-Min; Liu, Juewen; Thaxton, C Shad; Chiesl, Thomas N; Barron, Annelise E; Lu, Yi; Mirkin, Chad A; Liu, Chang

    2005-07-12

    Microfluidic laboratory-on-a-chip (LOC) systems based on a modular architecture are presented. The architecture is conceptualized on two levels: a single-chip level and a multiple-chip module (MCM) system level. At the individual chip level, a multilayer approach segregates components belonging to two fundamental categories: passive fluidic components (channels and reaction chambers) and active electromechanical control structures (sensors and actuators). This distinction is explicitly made to simplify the development process and minimize cost. Components belonging to these two categories are built separately on different physical layers and can communicate fluidically via cross-layer interconnects. The chip that hosts the electromechanical control structures is called the microfluidic breadboard (FBB). A single LOC module is constructed by attaching a chip comprised of a custom arrangement of fluid routing channels and reactors (passive chip) to the FBB. Many different LOC functions can be achieved by using different passive chips on an FBB with a standard resource configuration. Multiple modules can be interconnected to form a larger LOC system (MCM level). We demonstrated the utility of this architecture by developing systems for two separate biochemical applications: one for detection of protein markers of cancer and another for detection of metal ions. In the first case, free prostate-specific antigen was detected at 500 aM concentration by using a nanoparticle-based bio-bar-code protocol on a parallel MCM system. In the second case, we used a DNAzyme-based biosensor to identify the presence of Pb(2+) (lead) at a sensitivity of 500 nM in <1 nl of solution.

  16. How organisation of architecture documentation affects architectural knowledge retrieval

    NARCIS (Netherlands)

    de Graaf, K.A.; Liang, P.; Tang, A.; Vliet, J.C.

    A common approach to software architecture documentation in industry projects is the use of file-based documents. This approach offers a single-dimensional arrangement of the architectural knowledge. Knowledge retrieval from file-based architecture documentation is efficient if the organisation of

  17. REQUIREMENTS PATTERNS FOR FORMAL CONTRACTS IN ARCHITECTURAL ANALYSIS AND DESIGN LANGUAGE (AADL) MODELS

    Science.gov (United States)

    2017-04-17

    ...because of simultaneous failures in two of the aircraft's braking systems. The architecture of the primary Braking System Control Unit (BSCU) is... ...a component of the overall Flight Control System (FCS) that compares the measured state of an aircraft (position, speed, and attitude) to the... Keywords: Cyberphysical Systems, Formal Methods, Requirements Patterns, AADL, Assume Guarantee Reasoning Environment

  18. Application of data mining in a maintenance system for failure prediction

    OpenAIRE

    Bastos, Pedro; Lopes, Isabel; Pires, L.C.M.

    2014-01-01

    In industrial environment, data generated during equipment maintenance and monitoring activities has become increasingly overwhelming. Data mining presents an opportunity to increase significantly the rate at which the volume of data can be turned into useful information. This paper presents an architecture designed to gather data generated in industrial units on their maintenance activities, and to forecast future failures based on data analysis. Rapid Miner is used to apply diff...

  19. Palliative Care in Heart Failure

    Directory of Open Access Journals (Sweden)

    Hatice Mert

    2012-04-01

    Full Text Available Heart failure is an important health problem, since its incidence and prevalence are increasing year by year. Since symptom burden and mortality are high in heart failure, supportive and palliative care should be provided. However, very few patients are referred to palliative care services. In comparison with cancer patients, it is difficult to identify end-of-life care for patients with heart failure, because these patients are hospitalized when the signs of acute decompensation appear, and their symptoms decrease and functional status improves before they are discharged. Therefore, palliative care, which is a holistic approach aiming to improve patients’ quality of life, to detect and treat the attacks of the disease before they become severe, and to deal with patients’ physical, psychological, social, and mental health altogether during their care, should be integrated into heart failure patients’ care. [TAF Prev Med Bull 2012; 11(2): 217-222]

  20. ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1995-02-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.

  1. Capital Architecture: Situating symbolism parallel to architectural methods and technology

    Science.gov (United States)

    Daoud, Bassam

    Capital Architecture is a symbol of a nation's global presence and the cultural and social focal point of its inhabitants. Since the advent of High-Modernism in Western cities, and subsequently decolonised capitals, civic architecture no longer seems to be strictly grounded in the philosophy that national buildings shape the legacy of government and the way a nation is regarded through its built environment. Amidst an exceedingly globalized architectural practice and with the growing concern of key heritage foundations over the shortcomings of international modernism in representing its immediate socio-cultural context, the contextualization of public architecture within its sociological, cultural and economic framework in capital cities became the key denominator of this thesis. Civic architecture in capital cities is essential to confront the challenges of symbolizing a nation and demonstrating the legitimacy of the government. In today's dominantly secular Western societies, governmental architecture, especially where the seat of political power lies, is the ultimate form of architectural expression in conveying a sense of identity and underlining a nation's status. Departing from these convictions, this thesis investigates the embodied symbolic power, the representative capacity, and the inherent permanence in contemporary architecture, and in its modes of production. Through a vast study on Modern architectural ideals and heritage -- in parallel to methodologies -- the thesis stimulates the future of large scale governmental building practices and aims to identify and index the key constituents that may respond to the lack of representation in civic architecture in capital cities.

  2. Software architecture evolution

    DEFF Research Database (Denmark)

    Barais, Olivier; Le Meur, Anne-Francoise; Duchien, Laurence

    2008-01-01

    Software architectures must frequently evolve to cope with changing requirements, and this evolution often implies integrating new concerns. Unfortunately, when the new concerns are crosscutting, existing architecture description languages provide little or no support for this kind of evolution. The software architect must modify multiple elements of the architecture manually, which risks introducing inconsistencies. This chapter provides an overview, comparison and detailed treatment of the various state-of-the-art approaches to describing and evolving software architectures. Furthermore, we discuss one particular framework named TranSAT, which addresses the above problems of software architecture evolution. TranSAT provides a new element in the software architecture description language, called an architectural aspect, for describing new concerns and their integration into an existing...

  3. Architectural design decisions

    NARCIS (Netherlands)

    Jansen, Antonius Gradus Johannes

    2008-01-01

    A software architecture can be considered as the collection of key decisions concerning the design of the software of a system. Knowledge about this design, i.e. architectural knowledge, is key for understanding a software architecture and thus the software itself. Architectural knowledge is mostly

  4. A new texture descriptor based on local micro-pattern for detection of architectural distortion in mammographic images

    Science.gov (United States)

    de Oliveira, Helder C. R.; Moraes, Diego R.; Reche, Gustavo A.; Borges, Lucas R.; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.

    2017-03-01

    This paper presents a new local micro-pattern texture descriptor for the detection of Architectural Distortion (AD) in digital mammography images. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect compared to microcalcifications and masses, and is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for automatic detection of AD, but their performance is still unsatisfactory. The proposed descriptor, Local Mapped Pattern (LMP), is a generalization of the Local Binary Pattern (LBP), which is considered one of the most powerful feature descriptors for texture classification in digital images. Compared to LBP, the LMP descriptor captures more effectively the minor differences between the local image pixels. Moreover, LMP is a parametric model which can be optimized for the desired application. In our work, the LMP performance was compared to the LBP and four Haralick's texture descriptors for the classification of 400 regions of interest (ROIs) extracted from clinical mammograms. ROIs were selected and divided into four classes: AD, normal tissue, microcalcifications and masses. Feature vectors were used as input to a multilayer perceptron neural network, with a single hidden layer. Results showed that LMP is a good descriptor to distinguish AD from other anomalies in digital mammography. LMP performance was slightly better than the LBP and comparable to Haralick's descriptors (mean classification accuracy = 83%).
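    For reference, the classic LBP that LMP generalizes can be sketched in a few lines of NumPy. This is a baseline sketch, not the paper's LMP descriptor: the 3x3 neighbourhood, bit ordering, and 256-bin histogram below are conventional LBP choices rather than details taken from the abstract.

    ```python
    import numpy as np

    def lbp_image(img):
        # 8-neighbour Local Binary Pattern: each neighbour >= centre pixel
        # contributes one bit of an 8-bit code, computed per interior pixel
        img = np.asarray(img, dtype=float)
        c = img[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros(c.shape, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy:img.shape[0] - 1 + dy,
                     1 + dx:img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(np.uint8) << bit
        return code

    def lbp_histogram(img):
        # normalized 256-bin histogram of codes: the texture feature vector
        h = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
        return h / h.sum()
    ```

    The resulting histogram is what would feed the classifier; LMP replaces the hard binary comparison with a parametric mapping of local differences.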

  5. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    Science.gov (United States)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level

  6. Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance

    Science.gov (United States)

    Marquez, Jessica J.; Ramirez, Margarita

    2014-01-01

    A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in levels of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting failure frequency has a significant effect on pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher level of failures, regardless of level of automation.
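    The two derived measures are standard signal-detection-theory quantities computed from hit and false-alarm rates. A minimal sketch follows; the log-linear correction (so that perfect hit or false-alarm rates do not produce infinite z-scores) is an assumption, since the abstract does not state how edge cases were handled.

    ```python
    from statistics import NormalDist

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        # d' = z(hit rate) - z(false-alarm rate): sensitivity to failures
        # c  = -(z(hit rate) + z(false-alarm rate)) / 2: response bias
        hr = (hits + 0.5) / (hits + misses + 1.0)  # log-linear correction
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf
        return z(hr) - z(far), -(z(hr) + z(far)) / 2.0
    ```

    A pilot with many hits and few false alarms gets a large d' (good failure detection) and a criterion near zero (no bias toward reporting or withholding failure diagnoses).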

  7. Minimalism in architecture: Architecture as a language of its identity

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2012-01-01

    Full Text Available Every architectural work is created on a principle that includes a meaning, and the work is then read as an artifact of that particular meaning. The resources by which the meaning is primarily built, susceptible to transformation, as well as the routing of understanding (decoding the messages carried by a work of architecture), are the subject of semiotics and communication theories, which have played a significant role for architecture and the architect. Minimalism in architecture, as a paradigm of XXI-century architecture, means searching for the essence located in the irreducible minimum. The inspired use of architectural units (archetypal elements), through the phantasm of simplicity, assumes the primary responsibility for providing the object's identity, because it participates in the formation of its language and therefore in its reading. Volume is formed by a clean language that builds the expression of fluid areas liberated of recharge needs. The reduced architectural language is appropriate to an age marked by electronic communications.

  8. Decrease the Number of Glovebox Glove Breaches and Failures

    Energy Technology Data Exchange (ETDEWEB)

    Hurtle, Jackie C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2013-12-24

    Los Alamos National Laboratory (LANL) is committed to the protection of the workers, public, and environment while performing work and uses gloveboxes as engineered controls to protect workers from exposure to hazardous materials while performing plutonium operations. Glovebox gloves are a weak link in the engineered controls and are a major cause of radiation contamination events which can result in potential worker exposure and localized contamination making operational areas off-limits and putting programmatic work on hold. Each day of lost opportunity at Technical Area (TA) 55, Plutonium Facility (PF) 4 is estimated at $1.36 million. Between July 2011 and June 2013, TA-55-PF-4 had 65 glovebox glove breaches and failures with an average of 2.7 per month. The glovebox work follows the five step safety process promoted at LANL with a decision diamond interjected for whether or not a glove breach or failure event occurred in the course of performing glovebox work. In the event that no glove breach or failure is detected, there is an additional decision for whether or not contamination is detected. In the event that contamination is detected, the possibility for a glove breach or failure event is revisited.

  9. Low power test architecture for dynamic read destructive fault detection in SRAM

    Science.gov (United States)

    Takher, Vikram Singh; Choudhary, Rahul Raj

    2018-06-01

    Dynamic Read Destructive Fault (dRDF) is the outcome of resistive open defects in the core cells of static random-access memories (SRAMs). Sensitising a dRDF involves either performing multiple read operations or creating a number of read equivalent stresses (RES) on the core cell under test. Though the creation of RES is preferred over performing multiple read operations on the core cell, the cell dissipates more power during RES than during a read or write operation. This paper focuses on reducing power dissipation by optimising the number of RESs required to sensitise the dRDF during the test mode of SRAM operation. A novel pre-charge architecture is proposed to reduce power dissipation by limiting the number of RESs to an optimised number of two. The proposed low-power architecture is simulated and analysed, showing a reduction in power dissipation of up to 18.18% through the reduced number of RESs.

  10. Hybrid architecture for building secure sensor networks

    Science.gov (United States)

    Owens, Ken R., Jr.; Watkins, Steve E.

    2012-04-01

    Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns for sensor networks. The first area is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes that is designed to communicate to low-cost cloud computing infrastructure in the network is the foundation to delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme, and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This function will primarily be accomplished through encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components to the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.

  11. Architectural Narratives

    DEFF Research Database (Denmark)

    Kiib, Hans

    2010-01-01

    In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as “hybrid cultural projects,” because they intend to combine experience with entertainment, play, and learning. This essay... a functional framework for these concepts, but tries increasingly to endow the main idea of the cultural project with a spatially aesthetic expression - a shift towards “experience architecture.” A great number of these projects typically recycle and reinterpret narratives related to historical buildings and architectural heritage; another group tries to embed new performative technologies in expressive architectural representation. Finally, this essay provides a theoretical framework for the analysis of the political rationales of these projects and for the architectural representation bridges the gap between...

  12. A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution

    Science.gov (United States)

    Musani, Aatif

    The subject of this thesis is distribution level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a specialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled, therefore all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy consumption error. This leads to one of the main focuses of the thesis, which is to identify and study component failures of the designed control system. Moreover, the H-infinity robust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attack, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes and the application of the robust control technique. The EMS was represented by a linear z-domain model. The transfer function between the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection methodologies were implemented and a voting policy was used to declare bad data. The running mean and standard deviation analysis method proves to be
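    One of the methodologies named here — the running mean and standard deviation check — can be sketched as follows. This is an illustrative reconstruction: the window length, threshold multiple, and price series are hypothetical, and the voting step across the three detectors is omitted.

    ```python
    import numpy as np

    def running_mean_std_flags(prices, window=24, k=3.0):
        # flag a price sample as bad data when it lies more than k running
        # standard deviations from the mean of the previous `window` samples
        # (the first `window` samples have no history and are never flagged)
        prices = np.asarray(prices, dtype=float)
        flags = np.zeros(len(prices), dtype=bool)
        for i in range(window, len(prices)):
            w = prices[i - window:i]
            mu, sigma = w.mean(), w.std()
            if sigma > 0 and abs(prices[i] - mu) > k * sigma:
                flags[i] = True
        return flags
    ```

    In a voting scheme, the boolean output of this detector would be combined with the other two methodologies before a sample is finally declared bad.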

  13. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
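    The feature extraction step described — separate the spike at its peak and use each portion's area — reduces to a few lines. This is a sketch of the idea only: using the largest-magnitude sample as the peak and summed absolute amplitudes as "area" are assumptions about details the abstract leaves open, and the hardware's segmented/merged computation is not modeled.

    ```python
    import numpy as np

    def peak_area_features(spike):
        # split the waveform at its largest-magnitude sample and return the
        # area (sum of absolute amplitudes) of each portion as a 2-D feature
        spike = np.asarray(spike, dtype=float)
        p = int(np.argmax(np.abs(spike)))
        return np.abs(spike[:p]).sum(), np.abs(spike[p:]).sum()
    ```

    The appeal for a VLSI implementation is clear from the sketch: peak search and area accumulation are both single-pass operations that can run concurrently on streamed samples.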

  14. Functional microimaging. A hierarchical investigation of bone failure behavior

    International Nuclear Information System (INIS)

    Voide, Romain; Lenthe, G.Harry van; Stauber, Martin; Schneider, Philipp; Thurner, Philipp J.; Mueller, Ralph; Wyss, Peter; Stampanoni, Marco

    2008-01-01

    Biomechanical testing is the gold standard to determine bone competence, and has been used extensively. Direct mechanical testing provides detailed information on overall bone mechanical and material properties, but fails to reveal local properties such as local deformations and strains, and does not permit quantification of fracture progression. Therefore, we incorporated several imaging methods in our mechanical setups to get a better insight into bone deformation and failure characteristics at various levels of structural organization. Our aim was to develop an integrative approach for hierarchical investigation of bone, working at different scales of resolution ranging from the whole bone to its ultrastructure. Inbred strains of mice make useful models to study bone properties. In this study, we concentrated on C57BL/6 (B6) and C3H/He (C3H) mice, two strains known for their differences in bone phenotype. At the macroscopic level, we used high-resolution and high-speed cameras, which allowed us to visualize global failure behavior and fracture initiation with high temporal resolution. This image data proved especially important when dealing with small bones such as murine femora. At the microscopic level, bone microstructure, i.e. trabecular architecture and cortical porosity, is known to influence bone strength and failure mechanisms significantly. For this reason, we developed an image-guided failure assessment technique, also referred to as functional microimaging, allowing direct time-lapsed three-dimensional visualization and computation of local displacements and strains for better quantification of fracture initiation and progression. While the resolution of conventional desktop micro-computed tomography is typically around a few micrometers, computed tomography systems based on highly brilliant synchrotron radiation X-ray sources permit exploration of the sub-micrometer world. This allowed, for the first time, to uncover fully nondestructively the 3D

  15. Architecture & Environment

    Science.gov (United States)

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  16. Exporting Humanist Architecture

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2016-01-01

    The article is a chapter in the catalogue for the Danish exhibition at the 2016 Architecture Biennale in Venice. The catalogue is conceived as an independent book exploring the theme Art of Many - The Right to Space. The chapter is an essay in this anthology tracing and discussing the different… values and ethical stands involved in the export of Danish architecture. Abstract: Danish architecture has, in a sense, been driven by an unwritten contract between the architects and the democratic state and its institutions. This contract may be viewed as an ethos – an architectural tradition… with inherent aesthetic and moral values. Today, however, Danish architecture is also an export commodity. That raises questions, which should be debated as openly as possible. What does it mean for architecture and architects to practice in cultures and under political systems that do not use architecture…

  17. VISUALIZATION SKILLS FOR THE NEW ARCHITECTURAL FORMS

    Directory of Open Access Journals (Sweden)

    Khaled Nassar

    2010-07-01

    physical model and vice versa through a number of progressively timed attempts. Results on the average time to success, failure rates, manipulation rate as well as manipulation rate to total time are calculated and analyzed. Statistical analysis of the outcomes was conducted and the results and conclusions of these experiments are presented in this paper along with limitations of the experiments and suggestions for future research. The results should be of interest to architectural educators and architects concerned with the effect of computer technology on the design process, as well as the future of manual skills in our design studios.

  18. Sensor failure and multivariable control for airbreathing propulsion systems. Ph.D. Thesis - Dec. 1979 Final Report

    Science.gov (United States)

    Behbehani, K.

    1980-01-01

    A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. The failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed, and the effects of model degradation are studied.
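    The core of the detection phase is a GLR test for an abrupt change in sensor residuals. As a minimal illustration (the residual signal, bias magnitude and threshold below are synthetic assumptions, not values from the thesis), a hard-over failure can be detected as a step change in the mean of zero-mean Gaussian residuals:

```python
import numpy as np

def glr_mean_shift(residuals, sigma, threshold):
    """GLR test for a step (hard-over) change in the mean of residuals.

    Scans every candidate onset k and evaluates the log-likelihood ratio
    for a mean jump starting at sample k, assuming N(0, sigma^2) residuals
    under the no-failure hypothesis. Returns (detected, k_hat, max_glr)."""
    r = np.asarray(residuals, dtype=float)
    best_k, best_glr = -1, 0.0
    for k in range(len(r)):
        seg = r[k:]
        glr = seg.sum() ** 2 / (2.0 * sigma ** 2 * len(seg))
        if glr > best_glr:
            best_glr, best_k = glr, k
    return best_glr > threshold, best_k, best_glr

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.1, 200)
residuals[120:] += 0.8          # hypothetical hard-over sensor bias at sample 120
detected, k_hat, stat = glr_mean_shift(residuals, sigma=0.1, threshold=10.0)
```

Here `k_hat` estimates the failure onset; in the thesis, the maximum over the three GLR detectors additionally isolates the failure type.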

  19. Fragments of Architecture

    DEFF Research Database (Denmark)

    Bang, Jacob Sebastian

    2016-01-01

    Topic 3: “Case studies dealing with the artistic and architectural work of architects worldwide, and the ties between specific artistic and architectural projects, methodologies and products”

  20. A Reconfigurable Design and Architecture of the Ethernet and HomePNA3.0 MAC

    Science.gov (United States)

    Khalilydermany, M.; Hosseinghadiry, M.

    In this paper, a reconfigurable architecture for the Ethernet and HomePNA MAC is presented. Using this new architecture, a reconfigurable Ethernet/HomePNA network card can be produced. The architecture has been implemented in VHDL and then synthesized on a chip. The differences between HomePNA (synchronized and unsynchronized mode) and Ethernet in the collision detection mechanism and in priority access to the medium would normally require separate architectures, but by exploiting their similarities, both Ethernet and HomePNA can be implemented on a single chip with a little extra hardware. The number of logic elements of the proposed architecture increases by only 19% compared to an implementation of the Ethernet MAC alone.

  1. Distributed Prognostics and Health Management with a Wireless Network Architecture

    Science.gov (United States)

    Goebel, Kai; Saha, Sankalita; Sha, Bhaskar

    2013-01-01

    A heterogeneous set of system components monitored by a varied suite of sensors and a particle-filtering (PF) framework, with the power and the flexibility to adapt to different diagnostic and prognostic needs, has been developed. Both the diagnostic and prognostic tasks are formulated as a particle-filtering problem in order to explicitly represent and manage uncertainties in state estimation and remaining-life estimation. Current state-of-the-art prognostic health management (PHM) systems are mostly centralized in nature, where all the processing relies on a single processor. This can lead to a loss of functionality if the central processor or monitor crashes. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unwieldy to deploy successfully, and efficient distributed architectures can be more beneficial. The distributed health management architecture comprises a network of smart sensor devices. These devices monitor the health of various subsystems or modules. They perform diagnostic operations and trigger prognostic operations based on user-defined thresholds and rules. The sensor devices, called computing elements (CEs), consist of a sensor, or set of sensors, and a communication device (i.e., a wireless transceiver alongside an embedded processing element). A CE runs in either a diagnostic or a prognostic operating mode. The diagnostic mode is the default mode, in which a CE monitors a given subsystem or component through a lightweight diagnostic algorithm. If a CE detects a critical condition during monitoring, it raises a flag. Depending on the availability of resources, a networked local cluster of CEs is formed that then carries out prognostics and fault mitigation by efficient distribution of the tasks. It should be noted that the CEs are expected not to suspend their previous tasks in the prognostic mode. When the
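    The particle-filtering formulation can be sketched in a few lines. The toy example below (the drift model and all parameter values are assumptions, not taken from the described system) shows a bootstrap particle filter tracking a slowly degrading damage state, the kind of estimate a CE could use for prognostics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical degradation process: damage state drifting upward, observed noisily.
T, drift, q, r = 60, 0.1, 0.02, 0.2
true_x = np.cumsum(np.full(T, drift) + rng.normal(0.0, q, T))
obs = true_x + rng.normal(0.0, r, T)

# Bootstrap particle filter: predict with the process model, weight particles
# by the Gaussian measurement likelihood, then resample.
n = 2000
particles = np.zeros(n)
estimates = []
for y in obs:
    particles = particles + drift + rng.normal(0.0, q, n)   # predict step
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)           # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(n, size=n, p=w)]       # resample step
    estimates.append(particles.mean())
```

Remaining-life estimation would extrapolate the same particle cloud forward until it crosses a failure threshold.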

  2. Enterprise architecture management

    DEFF Research Database (Denmark)

    Rahimi, Fatemeh; Gøtze, John; Møller, Charles

    2017-01-01

    Despite the growing interest in enterprise architecture management, researchers and practitioners lack a shared understanding of its applications in organizations. Building on findings from a literature review and eight case studies, we develop a taxonomy that categorizes applications of enterprise architecture management based on three classes of enterprise architecture scope. Organizations may adopt enterprise architecture management to help form, plan, and implement IT strategies; help plan and implement business strategies; or to further complement the business strategy-formation process. The findings challenge the traditional IT-centric view of enterprise architecture management application and suggest enterprise architecture management as an approach that could support the consistent design and evolution of an organization as a whole.

  4. The Architecture and Administration of the ATLAS Online Computing System

    CERN Document Server

    Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The needs of the ATLAS experiment at the upcoming LHC accelerator at CERN, in terms of data transmission rates and processing power, require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree-like structure and clients booted from the network. For security reasons, the system should be accessible only through an application gateway and, to ensure the autonomy of the system, the network services should be provided internally by dedicated machines in synchronization with the central services of CERN's IT department. The paper describes a small-scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

  5. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wu, Yifu [University of Akron; Wei, Jin [University of Akron

    2017-07-31

    Distributed Energy Resources (DERs) are increasingly accepted as an excellent complement to traditional energy sources in smart grids. As most of these generators are geographically dispersed, dedicated communications investments for every generator are prohibitive in capital cost. Real-time distributed communications middleware, which supervises, organizes and schedules the tremendous amount of data traffic in smart grids with high penetrations of DERs, allows the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates congestion attacks by exploiting Quality of Experience (QoE) measures to complement conventional Quality of Service (QoS) information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  6. Investigation of Effectiveness of Some Vibration-Based Techniques in Early Detection of Real-Time Fatigue Failure in Gears

    Directory of Open Access Journals (Sweden)

    Hasan Ozturk

    2010-01-01

    Full Text Available Bending fatigue crack is a dangerous and insidious mode of failure in gears. As it produces no debris in its early stages, it gives little warning during its progression and usually results in either immediate loss of serviceability or greatly reduced power-transmitting capacity. This paper presents the application of vibration-based techniques (i.e. conventional time and frequency domain analysis, cepstrum, and the continuous wavelet transform) to real gear vibrations for the early detection, diagnosis and advancement monitoring of a real tooth fatigue crack, and compares their detection and diagnostic capabilities on the basis of experimental results. Gear fatigue damage is achieved under heavy-loading conditions and the gearbox is allowed to run until the gears suffer complete tooth breakage. It has been found that the initiation and progression of a fatigue crack cannot be easily detected by conventional time and frequency domain approaches until the fault is significantly developed. On the contrary, the wavelet transform is quite sensitive to any change in gear vibration and reveals fault features earlier than the other methods considered.
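    To illustrate why the wavelet transform reacts to an incipient crack earlier than averaged spectra, the numpy-only sketch below (a hand-rolled Ricker-wavelet CWT on a synthetic signal, not the authors' processing chain) localizes an impulsive fault signature buried in a steady gear-mesh tone:

```python
import numpy as np

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet sampled on `points` samples, width `a`
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

def cwt(signal, scales):
    # Continuous wavelet transform via direct convolution, one row per scale
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        out[i] = np.convolve(signal, ricker(min(10 * a, len(signal)), a), mode="same")
    return out

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)                # steady gear-mesh tone
x[700] += 5.0                                 # impulsive crack signature at sample 700
coeffs = cwt(x, scales=[2, 4, 8])
impact = int(np.abs(coeffs[0]).argmax())      # the small scale localizes the impulse
```

The averaged FFT of `x` barely changes, while the small-scale CWT row pinpoints the transient in time.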

  7. Health-enabling technologies for pervasive health care: on services and ICT architecture paradigms.

    Science.gov (United States)

    Haux, Reinhold; Howe, Jurgen; Marschollek, Michael; Plischke, Maik; Wolf, Klaus-Hendrik

    2008-06-01

    Progress in information and communication technologies (ICT) is providing new opportunities for pervasive health care services in aging societies. The objectives are to identify starting points of health-enabling technologies for pervasive health care and to describe typical services of, and contemporary ICT architecture paradigms for, pervasive health care, summarizing outcomes of literature analyses and results from our own research projects in this field. Basic functions for pervasive health care with respect to home care comprise emergency detection and alarm, disease management, as well as health status feedback and advice. These functions are complemented by optional (non-health care) functions. Four major paradigms for contemporary ICT architectures are person-centered ICT architectures, home-centered ICT architectures, telehealth service-centered ICT architectures and health care institution-centered ICT architectures. Health-enabling technologies may lead to both new ways of living and new ways of health care. Both ways are interwoven. This has to be considered for appropriate ICT architectures of sensor-enhanced health information systems. IMIA, the International Medical Informatics Association, may be an appropriate forum for interdisciplinary research exchange on health-enabling technologies for pervasive health care.

  8. Modeling Architectural Patterns’ Behavior Using Architectural Primitives

    NARCIS (Netherlands)

    Waqas Kamal, Ahmad; Avgeriou, Paris

    2008-01-01

    Architectural patterns have an impact on both the structure and the behavior of a system at the architecture design level. However, it is challenging to model patterns’ behavior in a systematic way because modeling languages do not provide the appropriate abstractions and because each pattern

  9. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    Science.gov (United States)

    Solomon, D.; van Dijk, A.

    The "2002 ESA Lunar Architecture Workshop" (June 3-16) ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL) is the first-of-its-kind workshop for exploring the design of extra-terrestrial (infra) structures for human exploration of the Moon and Earth-like planets introducing 'architecture's current line of research', and adopting an architec- tural criteria. The workshop intends to inspire, engage and challenge 30-40 European masters students from the fields of aerospace engineering, civil engineering, archi- tecture, and art to design, validate and build models of (infra) structures for Lunar exploration. The workshop also aims to open up new physical and conceptual terrain for an architectural agenda within the field of space exploration. A sound introduc- tion to the issues, conditions, resources, technologies, and architectural strategies will initiate the workshop participants into the context of lunar architecture scenarios. In my paper and presentation about the development of the ideology behind this work- shop, I will comment on the following questions: * Can the contemporary architectural agenda offer solutions that affect the scope of space exploration? It certainly has had an impression on urbanization and colonization of previously sparsely populated parts of Earth. * Does the current line of research in architecture offer any useful strategies for com- bining scientific interests, commercial opportunity, and public space? What can be learned from 'state of the art' architecture that blends commercial and public pro- grammes within one location? * Should commercial 'colonisation' projects in space be required to provide public space in a location where all humans present are likely to be there in a commercial context? Is the wave in Koolhaas' new Prada flagship store just a gesture to public space, or does this new concept in architecture and shopping evolve the public space? 
* What can we learn about designing (infra-) structures on the Moon or any other

  10. Comparison of different modelling approaches of drive train temperature for the purposes of wind turbine failure detection

    Science.gov (United States)

    Tautz-Weinert, J.; Watson, S. J.

    2016-09-01

    Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures, using information from other sensors, can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus here is on automated training and testing at the farm level to enable an on-line system which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
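    A normal behaviour model based on linear combinations can be sketched as an ordinary least-squares fit on healthy data, with alarms raised on residual drift. All signals, coefficients and thresholds below are synthetic assumptions, not values from the case study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SCADA signals: bearing temperature driven by ambient temperature
# and produced power, with 3 K of extra frictional heating late in the test set.
n_train, n_test = 500, 200
ambient = rng.uniform(0.0, 25.0, n_train + n_test)
power = rng.uniform(100.0, 2000.0, n_train + n_test)
temp = 30.0 + 0.8 * ambient + 0.005 * power + rng.normal(0.0, 0.5, n_train + n_test)
temp[n_train + 100:] += 3.0                  # simulated wear process

# Normal behaviour model: linear combination fitted on healthy training data.
X = np.column_stack([np.ones_like(ambient), ambient, power])
coef, *_ = np.linalg.lstsq(X[:n_train], temp[:n_train], rcond=None)

residual = temp[n_train:] - X[n_train:] @ coef
healthy_alarm = residual[:100].mean() > 1.5  # healthy window: no alarm expected
worn_alarm = residual[100:].mean() > 1.5     # degraded window: alarm expected
```

The artificial-neural-network variants in the paper replace the least-squares fit while keeping the same residual-alarm logic.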

  11. Detection of mechanical failures in induction motors by current spectrum analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokansky, K; Novak, P; Bilos, J; Labaj, J [Technical University Ostrava, Moraviasilesian Power Stations s.h.c. (Czech Republic)

    1998-12-31

    From the diagnostic point of view, an electric machine can be understood as an electromechanical system. This means that mechanical failures need not manifest themselves only in mechanical quantities, i.e. vibration in our case; they can also manifest themselves in electrical quantities, namely in the electric current. This statement is also valid inversely, which means that faults occurring in electric circuits can be measured through mechanical quantities. This presentation deals with measuring the current spectra of induction motors with short-circuited armatures, the drives most widely used in industry. (orig.) 3 refs.
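    As a toy version of current spectrum analysis, the sketch below synthesizes a stator current containing the (1 ± 2s)·f sideband pattern typical of rotor faults and recovers the sidebands from the FFT. The slip value, amplitudes and detection threshold are illustrative assumptions, not measurements from the presentation:

```python
import numpy as np

fs, dur, f_supply = 1000.0, 5.0, 50.0
t = np.arange(0.0, dur, 1.0 / fs)

# Hypothetical stator current: supply fundamental plus the (1 ± 2s)·f sidebands
# that broken rotor bars induce (slip s = 0.04 gives 46 Hz and 54 Hz here).
current = (np.sin(2 * np.pi * f_supply * t)
           + 0.05 * np.sin(2 * np.pi * 46.0 * t)
           + 0.05 * np.sin(2 * np.pi * 54.0 * t))

spectrum = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# Flag spectral lines above a noise floor, excluding the fundamental itself.
peaks = freqs[(spectrum > 0.01) & (np.abs(freqs - f_supply) > 1.0)]
```

In practice the sideband amplitudes sit far below the fundamental, so a long acquisition (here 5 s, giving 0.2 Hz resolution) is needed to resolve them.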

  13. Rogue AP Detection in the Wireless LAN for Large Scale Deployment

    OpenAIRE

    Sang-Eon Kim; Byung-Soo Chang; Sang Hong Lee; Dae Young Kim

    2006-01-01

    The wireless LAN standard, also known as WiFi, has begun to be used for commercial purposes. This paper describes an access network architecture of wireless LAN for large-scale deployment to provide public service. A metro Ethernet and digital subscriber line access network can be used for wireless LAN with access points. In this network architecture, the access point acts as the interface between wireless nodes and the network infrastructure. It is important to maintain the access points without any failures and problems to...

  14. Methodical Design of Software Architecture Using an Architecture Design Assistant (ArchE)

    Science.gov (United States)

    2005-04-01

    Felix Bachmann and Mark Klein, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213-3890. Dates covered: 00-00-2005 to 00-00-2005. Quality requirements and constraints are most important for architecture design. Here's some evidence: If the only concern is

  15. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in its use of computational resources on the selected FPGA development platform, because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory-efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform are presented in this paper. This is accomplished by proposing a new memory-efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory-efficient architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering-based motion detection architecture. The new memory-efficient system robustly and automatically detects motion in real-world scenarios (both for static backgrounds and pseudo-stationary backgrounds) in real time for standard PAL (720 × 576) color video.
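    The paper's clustering-based scheme is not reproduced in this record, but the general idea of background modelling for motion detection can be sketched in software as a running-average background with thresholded differencing (a generic illustration on synthetic PAL-sized frames, not the proposed memory-efficient architecture):

```python
import numpy as np

def motion_step(bg, frame, alpha=0.05, thresh=25):
    """One step of running-average background subtraction: returns the
    binary motion mask and the updated uint8 background model."""
    mask = np.abs(frame.astype(np.int16) - bg.astype(np.int16)) > thresh
    bg = ((1.0 - alpha) * bg + alpha * frame).astype(np.uint8)
    return mask, bg

# Synthetic PAL-sized grayscale frames: a bright block moving across a dark scene.
h, w = 576, 720
bg = np.zeros((h, w), np.uint8)
for step in range(5):
    frame = np.zeros((h, w), np.uint8)
    frame[100:140, 50 + 40 * step: 90 + 40 * step] = 200   # moving object
    mask, bg = motion_step(bg, frame)
```

A hardware realization stores one background word per pixel, which is exactly the memory budget such architectures try to compress.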

  16. Steam generator tube failures

    International Nuclear Information System (INIS)

    MacDonald, P.E.; Shah, V.N.; Ward, L.W.; Ellison, P.G.

    1996-04-01

    A review and summary of the available information on steam generator tubing failures and the impact of these failures on plant safety is presented. The following topics are covered: pressurized water reactor (PWR), Canadian deuterium uranium (CANDU) reactor, and Russian water moderated, water cooled energy reactor (VVER) steam generator degradation, PWR steam generator tube ruptures, the thermal-hydraulic response of a PWR plant with a faulted steam generator, the risk significance of steam generator tube rupture accidents, tubing inspection requirements and fitness-for-service criteria in various countries, and defect detection reliability and sizing accuracy. A significant number of steam generator tubes are defective and are removed from service or repaired each year. This widespread damage has been caused by many diverse degradation mechanisms, some of which are difficult to detect and predict. In addition, spontaneous tube ruptures have occurred at the rate of about one every 2 years over the last 20 years, and incipient tube ruptures (tube failures usually identified with leak detection monitors just before rupture) have been occurring at the rate of about one per year. These ruptures have caused complex plant transients which have not always been easy for the reactor operators to control. Our analysis shows that if more than 15 tubes rupture during a main steam line break, the system response could lead to core melting. Although spontaneous and induced steam generator tube ruptures are small contributors to the total core damage frequency calculated in probabilistic risk assessments, they are risk significant because the radionuclides are likely to bypass the reactor containment building. The frequency of steam generator tube ruptures can be significantly reduced through appropriate and timely inspections and repairs or removal from service

  17. An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks

    Directory of Open Access Journals (Sweden)

    Jian Dong

    2014-09-01

    Full Text Available The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, the complexity of the network and high churn lead to a high message loss rate. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been employed widely in many P2P applications; however, Chen’s classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed, and on this basis an adaptive failure detector (B-AFD) is proposed, which can meet quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen’s model. Moreover, B-AFD has better adaptability to P2P networks.
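    Chen's adaptive model, which the paper takes as its point of departure, estimates the next heartbeat's expected arrival from recent inter-arrival times and adds a safety margin before suspecting a peer. A minimal sketch (the window size and margin below are illustrative assumptions; B-AFD itself additionally models retransmissions):

```python
class HeartbeatFailureDetector:
    """Adaptive heartbeat failure detector in the spirit of Chen's model:
    the expected arrival of the next heartbeat is the last arrival plus the
    mean of recent inter-arrival times; a peer is suspected once the current
    time exceeds that estimate plus a safety margin alpha."""

    def __init__(self, alpha=0.2, window=10):
        self.alpha = alpha
        self.window = window
        self.arrivals = []

    def heartbeat(self, t):
        # Record an arrival timestamp, keeping only the sliding window.
        self.arrivals.append(t)
        self.arrivals = self.arrivals[-self.window:]

    def suspect(self, now):
        # Suspect the peer if `now` is past the freshness point.
        if len(self.arrivals) < 2:
            return False
        gaps = [b - a for a, b in zip(self.arrivals, self.arrivals[1:])]
        expected = self.arrivals[-1] + sum(gaps) / len(gaps)
        return now > expected + self.alpha

fd = HeartbeatFailureDetector(alpha=0.2)
for t in range(10):            # regular heartbeats at t = 0, 1, ..., 9 seconds
    fd.heartbeat(float(t))
```

With heartbeats every second, the freshness point after the last arrival at t = 9 is 10.2 s, so the detector stays quiet at 10.1 s and suspects the peer at 10.5 s; a larger alpha trades detection time for fewer false suspicions.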

  19. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to the building physics problems a new industrialized period has started, based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design… to this systematic thinking of the building technique we get a diverse and functional architecture, creating a new and clearer storytelling about the new and smart system-based thinking behind the architectural expression.

  20. Preemptive Architecture: Explosive Art and Future Architectures in Cursed Urban Zones

    Directory of Open Access Journals (Sweden)

    Stahl Stenslie

    2017-04-01

    Full Text Available This article describes the art and architectural research project Preemptive Architecture, which uses artistic strategies and approaches to create bomb-ready architectural structures that act as instruments for the undoing of violence in war. Increasing environmental usability through destruction represents an inverse strategy that reverses common thinking patterns about warfare, art and architecture. Building structures predestined for a constructive destruction becomes a creative act. One of the main motivations behind this paper is to challenge and expand the material thinking as well as the socio-political conditions related to artistic, architectural and design-based practices. Article received: December 12, 2016; Article accepted: January 10, 2017; Published online: April 20, 2017. Original scholarly paper. How to cite this article: Stenslie, Stahl, and Magne Wiggen. "Preemptive Architecture: Explosive Art and Future Architectures in Cursed Urban Zones." AM Journal of Art and Media Studies 12 (2017): 29-39. doi: 10.25038/am.v0i12.165

  1. Evaluation of I and C architecture alternatives required for the Jupiter Icy Moons Orbiter (JIMO) reactor

    International Nuclear Information System (INIS)

    Muhlheim, M. D.; Wood, R. T.; Bryan, W. L.; Wilson Jr, T. L.; Holcomb, D. E.; Korsah, K.; Jagadish, U.

    2006-01-01

    This paper discusses alternative architectural considerations for instrumentation and control (I and C) systems in high-reliability applications to support remote, autonomous, inaccessible nuclear reactors, such as a space nuclear power plant (SNPP) for mission electrical power and space exploration propulsion. This work supported the pre-conceptual design of the reactor control system for the Jupiter Icy Moons Orbiter (JIMO) mission. Long-term continuous operation without intermediate maintenance cycles forces consideration of alternatives to commonly used active, N-multiple redundancy techniques for high-availability systems. Long space missions, where mission duration can exceed the 50% reliability limit of constituent components, can make active, N-multiple redundant systems less reliable than simplex systems. To extend a control system lifetime beyond the 50% reliability limits requires incorporation of passive redundancy of functions. Time-dependent availability requirements must be factored into the use of combinations of active and passive redundancy techniques for different mission phases. Over the course of a 12 to 20-year mission, reactor control, power conversion, and thermal management system components may fail, and the I and C system must react and adjust to accommodate these failures and protect non-failed components to continue the mission. This requires architectural considerations to accommodate partial system failures and to adapt to multiple control schemes according to the state of non-failed components without going through a complete shutdown and restart cycle. Relevant SNPP I and C architecture examples provide insights into real-time fault tolerance and long-term reliability and availability beyond time periods normally associated with terrestrial power reactor I and C systems operating cycles. I and C architectures from aerospace systems provide examples of highly reliable and available control systems associated with short- and long

  2. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
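A minimal sketch of the two ingredients the abstract names — peak-split area features and NEO-based detection. The paper's segment-wise local/global feature merging and the VLSI datapath are not reproduced here; this only shows the arithmetic behind the features.

```python
import numpy as np

def peak_area_features(spike):
    """Split a spike waveform at its peak sample and use the area
    (sum of absolute amplitudes) of each portion as a 2-D feature."""
    spike = np.asarray(spike, dtype=float)
    p = int(np.argmax(np.abs(spike)))        # peak search
    left = np.abs(spike[:p + 1]).sum()       # area up to and including the peak
    right = np.abs(spike[p + 1:]).sum()      # area after the peak
    return left, right

def neo(x):
    """Non-linear Energy Operator: psi[n] = x[n]^2 - x[n-1] * x[n+1],
    commonly thresholded to detect spikes in the raw trace."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]
```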

  3. Enterprise architecture patterns practical solutions for recurring IT-architecture problems

    CERN Document Server

    Perroud, Thierry

    2013-01-01

    Every enterprise architect faces similar problems when designing and governing the enterprise architecture of a medium to large enterprise. Design patterns are a well-established concept in software engineering, used to define universally applicable solution schemes. By applying this approach to enterprise architectures, recurring problems in the design and implementation of enterprise architectures can be solved over all layers, from the business layer to the application and data layer down to the technology layer. Inversini and Perroud describe patterns at the level of enterprise architecture

  4. MUF architecture /art London

    DEFF Research Database (Denmark)

    Svenningsen Kajita, Heidi

    2009-01-01

    On MUF architecture, including an interview with Liza Fior and Katherine Clarke, partners in muf architecture/art.

  5. Control System Architectures, Technologies and Concepts for Near Term and Future Human Exploration of Space

    Science.gov (United States)

    Boulanger, Richard; Overland, David

    2004-01-01

    Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates of as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware to software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.

  6. Desmin loss and mitochondrial damage precede left ventricular systolic failure in volume overload heart failure.

    Science.gov (United States)

    Guichard, Jason L; Rogowski, Michael; Agnetti, Giulio; Fu, Lianwu; Powell, Pamela; Wei, Chih-Chang; Collawn, James; Dell'Italia, Louis J

    2017-07-01

    Heart failure due to chronic volume overload (VO) in rats and humans is characterized by disorganization of the cardiomyocyte desmin/mitochondrial network. Here, we tested the hypothesis that desmin breakdown is an early and continuous process throughout VO. Male Sprague-Dawley rats had aortocaval fistula (ACF) or sham surgery and were examined 24 h and 4 and 12 wk later. Desmin/mitochondrial ultrastructure was examined by transmission electron microscopy (TEM) and immunohistochemistry (IHC). Protein and kinome analysis were performed in isolated cardiomyocytes, and desmin cleavage was assessed by mass spectrometry in left ventricular (LV) tissue. Echocardiography demonstrated a 40% decrease in the LV mass-to-volume ratio with spherical remodeling at 4 wk with ACF and LV systolic dysfunction at 12 wk. Starting at 24 h and continuing to 4 and 12 wk, with ACF there is TEM evidence of extensive mitochondrial clustering, IHC evidence of disorganization associated with desmin breakdown, and desmin protein cleavage verified by Western blot analysis and mass spectrometry. IHC results revealed that ACF cardiomyocytes at 4 and 12 wk had perinuclear translocation of αB-crystallin from the Z disk with increased α,β-unsaturated aldehyde 4-hydroxynonenal. Use of protein markers with verification by TUNEL staining and kinome analysis revealed an absence of cardiomyocyte apoptosis at 4 and 12 wk of ACF. Significant increases in protein indicators of mitophagy were countered by a sixfold increase in p62/sequestosome-1, which is indicative of an inability to complete autophagy. An early and continuous disruption of the desmin/mitochondrial architecture, accompanied by oxidative stress and inhibition of apoptosis and mitophagy, suggests its causal role in LV dilatation and systolic dysfunction in VO. NEW & NOTEWORTHY This study provides new evidence of early onset (24 h) and continuous (4-12 wk) desmin misarrangement and disruption of the normal sarcomeric and mitochondrial

  7. Program computes single-point failures in critical system designs

    Science.gov (United States)

    Brown, W. R.

    1967-01-01

    Computer program analyzes the designs of critical systems that will either prove the design is free of single-point failures or detect each member of the population of single-point failures inherent in a system design. This program should find application in the checkout of redundant circuits and digital systems.
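The analysis this record describes can be sketched as exhaustive fault injection: fail each component alone and test a boolean system-success function; components whose sole failure defeats the system are its single-point failures. The component names and success function below are hypothetical.

```python
def single_point_failures(components, system_ok):
    """Return every component whose sole failure brings the system down.
    `system_ok` maps a set of failed component names to True/False."""
    return [c for c in components if not system_ok({c})]

# Hypothetical example: two redundant power supplies feed one shared bus.
def system_ok(failed):
    supply = ("psu_a" not in failed) or ("psu_b" not in failed)
    bus = "bus" not in failed
    return supply and bus

print(single_point_failures(["psu_a", "psu_b", "bus"], system_ok))
# -> ['bus']
```

Proving a design free of single-point failures then amounts to this list coming back empty; the original program also handled digital circuits, which this sketch abstracts into the success function.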

  8. VERNACULAR ARCHITECTURE: AN INTRODUCTORY COURSE TO LEARN ARCHITECTURE IN INDIA

    Directory of Open Access Journals (Sweden)

    Miki Desai

    2010-07-01

    Full Text Available “The object in view of both my predecessors in office and by myself has been rather to bring out the reasoning powers of individual students, so that they may understand the inner meaning of the old forms and their original function and may develop and modernize and gradually produce an architecture, Indian in character, but at the same time as suited to present day India as the old styles were to their own times and environment.” Claude Batley, 1940; Lang, Desai, Desai, 1997 (p. 143). The article introduces the teaching philosophy, content and method of Basic Design I and II for first year students of architecture at the Faculty of Architecture, Centre for Environmental Planning and Technology (CEPT) University, Ahmedabad, India. It is framed within the Indian perspective of architectural education from the British colonial times. Commencing with important academic literature and biases of the initial colonial period, it quickly traces architectural education in CEPT, the sixteenth school of post-independent India, set up in 1962, discussing the foundation year teaching imparted. The school was Modernist and avant-garde. The author introduced these two courses against the backdrop of the Universalist Modernist credo of architecture and education. In the courses, the primary philosophy behind learning design emerges from the heuristic method. The aim of the first course is seen as infusing interest in the visual world, and the development of manual skills and dexterity through the dictum of ‘Look-feel-reason out-evaluate’ and ‘observe-record-interpret-synthesize-transform-express’. Due to the lack of architectural orientation in Indian schooling, the second course assumes vernacular architecture as a reasonable tool for a novice to understand the triangular relationship of society, architecture and physical context and its impact on design. The students are analytically exposed to the regional variety of architectures logically stemming from the geo

  9. Architecture Descriptions. A Contribution to Modeling of Production System Architecture

    DEFF Research Database (Denmark)

    Jepsen, Allan Dam; Hvam, Lars

    a proper understanding of the architecture phenomenon and the ability to describe it in a manner that allow the architecture to be communicated to and handled by stakeholders throughout the company. Despite the existence of several design philosophies in production system design such as Lean, that focus...... a diverse set of stakeholder domains and tools in the production system life cycle. To support such activities, a contribution is made to the identification and referencing of production system elements within architecture descriptions as part of the reference architecture framework. The contribution...

  10. Failure rate of piping in hydrogen sulphide systems

    International Nuclear Information System (INIS)

    Hare, M.G.

    1993-08-01

    The objective of this study is to provide information about piping failures in hydrogen sulphide service that could be used to establish failure rates for piping in 'sour service'. Information obtained from the open literature, various petrochemical industries and the Bruce Heavy Water Plant (BHWP) was used to quantify the failure analysis data. On the basis of this background information, conclusions from the study and recommendations for measures that could reduce the frequency of failures for piping systems at heavy water plants are presented. In general, BHWP staff should continue carrying out their present integrity and leak detection programmes. The failure rate used in the safety studies for the BHWP appears to be based on the rupture statistics for pipelines carrying sweet natural gas. The failure rate should be based on the rupture rate for sour gas lines, adjusted for the unique conditions at Bruce.

  11. U.S. Department Of Energy's nuclear engineering education research: highlights of recent and current research-III. 4. Early Detection of Plant Equipment Failures: A Case Study in Just-In-Time Maintenance

    International Nuclear Information System (INIS)

    Parlos, Alexander G.; Kim, Kyusung; Bharadwaj, Raj M.

    2001-01-01

    Approximately 60% of all incipient electric motor failures are attributed to mechanical and electromechanical causes, whereas 33% of all motor failures are attributed to faults related to motor winding insulation. There has been much research reported on the detection and diagnosis of incipient motor failures. The most widely accepted approach for detection of mechanical failures is vibration monitoring, whereas motor current monitoring is used for electromechanical faults such as broken rotor bars and end-rings. In this paper, the development and testing of a model-based fault detection system for electric motors is briefly presented. In particular, the presented fault detection system has been developed using only motor nameplate information. Furthermore, the fault detection results presented utilize only motor voltage and current sensor information, minimizing the need for expensive or intrusive sensors. In this study, dynamic recurrent neural networks are used to predict the input-output response of a three-phase induction motor while using an estimate of the motor speed signal. Accurate state filtering of the motor speed using only electrical measurements is feasible, and it has been demonstrated in other recent publications. The developed input-output motor model requires no knowledge of the motor specifics; rather, only motor nameplate information is used. The resulting model appears very effective in accurately predicting the dynamic behavior of the nonlinear motor system to varying supply unbalance and load levels. The motor model is then used to generate the residuals needed in the fault diagnosis system. Following the residual generation step, fault detection must be pursued by appropriately processing the residuals. It is common to first extract features characteristic of the faults being investigated prior to attempting fault detection.
In this study, multi-resolution (or wavelet) signal-processing techniques are used in combination with more traditional
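The residual-processing step can be sketched generically. The paper generates residuals with a recurrent-network motor model and processes them with wavelet features; the sketch below substitutes a plain windowed-RMS threshold (with illustrative parameter values) to show only the detection logic, not the authors' method.

```python
import numpy as np

def residual_alarm(measured, predicted, window=50, k=4.0):
    """Flag a fault when the windowed RMS of the residual exceeds k times
    its RMS on the first `window` samples, assumed fault-free baseline
    with a nonzero noise floor. Thresholds here are illustrative."""
    r = np.asarray(measured, float) - np.asarray(predicted, float)
    baseline = np.sqrt(np.mean(r[:window] ** 2))
    # Windowed mean of squared residuals via convolution, then RMS
    rms = np.sqrt(np.convolve(r ** 2, np.ones(window) / window, mode="valid"))
    return rms > k * baseline
```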

  12. Software fault detection and recovery in critical real-time systems: An approach based on loose coupling

    International Nuclear Information System (INIS)

    Alho, Pekka; Mattila, Jouni

    2014-01-01

    Highlights: •We analyze fault tolerance in mission-critical real-time systems. •Decoupled architectural model can be used to implement fault tolerance. •Prototype implementation for remote handling control system and service manager. •Recovery from transient faults by restarting services. -- Abstract: Remote handling (RH) systems are used to inspect, make changes to, and maintain components in the ITER machine and as such are an example of a mission-critical system. Failure in a critical system may cause damage, significant financial losses and loss of experiment runtime, making dependability one of their most important properties. However, even if the software for RH control systems has been developed using best practices, the system might still fail due to undetected faults (bugs), hardware failures, etc. Critical systems therefore need the capability to tolerate faults and resume operation after their occurrence. However, design of effective fault detection and recovery mechanisms poses a challenge due to timeliness requirements, growth in scale, and complex interactions. In this paper we evaluate the effectiveness of a service-oriented architectural approach to fault tolerance in mission-critical real-time systems. We use a prototype implementation for service management with an experimental RH control system and industrial manipulator. The fault tolerance is based on using the high level of decoupling between services to recover from transient faults by service restarts. In case the recovery process is not successful, the system can still be used if the fault was not in a critical software module.
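The restart-on-missed-heartbeat mechanism can be sketched as a toy supervisor; the class and service names are hypothetical illustrations, not the ITER prototype's API.

```python
import time

class ServiceManager:
    """Toy supervisor: services report heartbeats; a service that stays
    silent longer than `timeout` seconds is restarted, which recovers
    from transient faults as long as the fault is not persistent."""

    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_seen = {}   # service name -> last heartbeat time
        self.restarts = {}    # service name -> restart count

    def heartbeat(self, name, now=None):
        self.last_seen[name] = time.monotonic() if now is None else now

    def check(self, now=None):
        """Restart every overdue service; return cumulative restart counts."""
        now = time.monotonic() if now is None else now
        for name, seen in self.last_seen.items():
            if now - seen > self.timeout:
                self.restarts[name] = self.restarts.get(name, 0) + 1
                self.last_seen[name] = now   # model the restart as succeeding
        return dict(self.restarts)
```

A real implementation would also escalate (fail over, degrade, or alert) when restarts do not clear the fault, mirroring the paper's fallback to operating without the failed non-critical module.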

  14. Architecture and Film

    OpenAIRE

    Mohammad Javaheri, Saharnaz

    2016-01-01

    Film does not exist without architecture. In every movie that has ever been made throughout history, the cinematic image of architecture is embedded within the picture. Throughout my studies and research, I began to see that there is no director who can consciously or unconsciously deny the use of architectural elements in his or her movies. Architecture offers a strong profile to distinguish characters and story. In the early days, films were shot in streets surrounde...

  15. Fuel rod failure detection method and system

    International Nuclear Information System (INIS)

    Assmann, H.; Janson, W.; Stehle, H.; Wahode, P.

    1975-01-01

    The inventor claims a method for the detection of a defective fuel rod cladding tube or of inleaked water in the cladding tube of a fuel rod in the fuel assembly of a pressurized-water reactor. The fuel assembly is not disassembled but examined as a whole. In the examination, the cladding tube is heated near one of its two end plugs, e.g. with an attached high-frequency inductor. The water contained in the cladding tube evaporates, and steam bubbles or a condensate are detected by the ultrasonic impulse-echo method. It is also possible to measure the delay of the temperature rise at the end plug or to determine the cooling energy required to keep the end plug temperature stable and thus to detect water ingression. (DG/AK) [de

  16. Space and place concepts analysis based on semiology approach in residential architecture

    Directory of Open Access Journals (Sweden)

    Mojtaba Parsaee

    2015-12-01

    Full Text Available Space and place are among the fundamental concepts in architecture about which many discussions have been held, and whose complexity and importance have often been emphasized. This research has introduced an approach to better cognition of architectural concepts based on the theory and method of semiology in linguistics. Hence, at first the research investigates the concepts of space and place and explains their characteristics in architecture. Then, it reviews the semiology theory and explores its concepts and ideas. After obtaining the principles of theory and also the method of semiology, they are redefined in an architectural system based on an adaptive method. Finally, the research offers a conceptual model which is called the semiology approach by considering the architectural system as a system of signs. The approach can be used to decode the content of meanings and forms and analyses of the architectural mechanism in order to obtain its meanings and concepts. In this way and based on this approach, the residential architecture of the traditional city of Bushehr, Iran was analyzed as a case study and its concepts were extracted. The results of this research demonstrate the effectiveness of this approach in structure detection and identification of an architectural system. Besides, this approach has the capability to be used in processes of sustainable development and also be a basis for deconstruction of architectural texts. The research methods of this study are qualitative, based on comparative and descriptive analyses.

  17. Enhancement of Electroluminescence (EL) image measurements for failure quantification methods

    DEFF Research Database (Denmark)

    Parikh, Harsh; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    Enhanced quality images are necessary for EL image analysis and failure quantification. A method is proposed that determines image quality in terms of more accurate failure detection of solar panels through the electroluminescence (EL) imaging technique. The goal of the paper is to determine the most

  18. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  20. Elements of Architecture

    DEFF Research Database (Denmark)

    Elements of Architecture explores new ways of engaging architecture in archaeology. It conceives of architecture both as the physical evidence of past societies and as existing beyond the physical environment, considering how people in the past have not just dwelled in buildings but have existed...

  1. Digitally-Driven Architecture

    Directory of Open Access Journals (Sweden)

    Henriette Bier

    2014-07-01

    Full Text Available The shift from mechanical to digital forces architects to reposition themselves: Architects generate digital information, which can be used not only in designing and fabricating building components but also in embedding behaviours into buildings. This implies that, similar to the way that industrial design and fabrication with its concepts of standardisation and serial production influenced modernist architecture, digital design and fabrication influences contemporary architecture. While standardisation focused on processes of rationalisation of form, mass-customisation as a new paradigm that replaces mass-production, addresses non-standard, complex, and flexible designs. Furthermore, knowledge about the designed object can be encoded in digital data pertaining not just to the geometry of a design but also to its physical or other behaviours within an environment. Digitally-driven architecture implies, therefore, not only digitally-designed and fabricated architecture, it also implies architecture – built form – that can be controlled, actuated, and animated by digital means. In this context, this sixth Footprint issue examines the influence of digital means as pragmatic and conceptual instruments for actuating architecture. The focus is not so much on computer-based systems for the development of architectural designs, but on architecture incorporating digital control, sensing, actuating, or other mechanisms that enable buildings to interact with their users and surroundings in real time in the real world through physical or sensory change and variation.

  2. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yifu; Wei, Jin; Hodge, Bri-Mathias

    2017-05-24

    Distributed energy resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. Because most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware - which supervises, organizes, and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs - allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates congestion attacks by exploiting quality-of-experience measures to complement the conventional quality-of-service information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  3. Online Anomaly Energy Consumption Detection Using Lambda Architecture

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Iftikhar, Nadeem; Nielsen, Per Sieverts

    2016-01-01

    problem, which does data mining on a large amount of parallel data streams from smart meters. In this paper, we propose a supervised learning and statistical-based anomaly detection method, and implement a Lambda system using the in-memory distributed computing framework, Spark and its extension Spark...... of the lambda detection system....
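The statistical side of such a detector can be sketched without the Spark machinery: flag a consumption reading that deviates strongly from that meter's own history. The thresholds below are illustrative, and the paper's supervised model and Lambda batch/speed layers are not reproduced.

```python
import statistics

def anomalies(readings, k=3.0, min_history=24):
    """Flag reading i when it deviates more than k standard deviations
    from that meter's own history. In a Lambda setup the same test would
    run over historical data in the batch layer and over live streams in
    the speed layer; here it is a plain sequential scan."""
    flagged = []
    for i in range(min_history, len(readings)):
        hist = readings[:i]
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist)
        if sd and abs(readings[i] - mu) > k * sd:
            flagged.append(i)
    return flagged
```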

  4. Symmetry in quantum system theory: Rules for quantum architecture design

    Energy Technology Data Exchange (ETDEWEB)

    Schulte-Herbrueggen, Thomas; Sander, Uwe [Technical University of Munich, Garching (Germany). Dept. Chem.

    2010-07-01

    We investigate universality in the sense of controllability and observability, of multi-qubit systems in architectures of various symmetries of coupling type and topology. By determining the respective dynamic system Lie algebras, explicit reachability sets under symmetry constraints are provided. Thus for a given (possibly symmetric) experimental coupling architecture several decision problems can be solved in a unified way: (i) can a target Hamiltonian be simulated? (ii) can a target gate be synthesised? (iii) to which extent is the system observable by a given set of detection operators? and, as a special case of the latter, (iv) can an underlying system Hamiltonian be identified with a given set of detection operators? Finally, in turn, the absence of symmetry provides a convenient necessary condition for full controllability. Though often easier to assess than the well-established Lie-algebra rank condition, this is not sufficient unless the candidate dynamic simple Lie algebra can be pre-identified uniquely. Thus for architectures with various Ising and Heisenberg coupling types we give design rules sufficient to ensure full controllability. In view of follow-up studies, we relate the unification of necessary and sufficient conditions for universality to filtering simple Lie subalgebras of su(N) comprising classical and exceptional types.

  5. PICNIC Architecture.

    Science.gov (United States)

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services, the interfaces between them and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream evolution of distributed computing environments. The architecture is realised using the web services approach. A number of well-established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source.

  6. Remote monitoring of heart failure: benefits for therapeutic decision making.

    Science.gov (United States)

    Martirosyan, Mihran; Caliskan, Kadir; Theuns, Dominic A M J; Szili-Torok, Tamas

    2017-07-01

    Chronic heart failure is a cardiovascular disorder with high prevalence and incidence worldwide. The course of heart failure is characterized by periods of stability and instability. Decompensation of heart failure is associated with frequent and prolonged hospitalizations and it worsens the prognosis for the disease and increases cardiovascular mortality among affected patients. It is therefore important to monitor these patients carefully to reveal changes in their condition. Remote monitoring has been designed to facilitate an early detection of adverse events and to minimize regular follow-up visits for heart failure patients. Several new devices have been developed and introduced to the daily practice of cardiology departments worldwide. Areas covered: Currently, special tools and techniques are available to perform remote monitoring. Concurrently there are a number of modern cardiac implantable electronic devices that incorporate a remote monitoring function. All the techniques that have a remote monitoring function are discussed in this paper in detail. All the major studies on this subject have been selected for review of the recent data on remote monitoring of HF patients and demonstrate the role of remote monitoring in the therapeutic decision making for heart failure patients. Expert commentary: Remote monitoring represents a novel intensified follow-up strategy of heart failure management. Overall, theoretically, remote monitoring may play a crucial role in the early detection of heart failure progression and may improve the outcome of patients.

  7. The architecture of the management system of complex steganographic information

    Science.gov (United States)

    Evsutin, O. O.; Meshcheryakov, R. V.; Kozlova, A. S.; Solovyev, T. M.

    2017-01-01

The aim of the study is to create a wide area information system that allows one to control processes of generation, embedding, extraction, and detection of steganographic information. In this paper, the following problems are considered: the definition of the system scope and the development of its architecture. For the system's algorithmic support, classic methods of steganography are used to embed information. Methods of mathematical statistics and computational intelligence are used to identify the embedded information. The main result of the paper is the development of the architecture of the management system of complex steganographic information. The suggested architecture utilizes cloud technology to provide its services as web services over the Internet. It is meant to process streams of multimedia data coming from many sources of different types. The information system, built in accordance with the proposed architecture, will be used in the following areas: hidden transfer of documents protected by medical secrecy in telemedicine systems; copyright protection of online content in public networks; prevention of information leakage caused by insiders.
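The "classic methods of steganography" the abstract refers to can be illustrated with least-significant-bit (LSB) embedding, the simplest such technique; this is a generic sketch for illustration, not the algorithm actually used in the system described.

```python
def embed_lsb(pixels, bits):
    """Classic LSB embedding: write each message bit into a pixel's lowest bit."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n_bits):
    """Read the first n_bits message bits back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

message = [1, 0, 1, 1, 0, 0, 1, 0]
cover = [52, 55, 61, 66, 70, 61, 64, 73]
stego = embed_lsb(cover, message)
assert extract_lsb(stego, len(message)) == message
assert all(abs(a - b) <= 1 for a, b in zip(stego, cover))  # each pixel changes by at most 1
```

Statistical steganalysis, mentioned in the abstract as the detection side, works precisely because such embedding perturbs the LSB-plane statistics of natural images.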

  8. ALGORITHMS FOR OPTIMIZATION OF SYSTEM PERFORMANCE IN LAYERED DETECTION SYSTEMS UNDER DETECTOR CORRELATION

    International Nuclear Information System (INIS)

    Wood, Thomas W.; Heasler, Patrick G.; Daly, Don S.

    2010-01-01

Almost all of the 'architectures' for radiation detection systems in Department of Energy (DOE) and other USG programs rely on some version of layered detector deployment. Efficacy analyses of layered (or more generally extended) detection systems in many contexts often assume statistical independence among detection events and thus predict monotonically increasing system performance with the addition of detection layers. We show this to be a false conclusion for the ROC curves typical of most current technology gamma detectors, and more generally show that statistical independence is often an unwarranted assumption for systems in which there is ambiguity about the objects to be detected. In such systems, a model of correlation among detection events allows optimization of system algorithms for interpretation of detector signals. These algorithms are framed as optimal discriminant functions in joint signal space, and may be applied to gross counting or spectroscopic detector systems. We have shown how system algorithms derived from this model dramatically improve detection probabilities compared to the standard serial detection operating paradigm for these systems. These results would not surprise anyone who has confronted the problem of correlated errors (or failure rates) in analogous contexts, but it seems to be largely underappreciated among those analyzing the radiation detection problem - independence is widely assumed and experimental studies typically fail to measure correlation. This situation, if not rectified, will lead to several unfortunate results, including overconfidence in system efficacy, overinvestment in layers of similar technology, and underinvestment in diversity among detection assets.
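The correlation effect described in this abstract can be reproduced with a toy Monte Carlo. The sketch below is an illustrative two-layer model with correlated Gaussian detector signals (not the paper's discriminant-function formulation); it shows the realized OR-rule detection probability falling short of what the independence assumption predicts.

```python
import math
import random

def simulate(rho, mu=1.5, thresh=1.0, n=100_000, seed=1):
    """Two detection layers see correlated Gaussian signals (correlation rho)
    around a common signal mean mu; each layer alarms above thresh."""
    rng = random.Random(seed)
    hits1 = hits2 = hits_or = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        d1 = (mu + z1) > thresh
        d2 = (mu + z2) > thresh
        hits1 += d1
        hits2 += d2
        hits_or += (d1 or d2)
    return hits1 / n, hits2 / n, hits_or / n

p1, p2, p_actual = simulate(rho=0.8)
p_indep = 1.0 - (1.0 - p1) * (1.0 - p2)  # what independence would predict for the OR rule
assert p_actual < p_indep                 # correlation erodes the layered gain
```

Under independence the second layer appears to add a large margin; with strongly correlated signals the layers tend to miss the same objects, which is the overconfidence the abstract warns about.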

  9. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

environments and a knowledge gap therefore exists in present hospital designs. Consequently, the purpose of this thesis has been to investigate if any research-based knowledge exists supporting the hypothesis that the interior architectural qualities of eating environments influence patient food intake, health...... and well-being, as well as outline a set of basic design principles ‘predicting’ the future interior architectural qualities of patient eating environments. Methodologically the thesis is based on an explorative study employing an abductive approach and hermeneutic-interpretative strategy utilizing tactics...... and food intake, as well as a series of references exist linking the interior architectural qualities of healthcare environments with the health and wellbeing of patients. On the basis of these findings, the thesis presents the concept of Architectural Theatricality as well as a set of design principles...

  10. Architecture of Institution & Home. Architecture as Cultural Medium

    NARCIS (Netherlands)

    Robinson, J.W.

    2004-01-01

This dissertation addresses how architecture functions as a cultural medium. It does so by investigating how the architecture of institution and home each constructs and supports different cultural practices. By studying the design of ordinary settings in terms of how qualitative differences in

  11. Failure mechanisms of additively manufactured porous biomaterials: Effects of porosity and type of unit cell.

    Science.gov (United States)

    Kadkhodapour, J; Montazerian, H; Darabi, A Ch; Anaraki, A P; Ahmadi, S M; Zadpoor, A A; Schmauder, S

    2015-10-01

Since the advent of additive manufacturing techniques, regular porous biomaterials have emerged as promising candidates for tissue engineering scaffolds owing to their controllable pore architecture and feasibility in producing scaffolds from a variety of biomaterials. The architecture of scaffolds could be designed to achieve similar mechanical properties as in the host bone tissue, thereby avoiding issues such as stress shielding in bone replacement procedures. In this paper, the deformation and failure mechanisms of porous titanium (Ti6Al4V) biomaterials manufactured by selective laser melting from two different types of repeating unit cells, namely cubic and diamond lattice structures, with four different porosities are studied. The mechanical behavior of the above-mentioned porous biomaterials was studied using finite element models. The computational results were compared with the experimental findings from a previous study of ours. The Johnson-Cook plasticity and damage model was implemented in the finite element models to simulate the failure of the additively manufactured scaffolds under compression. The computationally predicted stress-strain curves were compared with the experimental ones. The computational models incorporating the Johnson-Cook damage model could predict the plateau stress and maximum stress at the first peak with less than 18% error. Moreover, the computationally predicted deformation modes were in good agreement with the results of scaling law analysis. A layer-by-layer failure mechanism was found for the stretch-dominated structures, i.e. structures made from the cubic unit cell, while the failure of the bending-dominated structures, i.e. structures made from the diamond unit cells, was accompanied by shear bands at 45°. Copyright © 2015 Elsevier Ltd. All rights reserved.
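The Johnson-Cook plasticity model cited above has the standard closed form sketched below: strain hardening, strain-rate sensitivity, and thermal softening multiplied together. The Ti6Al4V-like parameter values shown are illustrative placeholders, not the values calibrated in the paper.

```python
import math

def jc_flow_stress(eps_p, eps_rate, T, A, B, n, C, m,
                   eps_rate0=1.0, T_room=293.0, T_melt=1878.0):
    """Johnson-Cook flow stress:
    (A + B*eps_p^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m),
    with T* the homologous temperature."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps_p ** n)
            * (1.0 + C * math.log(eps_rate / eps_rate0))
            * (1.0 - T_star ** m))

# Illustrative Ti6Al4V-like parameters in MPa; literature values vary widely.
params = dict(A=1098.0, B=1092.0, n=0.93, C=0.014, m=1.1)
s0 = jc_flow_stress(0.0, 1.0, 293.0, **params)   # reduces to A at reference conditions
s1 = jc_flow_stress(0.05, 1.0, 293.0, **params)  # hardens with plastic strain
assert s0 == params["A"] and s1 > s0
```

In the paper's simulations this flow rule is paired with the companion Johnson-Cook damage criterion to trigger element failure under compression.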

  12. On Detailing in Contemporary Architecture

    DEFF Research Database (Denmark)

    Kristensen, Claus; Kirkegaard, Poul Henning

    2010-01-01

Details in architecture have a significant influence on how architecture is experienced. One can touch the materials and analyse the detailing - thus details give valuable information about the architectural scheme as a whole. The absence of perceptual stimulation like details and materiality...... / tactility can blur the meaning of the architecture and turn it into an empty statement. The present paper will outline detailing in contemporary architecture and discuss the issue with respect to architectural quality. Architectural cases considered as sublime pieces of architecture will be presented...

  13. Smart House Interconnected System Architecture

    Directory of Open Access Journals (Sweden)

    ALBU Răzvan-Daniel

    2017-05-01

Full Text Available In this research work we will present the architecture of an intelligent house system capable of detecting accidents caused by floods or gas and of protecting against unauthorized access or burglary. Our system is not just an alarm: it continuously monitors the house and reports its state over the Internet. Most of the current smart house systems available on the market alarm the user via email or SMS when an unwanted event happens. Thus, the user assumes that the house is not affected if an alarm message is not received. This is not always true, since the monitoring system components can themselves fail, or the entire system can become unable to send an alarm message even if it detects an unwanted event. This article also presents details about both the hardware and software implementation.
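The key design point above - that silence is not the same as safety - is usually addressed with a heartbeat: the house reports periodically, and a stale heartbeat is itself an alarm. The sketch below is a hypothetical illustration of that pattern, not the system's actual protocol.

```python
import time

class HeartbeatMonitor:
    """Server-side watchdog: if the house stops reporting within timeout_s,
    treat the silence itself as a fault (hypothetical sketch)."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_seen = time.monotonic()

    def report(self, status):
        """Called on every periodic status message from the house."""
        self.last_seen = time.monotonic()
        return status == "ok"

    def is_stale(self, now=None):
        """True when no report has arrived within the timeout window."""
        now = time.monotonic() if now is None else now
        return (now - self.last_seen) > self.timeout_s

m = HeartbeatMonitor(timeout_s=5.0)
m.report("ok")
assert not m.is_stale()
assert m.is_stale(now=time.monotonic() + 10)  # prolonged silence => raise an alert
```

With this inversion, a damaged sensor node or a dead uplink is detected even though no explicit alarm message was ever sent.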

  14. A Big Data Analysis Approach for Rail Failure Risk Assessment.

    Science.gov (United States)

    Jamshidi, Ali; Faghih-Roohi, Shahrzad; Hajizadeh, Siamak; Núñez, Alfredo; Babuska, Robert; Dollevoet, Rolf; Li, Zili; De Schutter, Bart

    2017-08-01

    Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in not only a considerable impact on train delays and maintenance costs, but also on safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats that are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use them to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
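The risk logic described above - failure probability driven by squat growth, weighted by consequence - might be sketched as follows. The linear growth rate, logistic parameters, and cost figure are hypothetical stand-ins for illustration, not the paper's calibrated values.

```python
import math

def squat_length(l0_mm, mgt, growth_per_mgt=0.4):
    """Hypothetical linear growth of visual squat length with traffic load (MGT)."""
    return l0_mm + growth_per_mgt * mgt

def p_failure(length_mm, l50=45.0, k=0.15):
    """Hypothetical logistic link from defect length to rail-break probability."""
    return 1.0 / (1.0 + math.exp(-k * (length_mm - l50)))

def risk(l0_mm, mgt, consequence_eur=250_000.0):
    """Risk = failure probability under a traffic scenario x consequence."""
    return p_failure(squat_length(l0_mm, mgt)) * consequence_eur

assert risk(20.0, 50.0) > risk(20.0, 10.0)  # heavier traffic scenario => higher risk
```

Evaluating such a risk index per detected squat under the three traffic-load scenarios is what lets maintenance be prioritized track-section by track-section.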

  15. Systemic Architecture

    DEFF Research Database (Denmark)

    Poletto, Marco; Pasquero, Claudia

    -up or tactical design, behavioural space and the boundary of the natural and the artificial realms within the city and architecture. A new kind of "real-time world-city" is illustrated in the form of an operational design manual for the assemblage of proto-architectures, the incubation of proto-gardens...... and the coding of proto-interfaces. These prototypes of machinic architecture materialize as synthetic hybrids embedded with biological life (proto-gardens), computational power, behavioural responsiveness (cyber-gardens), spatial articulation (coMachines and fibrous structures), remote sensing (FUNclouds...

  16. Digitally-Driven Architecture

    Directory of Open Access Journals (Sweden)

    Henriette Bier

    2010-06-01

Full Text Available The shift from mechanical to digital forces architects to reposition themselves: Architects generate digital information, which can be used not only in designing and fabricating building components but also in embedding behaviours into buildings. This implies that, similar to the way that industrial design and fabrication with its concepts of standardisation and serial production influenced modernist architecture, digital design and fabrication influences contemporary architecture. While standardisation focused on processes of rationalisation of form, mass-customisation, as a new paradigm that replaces mass-production, addresses non-standard, complex, and flexible designs. Furthermore, knowledge about the designed object can be encoded in digital data pertaining not just to the geometry of a design but also to its physical or other behaviours within an environment. Digitally-driven architecture implies, therefore, not only digitally-designed and fabricated architecture, it also implies architecture – built form – that can be controlled, actuated, and animated by digital means. In this context, this sixth Footprint issue examines the influence of digital means as pragmatic and conceptual instruments for actuating architecture. The focus is not so much on computer-based systems for the development of architectural designs, but on architecture incorporating digital control, sensing, actuating, or other mechanisms that enable buildings to interact with their users and surroundings in real time in the real world through physical or sensory change and variation.

  17. Architecture and Stages

    DEFF Research Database (Denmark)

    Kiib, Hans

    2009-01-01

    as "experiencescape" - a space between tourism, culture, learning and economy. Strategies related to these challenges involve new architectural concepts and art as ‘engines' for a change. New expressive architecture and old industrial buildings are often combined into hybrid narratives, linking the past...... with the future. But this is not enough. The agenda is to develop architectural spaces, where social interaction and learning are enhanced by art and fun. How can we develop new architectural designs in our inner cities and waterfronts where eventscapes, learning labs and temporal use are merged with everyday...

  18. Architecture in Its Own Shadow

    Directory of Open Access Journals (Sweden)

    Alexander Rappaport

    2016-11-01

Full Text Available Those who consider themselves architects disapprove of statements about the destruction of the subject of architectural culture and profession, and of the subject of architectural theory. At the same time, a deep crisis of both theory and practice is obvious. When theorists of architecture of the 20th and early 21st centuries turned to subjects external to architecture – sociology, psychology, semiotics, ecology, post-structuralist criticism, etc. – the results, instead of enriching and renovating architectural theory, were just the opposite. A brand new and independent paradigm of architecture is needed. It should contain three parts, specific in their logical-subject nature: ontology of architecture, methodology of architectural thought and axiology of architectural thought.

  19. Analysis of leak and break behavior in a failure assessment diagram for carbon steel pipes

    International Nuclear Information System (INIS)

    Kanno, Satoshi; Hasegawa, Kunio; Shimizu, Tasuku; Saitoh, Takashi; Gotoh, Nobuho

    1992-01-01

The leak and break behavior of a cracked coolant pipe subjected to an internal pressure and a bending moment was analyzed with a failure assessment diagram using the R6 approach. This paper examines the conditions for detectable coolant leakage without breakage. A leakage assessment curve, a locus of assessment points for detectable coolant leakage, was defined in the failure assessment diagram. The region between the leakage assessment and failure assessment curves satisfies the condition of detectable leakage without breakage. In this region, a crack can be safely detected by a coolant leak detector. (orig.)
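The break-avoidance side of such an assessment can be sketched with the widely used R6/BS 7910 Option 1 failure assessment curve; whether the paper used exactly this option is not stated, so treat the expression as representative.

```python
import math

def fad_option1(Lr):
    """R6/BS 7910 Option 1 failure assessment curve Kr = f(Lr),
    valid up to the material's Lr cut-off."""
    return (1.0 - 0.14 * Lr**2) * (0.3 + 0.7 * math.exp(-0.65 * Lr**6))

def is_acceptable(Lr, Kr):
    """An assessment point (Lr, Kr) inside the curve predicts no break; the
    leak-without-break region of the paper additionally requires the point to
    lie beyond the leakage assessment curve (not modelled here)."""
    return Kr < fad_option1(Lr)

assert is_acceptable(0.5, 0.5)        # point well inside the curve
assert not is_acceptable(1.0, 0.9)    # point above the curve => predicted failure
```

Here Lr is the load ratio (applied load over plastic limit load) and Kr the stress-intensity ratio (applied K over fracture toughness), matching the axes of a standard failure assessment diagram.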

  20. Architectural technology

    DEFF Research Database (Denmark)

    2005-01-01

    The booklet offers an overall introduction to the Institute of Architectural Technology and its projects and activities, and an invitation to the reader to contact the institute or the individual researcher for further information. The research, which takes place at the Institute of Architectural...... Technology at the Roayl Danish Academy of Fine Arts, School of Architecture, reflects a spread between strategic, goal-oriented pilot projects, commissioned by a ministry, a fund or a private company, and on the other hand projects which originate from strong personal interests and enthusiasm of individual...

  1. Humanizing Architecture

    DEFF Research Database (Denmark)

    Toft, Tanya Søndergaard

    2015-01-01

    The article proposes the urban digital gallery as an opportunity to explore the relationship between ‘human’ and ‘technology,’ through the programming of media architecture. It takes a curatorial perspective when proposing an ontological shift from considering media facades as visual spectacles...... agency and a sense of being by way of dematerializing architecture. This is achieved by way of programming the symbolic to provide new emotional realizations and situations of enlightenment in the public audience. This reflects a greater potential to humanize the digital in media architecture....

  2. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  3. Fuel failure in water reactors: Causes and mitigation. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    2003-03-01

The objective of this technical meeting (TM) was to review the present knowledge of the causes and mechanisms of fuel failure in water reactors during normal operational conditions. Emphasis was given to analysis of failure causes and their mitigation by means of design as well as plant and core operation, including strategies for operation with failed fuel. Some information on detection techniques (on-line monitoring and diagnostics, flux tilting, sipping techniques, etc.) was also presented. The TM also reviewed progress on the above-mentioned subjects since the last meeting, held in 1992 (Dimitrovgrad, Russian Federation). The topics covered in the papers were as follows: Experience feedback on fuel reliability (8 papers); Strategies to avoid or mitigate fuel failures (4 papers); Experimental studies on fuel failures and degradation mechanisms (4 papers); Modelling of fuel failure mechanisms (3 papers); Detection and monitoring during operation or outage (4 papers); Modelling and assessment of fuel failures (3 papers)

  4. Technical Basis for Evaluating Software-Related Common-Cause Failures

    Energy Technology Data Exchange (ETDEWEB)

    Muhlheim, Michael David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wood, Richard [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

The instrumentation and control (I&C) system architecture at a nuclear power plant (NPP) incorporates protections against common-cause failures (CCFs) through the use of diversity and defense-in-depth. Even for well-established analog-based I&C system designs, the potential for CCFs of multiple systems (or redundancies within a system) constitutes a credible threat to defeating the defense-in-depth provisions within the I&C system architectures. The integration of digital technologies into the I&C systems provides many advantages compared to the aging analog systems with respect to reliability, maintenance, operability, and cost effectiveness. However, maintaining the diversity and defense-in-depth for both the hardware and software within the digital system is challenging. In fact, the introduction of digital technologies may actually increase the potential for CCF vulnerabilities because of the introduction of undetected systematic faults. These systematic faults are defined as a “design fault located in a software component” and, at a high level, are predominantly the result of (1) errors in the requirement specification, (2) inadequate provisions to account for design limits (e.g., environmental stress), or (3) technical faults incorporated in the internal system (or architectural) design or implementation. Other technology-neutral CCF concerns include hardware design errors, equipment qualification deficiencies, installation or maintenance errors, and instrument loop scaling and setpoint mistakes.

  5. FY 1996 Report on the industrial science and technology research and development project. R and D of brain type computer architecture; 1996 nendo nogata computer architecture no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

The object of this project is to develop an information processing device based on a completely new architecture, in order to technologically realize human-oriented information processing mechanisms, e.g., memory, learning, association of ideas, perception, intuition and value judgement. Described herein are the FY 1996 results. For development of an LSI based on a neural network in the primary visual cortex, it is confirmed that the basic circuit structure comprising the position-signal generators, memories, signal selectors and adders is suitable for development of the LSI circuit for a neural network function (Hough transform). For development of a realtime parallel distributed processor (RPDP), the basic specifications are established for, e.g., local memory capacity of the RPDP, functions incorporated in the RPDP and number of RPDPs incorporated in the RPDP chip, operating frequency and clock supply method, and estimated power consumption and package, in order to realize the RPDP chip. For development and advanced evaluation of a large-scale neural network silicon chip, the chip developed by the advanced research project is incorporated with learning rules, cell models and failure-detection circuits, to design the evaluation substrate incorporating the above chip. The evaluation methods and implementation procedures are drawn up. (NEDO)

  6. Architecture Sustainability

    NARCIS (Netherlands)

    Avgeriou, Paris; Stal, Michael; Hilliard, Rich

    2013-01-01

    Software architecture is the foundation of software system development, encompassing a system's architects' and stakeholders' strategic decisions. A special issue of IEEE Software is intended to raise awareness of architecture sustainability issues and increase interest and work in the area. The

  7. Adaptive Failure Identification for Healthcare Risk Analysis and Its Application on E-Healthcare

    Directory of Open Access Journals (Sweden)

    Kuo-Chung Chu

    2014-01-01

Full Text Available To satisfy the requirement for diverse risk preferences, we propose a generic risk priority number (GRPN) function that assigns a risk weight to each parameter, so that the weights represent individual organization/department/process preferences for the parameters. This research applies a GRPN function-based model to differentiate the types of risk, and primary data are generated through simulation. We also conduct sensitivity analysis on correlation and regression to compare it with the traditional RPN (TRPN). The proposed model outperforms the TRPN model and provides a practical, effective, and adaptive method for risk evaluation. In particular, the defined GRPN function offers a new method to prioritize failure modes in failure mode and effect analysis (FMEA). The different risk preferences considered in the healthcare example show that the modified FMEA model can take the various risk factors into account and prioritize failure modes more accurately. In addition, the model can also be applied to a generic e-healthcare service environment with a hierarchical architecture.
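The contrast between the traditional RPN and a weighted generalization can be sketched as below. The weighted-log form is one common reading of a generic RPN function; the paper's exact functional form may differ, so treat this as illustrative.

```python
import math

def trpn(S, O, D):
    """Traditional RPN: plain product of severity, occurrence, detection (1-10 each)."""
    return S * O * D

def grpn(S, O, D, ws, wo, wd):
    """Weighted log-RPN (illustrative generalization): weights encode an
    organization's risk preference; equal weights of 1/3 give log(TRPN)/3 and
    therefore preserve the TRPN ranking."""
    assert abs(ws + wo + wd - 1.0) < 1e-9, "weights must sum to 1"
    return ws * math.log(S) + wo * math.log(O) + wd * math.log(D)

# A detection-critical process (e.g., lab sample handling) can weight D heavily:
neutral = grpn(8, 3, 5, 1/3, 1/3, 1/3)
det_heavy = grpn(8, 3, 5, 0.2, 0.2, 0.6)
assert abs(neutral - math.log(trpn(8, 3, 5)) / 3) < 1e-9
```

With unequal weights, two failure modes that tie under TRPN (same S*O*D product) can be ranked differently, which is exactly the kind of preference-sensitive prioritization the abstract argues for.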

  8. Architecture of a software quench management system

    International Nuclear Information System (INIS)

    Jerzy M. Nogiec et al.

    2001-01-01

Testing superconducting accelerator magnets is inherently coupled with the proper handling of quenches, i.e., protecting the magnet and characterizing the quench process. Software implementations must therefore include elements of both data acquisition and real-time controls. The architecture of the quench management software developed at Fermilab's Magnet Test Facility is described. This system consists of quench detection, quench protection, and quench characterization components that execute concurrently in a distributed system. Collaboration between the elements of quench detection, quench characterization and current control is discussed, together with a scheme for distributed saving of various quench-related data. Solutions for synchronization and reliability in such a distributed quench system are also presented
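A minimal illustration of the detection-to-protection handoff: bridge-style quench detection compares voltages across two magnet half-coils so that the inductive components cancel and a resistive imbalance flags a quench. The threshold and validation time below are hypothetical, not the Fermilab system's actual parameters.

```python
def quench_detected(v_half1, v_half2, threshold_v=0.1):
    """Bridge detection: inductive voltages cancel between matched half-coils,
    so a persistent voltage imbalance indicates a growing resistive (quenched) zone."""
    return abs(v_half1 - v_half2) > threshold_v

def protect(samples, validation_count=3):
    """Fire the protection action (e.g., energy-extraction dump) only after the
    imbalance persists for several consecutive samples, to reject noise spikes."""
    consecutive = 0
    for v1, v2 in samples:
        consecutive = consecutive + 1 if quench_detected(v1, v2) else 0
        if consecutive >= validation_count:
            return True
    return False

assert not protect([(1.0, 1.0), (1.0, 0.8), (1.0, 1.0)])  # isolated spike ignored
assert protect([(1.0, 0.8), (1.0, 0.8), (1.0, 0.8)])      # sustained imbalance => trip
```

In the distributed architecture the abstract describes, this detection loop runs in real time while the characterization component concurrently logs the same voltage samples for later analysis.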

  9. The IASI detection chain

    Science.gov (United States)

    Nicol, Patrick; Fleury, Joel; Le Naour, Claire; Bernard, Frédéric

    2017-11-01

IASI (Infrared Atmospheric Sounding Interferometer) is an infrared atmospheric sounder. It will provide the meteorological and scientific communities with atmospheric spectra. The instrument is composed of a Fourier transform spectrometer and an associated infrared imager. The presentation will describe the spectrometer detection chain architecture, composed of three different detectors cooled in a passive cryo-cooler (the so-called CBS: Cold Box Subsystem) and the associated analog electronics up to digital conversion. It will mainly focus on design choices with regard to environment constraints, implemented technologies, and associated performances. CNES is leading the IASI program in collaboration with EUMETSAT. ALCATEL SPACE, the instrument Prime, is responsible notably for the detection chain architecture. SAGEM SA provides the detector package (the so-called CAU: Cold Acquisition Unit).

  10. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  11. The Peronist festival: pathways and appropriations between photography, ephemeral architecture and political power

    Directory of Open Access Journals (Sweden)

    Franco Marchionni Sánchez

    2016-08-01

Full Text Available The aim of this paper is to analyze some of the scenarios, photos, and posters used by the Peronist administration and to explain their influence on wine festivals and their imaginary construction after World War II. This proposal examines the relation between photography and ephemeral architecture mediated by political power, as a part of the strategies developed by the Peronist propaganda apparatus to feed the imaginary surrounding the ‘New Argentina.’ At this particular historical moment, the graphic and photographic records taken into account are a gateway to analyzing ephemeral phenomena that cannot be recovered otherwise. The methodological strategy used is qualitative and exploratory, and its design is flexible in nature. Although these testimonies, reflected in the sources described, do not give us back the possibility of direct contact with these experiences, they do allow us to access the set of desires, tensions, frustrations, expectations, debates, achievements and failures through which the scenic architecture projects were formulated and developed.   Keywords: Photographic Archives; Ephemeral Architecture; Harvest Festival; Power Relationships; Peronism.   Original title: La fiesta peronista: recorridos y apropiaciones entre fotografía, arquitectura efímera y poder político.

  12. Fission product concentration evolution in sodium pool following a fuel subassembly failure in an LMFBR

    International Nuclear Information System (INIS)

    Natesan, K.; Velusamy, K.; Selvaraj, P.; Kasinathan, N.; Chellapandi, P.; Chetal, S.; Bhoje, S.

    2003-01-01

During a fuel element failure in a liquid metal cooled fast breeder reactor, the fission products originating from the failed pins mix into the sodium pool. Delayed Neutron Detectors (DND) are provided in the sodium pool to detect such failures by way of detection of delayed neutrons emitted by the fission products. The transient evolution of fission product concentration is governed by the sodium flow distribution in the pool. Transient hydraulic analysis has been carried out using the CFD code PHOENICS to estimate fission product concentration evolution in the hot pool. The k-ε turbulence model and zero laminar diffusivity for the fission product concentration have been considered in the analysis. Times at which the failures of various fuel subassemblies (SA) are detected by the DND are obtained. It has been found that in order to effectively detect the failure of every fuel SA, a minimum of 8 DND in the hot pool are essential

  13. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and describes work done to advance the science of Grid Architecture as well.

  14. High accuracy amplitude and phase measurements based on a double heterodyne architecture

    International Nuclear Information System (INIS)

    Zhao Danyang; Wang Guangwei; Pan Weimin

    2015-01-01

In the digital low-level RF (LLRF) system of a circular particle accelerator, the RF field signal is usually down-converted to a fixed intermediate frequency (IF). The ratio of the IF to the sampling frequency determines the processing required, and differs between LLRF systems. It is generally desirable to design a universally compatible architecture that accommodates different IFs with no change to the sampling frequency or algorithm. A new RF detection method based on a double heterodyne architecture for a wide IF range has been developed, which achieves the high accuracy required of modern LLRF. In this paper, the relation between IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift over 16 h of IF detection are suppressed by the amplitude and phase calibrations. (authors)
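After down-conversion to a fixed IF, amplitude and phase are typically recovered digitally by quadrature demodulation. The sketch below is a generic illustration of that step (not the paper's double-heterodyne chain): averaging over an integer number of IF periods acts as the low-pass filter.

```python
import math

def detect_amp_phase(samples, f_if, fs):
    """Recover the amplitude and phase of an IF tone by multiplying with
    cos/sin references and averaging over an integer number of IF periods."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * f_if * k / fs) for k, s in enumerate(samples))
    q = sum(-s * math.sin(2 * math.pi * f_if * k / fs) for k, s in enumerate(samples))
    i, q = 2 * i / n, 2 * q / n
    return math.hypot(i, q), math.atan2(q, i)

# Synthetic IF tone: fs = 4 x f_if, 100 full IF periods in the record.
fs, f_if, amp_true, phi_true = 100e6, 25e6, 1.0, 0.3
x = [amp_true * math.cos(2 * math.pi * f_if * k / fs + phi_true) for k in range(400)]
amp, phase = detect_amp_phase(x, f_if, fs)
assert abs(amp - amp_true) < 1e-6 and abs(phase - phi_true) < 1e-6
```

A slow drift of the analog chain shows up as a bias in the recovered amplitude and phase, which is why the paper's amplitude and phase calibrations are needed for long measurements.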

  15. The Spectrum of Renal Allograft Failure.

    Directory of Open Access Journals (Sweden)

    Sourabh Chand

    Full Text Available Causes of "true" late kidney allograft failure remain unclear as study selection bias and limited follow-up risk incomplete representation of the spectrum.We evaluated all unselected graft failures from 2008-2014 (n = 171; 0-36 years post-transplantation by contemporary classification of indication biopsies "proximate" to failure, DSA assessment, clinical and biochemical data.The spectrum of graft failure changed markedly depending on the timing of allograft failure. Failures within the first year were most commonly attributed to technical failure, acute rejection (with T-cell mediated rejection [TCMR] dominating antibody-mediated rejection [ABMR]. Failures beyond a year were increasingly dominated by ABMR and 'interstitial fibrosis with tubular atrophy' without rejection, infection or recurrent disease ("IFTA". Cases of IFTA associated with inflammation in non-scarred areas (compared with no inflammation or inflammation solely within scarred regions were more commonly associated with episodes of prior rejection, late rejection and nonadherence, pointing to an alloimmune aetiology. Nonadherence and late rejection were common in ABMR and TCMR, particularly Acute Active ABMR. Acute Active ABMR and nonadherence were associated with younger age, faster functional decline, and less hyalinosis on biopsy. Chronic and Chronic Active ABMR were more commonly associated with Class II DSA. C1q-binding DSA, detected in 33% of ABMR episodes, were associated with shorter time to graft failure. Most non-biopsied patients were DSA-negative (16/21; 76.1%. Finally, twelve losses to recurrent disease were seen (16%.This data from an unselected population identifies IFTA alongside ABMR as a very important cause of true late graft failure, with nonadherence-associated TCMR as a phenomenon in some patients. It highlights clinical and immunological characteristics of ABMR subgroups, and should inform clinical practice and individualised patient care.

  16. Towards a semantic web layered architecture

    CSIR Research Space (South Africa)

    Gerber, AJ

    2007-02-01

    Full Text Available as an architectural pattern or architectural style [6, 43]. In this section we give a brief description of the concepts software architecture and layered architecture. In addition we provide a summary of a list of criteria for layered architectures identified...- els caused some architectural recurrences to evolve. These are described as architectural patterns [6] or architectural styles [43]. Examples of the best known architectural patterns include, but are not limited to, the client/server architectural...

  17. Usefulness of multimodality imaging for detection of myocardial infarction in patients with advanced kidney failure: case report

    Energy Technology Data Exchange (ETDEWEB)

    Veras, Mariana Ferreira; Azevedo, Jader Cunha de; Gamarski, Moisés; Mesquita, Evandro Tinoco; Mesquita, Cláudio Tinoco, E-mail: fvmari@gmail.com [Hospital Procardíaco, Rio de Janeiro, RJ (Brazil); Alves, José Galvão [Centro Universitário de Volta Redonda, RJ (Brazil)

    2016-01-15

    The third universal definition of acute myocardial infarction (AMI) is based on the elevation of troponin in association with ischemic symptoms, electrocardiographic changes and imaging findings. In patients with chest pain, the diagnosis of AMI is made by measuring serum markers of myocardial necrosis, particularly troponins, by means of changes in the 12-lead electrocardiogram (ECG) or by identifying changes in the contractile dynamics of the left ventricle on the transthoracic echocardiogram. In some cases, confounding factors may hamper the diagnosis, such as: (a) presence of previous changes in the baseline ECG, especially LBBB; (b) elevations of myocardial necrosis markers (MNM) resulting from situations other than AMI; and (c) old changes in contractility detected by transthoracic echocardiogram. Serum cardiac troponin (Tn) is the most specific and most used MNM for the diagnosis of AMI. Nevertheless, in some situations troponin elevation may not be due to an AMI, as in cases of acute pulmonary embolism, acute pericarditis, severe heart failure, myocarditis, sepsis and kidney failure. Patients with kidney failure have a high probability of concurrent cardiovascular disease. Furthermore, cross-reacting proteins from skeletal muscle, analytical imprecision and interactions with the dialysis membrane may cause elevation of troponin in 7% to 17% of patients with kidney failure. When in doubt about the diagnosis of AMI, {sup 99m}Technetium pyrophosphate myocardial scintigraphy ({sup 99m}Tc-PYP) stands out as a noninvasive method capable of identifying areas of myocardial necrosis, thus helping in the diagnosis of AMI. {sup 99m}Tc-labeled phosphonate agents undergo chemical adsorption with calcium. A large influx of calcium occurs during the evolutionary process of AMI: calcium flows into the intracellular space, and the myocardial concentration of {sup 99m}Tc-PYP follows this increase, with a maximum peak uptake about 48 to 72 hours after the acute event.

  18. HIV resistance testing and detected drug resistance in Europe

    DEFF Research Database (Denmark)

    Schultze, Anna; Phillips, Andrew N; Paredes, Roger

    2015-01-01

    OBJECTIVES: To describe regional differences and trends in resistance testing among individuals experiencing virological failure, and the prevalence of detected resistance among those individuals who had a genotypic resistance test done following virological failure. DESIGN: Multinational cohort study. METHODS: Individuals in EuroSIDA with virological failure (>1 RNA measurement >500 on ART after >6 months on ART) after 1997 were included. Adjusted odds ratios (aORs) for resistance testing following virological failure and aORs for the detection of resistance among those who had a test were... to Southern Europe. CONCLUSIONS: Despite a concurrent decline in virological failure and testing, drug resistance was commonly detected. This suggests a selective approach to resistance testing. The regional differences identified indicate that policy aiming to minimize the emergence of resistance...

  19. Information Integration Architecture Development

    OpenAIRE

    Faulkner, Stéphane; Kolp, Manuel; Nguyen, Duy Thai; Coyette, Adrien; Do, Thanh Tung; 16th International Conference on Software Engineering and Knowledge Engineering

    2004-01-01

    Multi-Agent Systems (MAS) architectures are gaining popularity for building open, distributed, and evolving software required by systems such as information integration applications. Unfortunately, despite considerable work in software architecture during the last decade, few research efforts have aimed at truly defining patterns and languages for designing such multiagent architectures. We propose a modern approach based on organizational structures and architectural description lan...

  20. Architectural Contestation

    NARCIS (Netherlands)

    Merle, J.

    2012-01-01

    This dissertation addresses the reductive reading of Georges Bataille's work done within the field of architectural criticism and theory which tends to set aside the fundamental ‘broken’ totality of Bataille's oeuvre and also to narrowly interpret it as a mere critique of architectural form,

  1. A Versatile Simulation Environment of FTC Architectures for Large Transport Aircraft

    OpenAIRE

    Ossmann, Daniel; Varga, Andreas; Hecker, Simon

    2010-01-01

    We present a simulation environment with 3-D stereo visualization facilities destined for an easy setup and versatile assessment of fault detection and diagnosis based fault tolerant control systems. This environment has been primarily developed as a technology demonstrator of advanced reconfigurable flight control systems and is based on a realistic six degree of freedom flexible aircraft model. The aircraft control system architecture includes a flexible fault detection and diagnosis syste...

  2. Information architecture. Volume 3: Guidance

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline, or de facto, Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  3. Software architecture as a set of architectural design decisions

    NARCIS (Netherlands)

    Jansen, Anton; Bosch, Jan; Nord, R; Medvidovic, N; Krikhaar, R; Khrhaar, R; Stafford, J; Bosch, J

    2006-01-01

    Software architectures have high costs for change, are complex, and erode during evolution. We believe these problems are partially due to knowledge vaporization. Currently, almost all the knowledge and information about the design decisions the architecture is based on are implicitly embedded in

  4. Cyber-physical architecture assisted by programmable networking

    DEFF Research Database (Denmark)

    Rubio-Hernan, Jose; Sahay, Rishikesh; De Cicco, Luca

    2018-01-01

    Cyber‐physical technologies are prone to attacks in addition to faults and failures. The issue of protecting cyber‐physical systems should be tackled by jointly addressing security at both cyber and physical domains in order to promptly detect and mitigate cyber‐physical threats. Toward this end...

  5. Design of the measurements validation procedure and the expert system architecture for a cogeneration internal combustion engine

    International Nuclear Information System (INIS)

    Barelli, L.; Bidini, G.

    2005-01-01

    A research activity has been initiated to study the development of a diagnostic methodology, based on artificial intelligence (AI) techniques such as artificial neural networks (ANN) and fuzzy logic, for the optimization of energy efficiency and the maximization of operational time. The diagnostic procedure, developed specifically for the cogeneration plant located at the Engineering Department of the University of Perugia, must have a modular architecture so as to be flexible and applicable to different systems. The first part of the study deals with identifying the principal modules and the corresponding variables necessary to evaluate each module's 'health state'. The consequent upgrade of the monitoring system is also described in this paper. Moreover, the paper describes the structure proposed for the diagnostic procedure, consisting of a measurement validation procedure and a fuzzy logic-based inference system. The first reveals the presence of abnormal conditions and localizes their source, distinguishing between system failures and instrumentation malfunctions. The second provides an evaluation of module health state and a classification of the failures which may have occurred. The procedure was implemented in C++.
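    The validation step described above, distinguishing a single faulty sensor from a genuinely abnormal module, can be sketched with redundant readings of one quantity. The sensor names, operating band, and spread limit below are invented for illustration, not taken from the paper:

    ```python
    def validate(readings, band=(70.0, 90.0), spread=2.0):
        """Classify redundant readings: one outlier -> instrument fault;
        all sensors agreeing but outside the expected band -> module fault."""
        vals = list(readings.values())
        median = sorted(vals)[len(vals) // 2]  # rough median, fine for a sketch
        outliers = [k for k, v in readings.items() if abs(v - median) > spread]
        if outliers:
            return "instrument fault", outliers
        if not band[0] <= median <= band[1]:
            return "module fault", []
        return "ok", []

    print(validate({"t1": 80.1, "t2": 79.8, "t3": 95.0}))  # ('instrument fault', ['t3'])
    print(validate({"t1": 95.0, "t2": 94.8, "t3": 95.2}))  # ('module fault', [])
    ```

    A real implementation would feed the second verdict to the fuzzy inference stage; the sketch only shows the localization logic.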

  6. The architectural design of networks of protein domain architectures.

    Science.gov (United States)

    Hsu, Chia-Hsin; Chen, Chien-Kuo; Hwang, Ming-Jing

    2013-08-23

    Protein domain architectures (PDAs), in which single domains are linked to form multiple-domain proteins, are a major molecular form used by evolution for the diversification of protein functions. However, the design principles of PDAs remain largely uninvestigated. In this study, we constructed networks to connect domain architectures that had grown out from the same single domain for every single domain in the Pfam-A database and found that there are three main distinctive types of these networks, which suggests that evolution can exploit PDAs in three different ways. Further analysis showed that these three different types of PDA networks are each adopted by different types of protein domains, although many networks exhibit the characteristics of more than one of the three types. Our results shed light on nature's blueprint for protein architecture and provide a framework for understanding architectural design from a network perspective.
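    The network construction described above can be illustrated with a toy model. The domain names and the linking rule (two architectures are connected when one extends the other by a single terminal domain) are simplifying assumptions for this sketch, not the paper's exact procedure:

    ```python
    from itertools import combinations

    def pda_network(pdas, seed):
        """Connect PDAs containing `seed` that differ by one terminal domain."""
        nodes = [p for p in pdas if seed in p]
        edges = set()
        for a, b in combinations(nodes, 2):
            short, long_ = sorted((a, b), key=len)
            if len(long_) == len(short) + 1 and (long_[1:] == short or long_[:-1] == short):
                edges.add((short, long_))
        return nodes, sorted(edges)

    # Hypothetical architectures grown out of a kinase domain:
    pdas = [("Kinase",), ("SH2", "Kinase"), ("SH3", "SH2", "Kinase"), ("Kinase", "PDZ")]
    nodes, edges = pda_network(pdas, "Kinase")
    print(len(nodes), len(edges))  # 4 nodes, 3 edges
    ```

    Running the three network types the paper distinguishes through such a builder would amount to comparing the resulting graph topologies (chains, stars, and denser mixtures).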

  7. HiMoP: A three-component architecture to create more human-acceptable social-assistive robots : Motivational architecture for assistive robots.

    Science.gov (United States)

    Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco

    2018-05-01

    Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, in particular through a motivation-oriented cognitive architecture that generates more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.
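    The selection mechanism, motivational variables plus external stimuli choosing among a pool of behaviors, can be sketched in a few lines. The need names, urgency arithmetic, and behavior names here are invented for illustration and are not HiMoP's actual implementation:

    ```python
    def select_behaviour(motivation, stimuli, pool):
        """Run the behaviour serving the most urgent need that has one available."""
        urgency = {need: level + stimuli.get(need, 0.0)
                   for need, level in motivation.items()}
        for need, _ in sorted(urgency.items(), key=lambda kv: -kv[1]):
            if need in pool:
                return pool[need]
        return "idle"

    pool = {"interaction": "greet_user", "energy": "go_charge"}
    # Same internal state, different stimuli -> different behaviour:
    print(select_behaviour({"energy": 0.2, "interaction": 0.5}, {}, pool))
    print(select_behaviour({"energy": 0.2, "interaction": 0.5}, {"energy": 0.6}, pool))
    ```

    This reproduces the property emphasized in the abstract: under similar conditions, shifting motivational variables can activate different finite-state machines.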

  8. National architectures for the detection of nuclear and radioactive materials at port facilities

    International Nuclear Information System (INIS)

    Ortiz, A.

    2009-01-01

    The basic objective of the national architectures is to protect people and the environment against a possible misuse of nuclear and radioactive materials. This issue has become even more important in recent years because maritime transport currently amounts to 80% of world trade, growing from 83 million shipments in 1990 to 334 million in 2005. (Author)

  9. SEARCH FOR A RELIABLE STORAGE ARCHITECTURE FOR RHIC.

    Energy Technology Data Exchange (ETDEWEB)

    BINELLO,S.; KATZ, R.A.; MORRIS, J.T.

    2007-10-15

    Software used to operate the Relativistic Heavy Ion Collider (RHIC) resides on one operational RAID storage system. This storage system is also used to store data that reflects the status and recent history of accelerator operations. Failure of this system interrupts the operation of the accelerator as backup systems are brought online. In order to increase the reliability of this critical control system component, the storage system architecture has been upgraded to use Storage Area Network (SAN) technology and to introduce redundant components and redundant storage paths. This paper describes the evolution of the storage system, the contributions to reliability that each additional feature has provided, further improvements that are being considered, and real-life experience with the current system.

  10. SEARCH FOR A RELIABLE STORAGE ARCHITECTURE FOR RHIC

    International Nuclear Information System (INIS)

    BINELLO, S.; KATZ, R.A.; MORRIS, J.T.

    2007-01-01

    Software used to operate the Relativistic Heavy Ion Collider (RHIC) resides on one operational RAID storage system. This storage system is also used to store data that reflects the status and recent history of accelerator operations. Failure of this system interrupts the operation of the accelerator as backup systems are brought online. In order to increase the reliability of this critical control system component, the storage system architecture has been upgraded to use Storage Area Network (SAN) technology and to introduce redundant components and redundant storage paths. This paper describes the evolution of the storage system, the contributions to reliability that each additional feature has provided, further improvements that are being considered, and real-life experience with the current system

  11. Definition and Classification of Heart Failure

    Directory of Open Access Journals (Sweden)

    Mitja Lainscak

    2017-01-01

    Full Text Available A review of the definition and classification of heart failure, updated since the recent 2016 European Society of Cardiology guidelines for the diagnosis and treatment of acute and chronic heart failure. Heart failure is defined by the European Society of Cardiology (ESC) as a clinical syndrome characterised by symptoms such as shortness of breath, persistent coughing or wheezing, ankle swelling and fatigue, that may be accompanied by the following signs: elevated jugular venous pressure, pulmonary crackles, increased heart rate and peripheral oedema. However, these signs may not be present in the early stages and in patients treated with diuretics. When apparent, they are due to a structural and/or functional cardiac abnormality, leading to systolic and/or diastolic ventricular dysfunction, resulting in a reduced cardiac output and/or elevated intracardiac pressures at rest or during stress. According to the most recent ESC guidelines, the initial evaluation of patients with suspected heart failure should include a clinical history and physical examination, laboratory assessment, chest radiography, and electrocardiography. Echocardiography can confirm the diagnosis. Beyond detecting myocardial abnormality, other impairments such as abnormalities of the valves, pericardium, endocardium, heart rhythm, and conduction may be found. The identification of the underlying aetiology is pivotal for the diagnosis of heart failure and its treatment. The authors review the definitions and classifications of heart failure.

  12. Decentralized Sliding Mode Observer Based Dual Closed-Loop Fault Tolerant Control for Reconfigurable Manipulator against Actuator Failure

    Science.gov (United States)

    Zhao, Bo; Li, Yuanchun

    2015-01-01

    This paper considers a decentralized fault tolerant control (DFTC) scheme for reconfigurable manipulators. With the appearance of norm-bounded failure, a dual closed-loop trajectory tracking control algorithm is proposed on the basis of the Lyapunov stability theory. Characterized by the modularization property, the actuator failure is estimated by the proposed decentralized sliding mode observer (DSMO). Moreover, the actuator failure can be treated in view of the local joint information, so its control performance degradation is independent of other normal joints. In addition, the presented DFTC scheme is significantly simplified in terms of the structure of the controller due to its dual closed-loop architecture, and its feasibility is highly reflected in the control of reconfigurable manipulators. Finally, the effectiveness of the proposed DFTC scheme is demonstrated using simulations. PMID:26181826

  13. Decentralized Sliding Mode Observer Based Dual Closed-Loop Fault Tolerant Control for Reconfigurable Manipulator against Actuator Failure.

    Directory of Open Access Journals (Sweden)

    Bo Zhao

    Full Text Available This paper considers a decentralized fault tolerant control (DFTC) scheme for reconfigurable manipulators. With the appearance of norm-bounded failure, a dual closed-loop trajectory tracking control algorithm is proposed on the basis of the Lyapunov stability theory. Characterized by the modularization property, the actuator failure is estimated by the proposed decentralized sliding mode observer (DSMO). Moreover, the actuator failure can be treated in view of the local joint information, so its control performance degradation is independent of other normal joints. In addition, the presented DFTC scheme is significantly simplified in terms of the structure of the controller due to its dual closed-loop architecture, and its feasibility is highly reflected in the control of reconfigurable manipulators. Finally, the effectiveness of the proposed DFTC scheme is demonstrated using simulations.

  14. Detection of instrument or component failures in a nuclear plant by Luenberger observers

    International Nuclear Information System (INIS)

    Wilburn, N.P.; Colley, R.W.; Alexandro, F.J.; Clark, R.N.

    1985-01-01

    A diagnostic system, which distinguishes between instrument failures (flowmeters, etc.) and component failures (valves, filters, etc.) that show the same symptoms, has been developed for nuclear plants using Luenberger observers. Luenberger observers are online, computer-based modules constructed following the technology of Clark [3]. A seventh-order model of an FFTF subsystem was constructed using the Advanced Continuous Simulation Language (ACSL) and was used to show through simulation that Luenberger observers can be applied to nuclear systems.
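    The observer-based detection idea can be sketched with a scalar discrete-time plant. The plant coefficients, observer gain, and threshold below are invented for illustration; the paper's FFTF model is seventh-order, but the residual logic is the same:

    ```python
    def run_observer(ys, us, a=0.9, b=0.5, c=1.0, gain=0.6, threshold=0.2):
        """Flag steps where the residual between measurement and prediction is large."""
        xhat, alarms = 0.0, []
        for y, u in zip(ys, us):
            r = y - c * xhat                    # residual: measurement vs. prediction
            alarms.append(abs(r) > threshold)
            xhat = a * xhat + b * u + gain * r  # Luenberger update with correction term
        return alarms

    # Simulate the same plant; the sensor sticks at 0.0 from step 6 onward.
    x, us, ys = 0.0, [1.0] * 10, []
    for k in range(10):
        ys.append(0.0 if k >= 6 else x)  # y_k is measured before the state update
        x = 0.9 * x + 0.5 * us[k]
    alarms = run_observer(ys, us)
    print(alarms)  # first six steps healthy, last four flagged
    ```

    Running one observer per hypothesized failure, each fed a different subset of measurements, is what lets the scheme distinguish an instrument fault from a component fault showing the same symptoms.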

  15. Visual search, visual streams, and visual architectures.

    Science.gov (United States)

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  16. Modular formal analysis of the central guardian in the Time-Triggered Architecture

    International Nuclear Information System (INIS)

    Pfeifer, Holger; Henke, Friedrich W. von

    2007-01-01

    The Time-Triggered Protocol TTP/C constitutes the core of the communication level of the Time-Triggered Architecture for dependable real-time systems. TTP/C ensures consistent data distribution, even in the presence of faults occurring to nodes or the communication channel. However, the protocol mechanisms of TTP/C rely on a rather optimistic fault hypothesis. Therefore, an independent component, the central guardian, employs static knowledge about the system to transform arbitrary node failures into failure modes that are covered by the fault hypothesis. This paper presents a modular formal analysis of the communication properties of TTP/C based on the guardian approach. Through a hierarchy of formal models, we give a precise description of the arguments that support the desired correctness properties of TTP/C. First, requirements for correct communication are expressed on an abstract level. By stepwise refinement we show both that these abstract requirements are met under the optimistic fault hypothesis, and how the guardian model allows a broader class of node failures to be tolerated. The models have been developed and mechanically checked using the specification and verification system PVS
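    The guardian's core transformation, using static schedule knowledge to turn arbitrary node failures into fail-silent ones, can be sketched as a slot filter. The schedule, node names, and frame representation below are toy assumptions, not TTP/C's actual frame format:

    ```python
    SCHEDULE = {0: "A", 1: "B", 2: "C", 3: "D"}  # TDMA slot -> node allowed to send

    def guard(transmissions):
        """Forward only frames sent by the node that owns the current slot."""
        return [(slot, node) for slot, node in transmissions
                if SCHEDULE.get(slot % len(SCHEDULE)) == node]

    # Node B babbles in every slot; only its own slot-1 frame gets through,
    # so the rest of the system sees B as merely silent, a covered failure mode.
    frames = [(s, "B") for s in range(4)] + [(0, "A")]
    print(guard(frames))
    ```

    The formal models in the paper argue, by refinement, that the abstract communication requirements still hold once such a filter widens the class of tolerated node failures.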

  17. Definitions of biochemical failure in prostate cancer following radiation therapy

    International Nuclear Information System (INIS)

    Taylor, Jeremy M.G.; Griffith, Kent A.; Sandler, Howard M.

    2001-01-01

    Purpose: The American Society for Therapeutic Radiology and Oncology (ASTRO) published a consensus panel definition of biochemical failure following radiation therapy for prostate cancer. In this paper, we develop a series of alternative definitions of biochemical failure. Using data from 688 patients, we evaluated the sensitivity and specificity of the various definitions, with respect to a defined 'clinically meaningful' outcome. Methods and Materials: The ASTRO definition of biochemical failure requires 3 consecutive rises in prostate-specific antigen (PSA). We considered several modifications to the standard definition: to require PSA rises of a certain magnitude, to consider 2 instead of 3 rises, to require the final PSA value to be greater than a fixed cutoff level, and to define biochemical failure based on the slope of PSA over 1, 1.5, or 2 years. A clinically meaningful failure is defined as local recurrence, distant metastases, initiation of unplanned hormonal therapy, unplanned radical prostatectomy, or a PSA>25 later than 6 months after radiation. Results: Requiring the final PSA in a series of consecutive rises to be larger than 1.5 ng/mL increased the specificity of biochemical failure. For a fixed specificity, defining biochemical failure based on 2 consecutive rises, or the slope over the last year, could increase the sensitivity by up to approximately 20%, compared to the ASTRO definition. Using a rule based on the slope over the previous year or 2 rises leads to a slightly earlier detection of biochemical failure than does the ASTRO definition. Even with the best rule, only approximately 20% of true failures are biochemically detected more than 1 year before the clinically meaningful event time. Conclusion: There is potential for improvement in the ASTRO consensus definition of biochemical failure. 
Further research is needed, in studies with long follow-up times, to evaluate the relationship between various definitions of biochemical failure and
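    The family of rules compared in the paper, consecutive PSA rises plus an optional cutoff on the final value, is easy to state in code. The PSA series and cutoff values below are illustrative; only the three-consecutive-rises rule itself is the ASTRO definition:

    ```python
    def consecutive_rises(psa):
        """Length of the run of consecutive rises ending at each measurement."""
        runs, run = [], 0
        for prev, cur in zip(psa, psa[1:]):
            run = run + 1 if cur > prev else 0
            runs.append(run)
        return runs

    def failure(psa, rises_needed=3, final_cutoff=0.0):
        """ASTRO-style rule: `rises_needed` consecutive rises, final PSA > cutoff.
        Returns the index of the triggering measurement, or None."""
        for i, run in enumerate(consecutive_rises(psa), start=1):
            if run >= rises_needed and psa[i] > final_cutoff:
                return i
        return None

    psa = [4.0, 1.0, 0.8, 1.1, 1.4, 1.9, 2.6]  # nadir then steady rise
    print(failure(psa))                  # detected at the third consecutive rise
    print(failure(psa, rises_needed=2))  # two-rise variant detects one visit earlier
    ```

    This makes the paper's trade-off concrete: relaxing to two rises or adding a cutoff shifts detection time and specificity, which is exactly what the sensitivity/specificity comparison quantifies.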

  18. Memory architecture

    NARCIS (Netherlands)

    2012-01-01

    A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components

  19. Initial Experience of Tomosynthesis-Guided Vacuum-Assisted Biopsies of Tomosynthesis-Detected (2D Mammography and Ultrasound Occult) Architectural Distortions.

    Science.gov (United States)

    Patel, Bhavika K; Covington, Matthew; Pizzitola, Victor J; Lorans, Roxanne; Giurescu, Marina; Eversman, William; Lewin, John

    2018-03-23

    As experience and aptitude in digital breast tomosynthesis (DBT) have increased, radiologists are seeing more areas of architectural distortion (AD) on DBT images compared with standard 2D mammograms. The purpose of this study is to report our experience using tomosynthesis-guided vacuum-assisted biopsies (VABs) for ADs that were occult at 2D mammography and ultrasound and to analyze the positive predictive value for malignancy. We performed a retrospective review of 34 DBT-detected ADs that were occult at mammography and ultrasound. We found a positive predictive value of 26% (nine malignancies in 34 lesions). Eight of the malignancies were invasive and one was ductal carcinoma in situ. The invasive cancers were grade 1 (4/8; 50%), grade 2 (2/8; 25%), or grade 3 (1/8; 13%); information about one invasive cancer was not available. The mean size of the invasive cancers at pathologic examination was 7.5 mm (range, 6-30 mm). Tomosynthesis-guided VAB is a feasible method to sample ADs that are occult at 2D mammography and ultrasound. Tomosynthesis-guided VAB is a minimally invasive method that detected a significant number of carcinomas, most of which were grade 1 cancers. Further studies are needed.

  20. Can You Hear Architecture

    DEFF Research Database (Denmark)

    Ryhl, Camilla

    2016-01-01

    Taking its point of departure in the understanding of architectural quality as based on multisensory architecture, the paper aims to discuss the current acoustic discourse in inclusive design and its implications for the integration of inclusive design in architectural discourse and practice, as well as the understanding of user needs. The paper further points to the need to elaborate and nuance the discourse much more, in order to assure inclusion for the many users living with a hearing impairment or, for other reasons, with a high degree of auditory sensitivity. Using the authors' own research on inclusive design and architectural quality for people with a hearing disability, a newly conducted qualitative evaluation research project in Denmark, and architectural theories on multisensory aspects of architectural experiences, the paper uses examples of existing Nordic building cases to discuss the role...

  1. A framework using cluster-based hybrid network architecture for collaborative virtual surgery.

    Science.gov (United States)

    Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann

    2009-12-01

    Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
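    The failure-detection chain mentioned above rests on a heartbeat-timeout check; when a participant's heartbeat goes stale, the collaboration transitions without it. The server names, timestamps, and timeout below are invented for this sketch and are not the paper's protocol:

    ```python
    def detect_failures(ring, last_beat, now, timeout=3.0):
        """Split participants into alive and failed by heartbeat age."""
        alive = [n for n in ring if now - last_beat[n] <= timeout]
        failed = [n for n in ring if n not in alive]
        return alive, failed

    ring = ["srvA", "srvB", "srvC", "srvD"]
    beats = {"srvA": 9.0, "srvB": 9.5, "srvC": 5.0, "srvD": 9.2}  # srvC went silent
    print(detect_failures(ring, beats, now=10.0))
    ```

    In the full chain, each participant would watch only its successor, so an involuntary departure is noticed by exactly one watcher and reported to the administrative server, which keeps the shared state consistent.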

  2. Fuel element failure detection experiments, evaluation of the experiments at KNK II/1 (Intermediate Report)

    CERN Document Server

    Bruetsch, D

    1983-01-01

    Within the framework of the fuel element failure detection experiments at KNK II with its first core, the measurement devices of INTERATOM were taken into operation in August 1981 and operated almost continuously. From start-up until the end of operation of the first KNK II core, plugs with different fuel test areas were inserted in order to test the efficiency of the different measuring devices. The experimental results obtained during this test phase and the experience gained are described and evaluated in this report. All three measuring techniques (Xenon adsorption line XAS, gas chromatograph GC and precipitator PIT) fulfilled expectations concerning their sensitivity. For XAS and GC, the nuclide-specific sensitivities determined during the preliminary tests could be confirmed. For PIT, the influences of different parameters on the signal yield could be determined; the sensitivity of the device could not be measured due to a missing reference measuring point.

  3. Vital architecture, slow momentum policy

    DEFF Research Database (Denmark)

    Braae, Ellen Marie

    2010-01-01

    A reflection on the relation between Danish landscape architecture policy and the statements made through current landscape architectural projects.

  4. The Architecture Improvement Method: cost management and systematic learning about strategic product architectures

    NARCIS (Netherlands)

    de Weerd-Nederhof, Petronella C.; Wouters, Marc; Teuns, Steven J.A.; Hissel, Paul H.

    2007-01-01

    The architecture improvement method (AIM) is a method for multidisciplinary product architecture improvement, addressing uncertainty and complexity and incorporating feedback loops, facilitating trade-off decision making during the architecture creation process. The research reported in this paper

  5. The ABC Adaptive Fusion Architecture

    DEFF Research Database (Denmark)

    Bunde-Pedersen, Jonathan; Mogensen, Martin; Bardram, Jakob Eyvind

    2006-01-01

    and early implementation of a system capable of adapting to its operating environment, choosing the best-fit combination of the client-server and peer-to-peer architectures. The architecture creates a seamless integration between a centralized hybrid architecture and a decentralized architecture, relying on what...

  6. Architecture humanitarian emergencies

    DEFF Research Database (Denmark)

    Gomez-Guillamon, Maria; Eskemose Andersen, Jørgen; Contreras, Jorge Lobos

    2013-01-01

    Introduced by scientific articles concerning architecture and human rights in light of cultures, emergencies, social equality and sustainability, democracy, economy, artistic development and science into architecture. Concluding in a definition of needs for new roles, processes and education of arc......, Architettura di Alghero in Italy, Architecture and Design of Kocaeli University in Turkey, University of Aguascalientes in Mexico, Architectura y Urbanismo of University of Chile and Escuela de Architectura of Universidad Austral in Chile....

  7. Minimalism in architecture: Abstract conceptualization of architecture

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2015-01-01

    Full Text Available Minimalism in architecture contains the idea of the minimum as a leading creative tendency, considered and interpreted here through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style, through which he explained the two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of observational expression is significant due to its verbal congruence with contemporary minimalist expression. His intuition was further enhanced by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, played a crucial role in the development of modern art and architecture of the twentieth century. Abstraction, one of the basic methods of learning in psychology (separating relevant from irrelevant features, Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present, in the form as well as the function of the basic elements: walls and windows. The case study is the example of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.

  8. Siting Samplers to Minimize Expected Time to Detection

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Lorenzetti, David M.; Sohn, Michael D.

    2012-05-02

    We present a probabilistic approach to designing an indoor sampler network for detecting an accidental or intentional chemical or biological release, and demonstrate it for a real building. In an earlier paper, Sohn and Lorenzetti (1) developed a proof-of-concept algorithm that assumed samplers could return measurements only slowly (on the order of hours). This led to optimal 'detect to treat' architectures, which maximize the probability of detecting a release. This paper develops a more general approach and applies it to samplers that can return measurements relatively quickly (in minutes). This leads to optimal 'detect to warn' architectures, which minimize the expected time to detection. Using a model of a real, large, commercial building, we demonstrate the approach by optimizing networks against uncertain release locations, source terms, and sampler characteristics. Finally, we speculate on rules of thumb for general sampler placement.
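    The siting objective described in this record — minimize the scenario-weighted expected time until the first sampler detects — can be sketched as a small optimization. The building zones, detection-time matrix, and scenario weights below are hypothetical stand-ins for the paper's building model, and a brute-force search stands in for whatever solver the authors used:

    ```python
    from itertools import combinations

    # Hypothetical detection times (minutes): detect[s][loc] = time until a
    # sampler at `loc` first sees the release in scenario s (inf = never).
    INF = float("inf")
    detect = [
        {"lobby": 5.0, "atrium": 12.0, "wing_a": INF},   # scenario 0
        {"lobby": INF, "atrium": 8.0,  "wing_a": 3.0},   # scenario 1
        {"lobby": 20.0, "atrium": 6.0, "wing_a": 15.0},  # scenario 2
    ]
    weights = [0.5, 0.3, 0.2]  # prior probability of each release scenario

    def expected_detection_time(locs):
        """Scenario-weighted expectation of the FIRST detection time."""
        return sum(w * min(d[l] for l in locs) for w, d in zip(weights, detect))

    def best_network(candidates, k):
        """Exhaustively choose the k sampler locations minimizing expected time."""
        return min(combinations(candidates, k), key=expected_detection_time)

    locs = best_network(["lobby", "atrium", "wing_a"], k=2)  # -> ('lobby', 'atrium')
    ```

    Note how a sampler that is best on average ("atrium") need not appear in the best pair once the scenario weights are taken into account; this coupling is why the network is optimized jointly rather than one sampler at a time.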

  9. Correlación vibroacústica: detección cognitiva e identificación de fallas. // Vibroacoustic correlation: Failure identification and cognoscitive detection

    Directory of Open Access Journals (Sweden)

    F. Miyara

    2000-03-01

    Full Text Available A methodology is presented for the diagnostic investigation of failures in industrial machines by means of a vibroacoustic correlator, a device that automatically compares the frequencies of an acoustic signal and a vibration signal. The human ability to detect anomalous noises is thus combined with objective identification of their origin. Key words: Acoustics, Noise, Vibrations, Correlation, Failure Identification.

  10. Economics-driven software architecture

    CERN Document Server

    Mistrik, Ivan; Kazman, Rick; Zhang, Yuanyuan

    2014-01-01

    Economics-driven Software Architecture presents a guide for engineers and architects who need to understand the economic impact of architecture design decisions: the long term and strategic viability, cost-effectiveness, and sustainability of applications and systems. Economics-driven software development can increase quality, productivity, and profitability, but comprehensive knowledge is needed to understand the architectural challenges involved in dealing with the development of large, architecturally challenging systems in an economic way. This book covers how to apply economic consider

  11. Fault Management Architectures and the Challenges of Providing Software Assurance

    Science.gov (United States)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    The satellite systems Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop, titled 'V&V of Fault Management: Challenges and Successes', exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community.
This paper discusses the approach taken to perform the evaluations and preliminary findings from the

  12. Do Architectural Design Decisions Improve the Understanding of Software Architecture? Two Controlled Experiments

    NARCIS (Netherlands)

    Shahin, M.; Liang, P.; Li, Z.

    2014-01-01

    Architectural design decision (ADD) and its design rationale, as a paradigm shift on documenting and enriching architecture design description, is supposed to facilitate the understanding of architecture and the reasoning behind the design rationale, which consequently improves the architecting

  13. An SOA-based architecture framework

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Beisiegel, M.; Hee, van K.M.; König, D.; Stahl, C.

    2007-01-01

    We present a Service-Oriented Architecture (SOA)-based architecture framework. The architecture framework is designed to be close to industry standards, especially the Service Component Architecture (SCA). The framework is language independent and the building blocks of each system, activities

  14. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  15. Rhein-Ruhr architecture

    DEFF Research Database (Denmark)

    2002-01-01

    Catalogue for the exhibition 'Rhein-Ruhr architecture', Meldahls Smedie, 15 March - 28 April 2002. 99 pages.

  16. Background to the teaching-learning process of visual arts for appreciation of architecture in the second cycle

    Directory of Open Access Journals (Sweden)

    Paula Ester Azuy Chiroles

    2016-12-01

    Full Text Available This article systematizes the definitions given to the concept of Plastic Arts from different theoretical and historical-logical studies of the teaching-learning process of the arts for the appreciation of architecture in the second cycle. It shows that this has been a failing and/or limitation in Primary Education in the formation of schoolchildren; the appreciation of architecture, as one of the manifestations of the arts, offers great potential to enhance aesthetic taste and strengthen cultural identity. Documentary analysis of various documents of the primary education curriculum, among them the ministerial resolutions, curriculum programs, television programming, educational software, methodological guidance and methodological preparations, corroborates this problem.

  17. Heart failure: when form fails to follow function.

    Science.gov (United States)

    Katz, Arnold M; Rolett, Ellis L

    2016-02-01

    Cardiac performance is normally determined by architectural, cellular, and molecular structures that determine the heart's form, and by physiological and biochemical mechanisms that regulate the function of these structures. Impaired adaptation of form to function in failing hearts contributes to two syndromes initially called systolic heart failure (SHF) and diastolic heart failure (DHF). In SHF, characterized by high end-diastolic volume (EDV), the left ventricle (LV) cannot eject a normal stroke volume (SV); in DHF, with normal or low EDV, the LV cannot accept a normal venous return. These syndromes are now generally defined in terms of ejection fraction (EF): SHF became 'heart failure with reduced ejection fraction' (HFrEF) while DHF became 'heart failure with normal or preserved ejection fraction' (HFnEF or HFpEF). However, EF is a chimeric index because it is the ratio between SV, which measures function, and EDV, which measures form. In SHF the LV dilates when sarcomere addition in series increases cardiac myocyte length, whereas sarcomere addition in parallel can cause concentric hypertrophy in DHF by increasing myocyte thickness. Although dilatation in SHF allows the LV to accept a greater venous return, it increases the energy cost of ejection and initiates a vicious cycle that contributes to progressive dilatation. In contrast, concentric hypertrophy in DHF facilitates ejection but impairs filling and can cause heart muscle to deteriorate. Differences in the molecular signals that initiate dilatation and concentric hypertrophy can explain why many drugs that improve prognosis in SHF have little if any benefit in DHF.

  18. Bionics in architecture

    Directory of Open Access Journals (Sweden)

    Sugár Viktória

    2017-04-01

    Full Text Available The adaptation of the forms and phenomena of nature is not a recent concept. Observation of natural mechanisms has been a primary source of innovation since prehistoric ages, which can be perceived through the history of architecture. Currently, this idea is coming to the fore again through sustainable architecture and adaptive design. Investigating natural innovations and the economy of evolution during the 20th century led to the creation of a separate scientific discipline, Bionics. Architecture and Bionics are strongly related to each other, since the act of building is as old as human civilization; moreover, its first formal and structural source was obviously the surrounding environment. The present paper discusses the definition of Bionics and its connection with architecture.

  19. Risk analysis of geothermal power plants using Failure Modes and Effects Analysis (FMEA) technique

    International Nuclear Information System (INIS)

    Feili, Hamid Reza; Akar, Navid; Lotfizadeh, Hossein; Bairampour, Mohammad; Nasiri, Sina

    2013-01-01

    Highlights: • Using Failure Modes and Effects Analysis (FMEA) to find potential failures in geothermal power plants. • We considered 5 major parts of geothermal power plants for risk analysis. • Risk Priority Number (RPN) is calculated for all failure modes. • Corrective actions are recommended to eliminate or decrease the risk of failure modes. - Abstract: Renewable energy plays a key role in the transition toward a low carbon economy and the provision of a secure supply of energy. Geothermal energy is a versatile source as a form of renewable energy that meets popular demand. Since some Geothermal Power Plants (GPPs) face various failures, a technique that lets engineering teams eliminate or reduce potential failures is of considerable value. Because no specific published record of an FMEA applied to GPPs with common failure modes has been found, this paper considers the utilization of Failure Modes and Effects Analysis (FMEA) as a convenient technique for determining, classifying and analyzing common failures in typical GPPs. As a result, an appropriate risk scoring of occurrence, detection and severity of failure modes and computation of the Risk Priority Number (RPN) for detecting high-potential failures is achieved. In order to improve accuracy and the ability to analyze the process, the XFMEA software is utilized. Moreover, 5 major parts of a GPP are studied to propose a suitable approach for developing GPPs and increasing reliability by recommending corrective actions for each failure mode.
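    The RPN scoring at the core of FMEA can be sketched in a few lines; the failure modes and the severity/occurrence/detection scores below are hypothetical, not values from the study:

    ```python
    # Hypothetical GPP failure modes; severity/occurrence/detection are 1-10
    # scores assigned by the FMEA team (invented for illustration).
    failure_modes = [
        # (failure mode,           severity, occurrence, detection)
        ("turbine blade erosion",         8,          5,         4),
        ("condenser tube scaling",        6,          7,         3),
        ("well casing corrosion",         9,          4,         7),
    ]

    def rpn(severity, occurrence, detection):
        """Risk Priority Number = S * O * D; higher RPN is addressed first."""
        return severity * occurrence * detection

    ranked = sorted(
        ((name, rpn(s, o, d)) for name, s, o, d in failure_modes),
        key=lambda pair: pair[1],
        reverse=True,
    )
    # ranked[0] -> ('well casing corrosion', 252): top candidate for corrective action
    ```

    Note that a hard-to-detect mode (detection score 7) can outrank a more frequent one, which is exactly the behaviour the RPN product is designed to capture.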

  20. Proceedings of the specialists' meeting on steam generator failure and failure propagation experience, held in Aix-en Provence, France, 26-28 September 1990

    International Nuclear Information System (INIS)

    Maupre, J.P.

    1990-09-01

    The 35 participants, representing 7 Member States and one International Organization discussed recent investigations on leak development and propagation in LMFBR steam generators. The meeting was divided into three technical sessions: review of national status on studies of failure propagation (8 papers); propagation experience on reactor steam generators (4 papers); studies of failure propagation: codes, hydrogen detection, tests (11 papers). A separate abstract was prepared for each of these papers

  1. Architectural Prototyping in Industrial Practice

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2008-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system......, in addressing issues regarding quality attributes, in addressing architectural risks, and in addressing the problem of knowledge transfer and conformance. Little work has been reported so far on the actual industrial use of architectural prototyping. In this paper, we report from an ethnographical study...... and focus group involving architects from four companies in which we have focused on architectural prototypes. Our findings conclude that architectural prototypes play an important role in resolving problems experimentally, but less so in exploring alternative solutions. Furthermore, architectural...

  2. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    International Nuclear Information System (INIS)

    Jian Haifang; Shi Yin

    2009-01-01

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.
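    The expand-then-sort loop of a K-best detector can be sketched in software. The one-dimensional alphabet, observations, and separable Euclidean metric below are illustrative assumptions; the paper's MCUs additionally pre-order branches by local distance so that only partial branches need expanding, which this naive sketch does not do:

    ```python
    # Toy K-best tree search: choose one symbol per layer, keeping only the K
    # partial paths with the smallest accumulated Euclidean metric.
    ALPHABET = [-3, -1, 1, 3]       # e.g. one real dimension of 16-QAM
    received = [0.8, -2.3, 1.9]     # hypothetical per-layer observations

    def k_best(received, k):
        paths = [([], 0.0)]         # (symbols so far, accumulated distance)
        for r in received:
            expanded = [            # expand every surviving path to all branches...
                (syms + [s], dist + (r - s) ** 2)
                for syms, dist in paths
                for s in ALPHABET
            ]
            # ...then keep the K best (the distributed sorter's job in hardware)
            paths = sorted(expanded, key=lambda p: p[1])[:k]
        return paths[0]             # best full-length path and its metric

    best_symbols, best_metric = k_best(received, k=4)  # -> [1, -3, 1]
    ```

    Because K is fixed, the work per layer is bounded regardless of tree depth, which is what makes the approach attractive for a fixed-throughput VLSI pipeline.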

  3. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    Energy Technology Data Exchange (ETDEWEB)

    Jian Haifang; Shi Yin, E-mail: jhf@semi.ac.c [Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2009-07-15

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.

  4. Product Architecture Modularity Strategies

    DEFF Research Database (Denmark)

    Mikkola, Juliana Hsuan

    2003-01-01

    The focus of this paper is to integrate various perspectives on product architecture modularity into a general framework, and also to propose a way to measure the degree of modularization embedded in product architectures. Various trade-offs between modular and integral product architectures...... and how components and interfaces influence the degree of modularization are considered. In order to gain a better understanding of product architecture modularity as a strategy, a theoretical framework and propositions are drawn from various academic literature sources. Based on the literature review......, the following key elements of product architecture are identified: components (standard and new-to-the-firm), interfaces (standardization and specification), degree of coupling, and substitutability. A mathematical function, termed modularization function, is introduced to measure the degree of modularization...

  5. Iraqi architecture in mogul period

    Directory of Open Access Journals (Sweden)

    Hasan Shatha

    2018-01-01

    Full Text Available Iraqi architecture has passed through many periods up to the present, each with its own architectural style; over time these styles have interacted, creating kinds of space forming, spatial relationships, and architectural elements (detailed treatments). The research problem arises from these multiple interacting architectural styles, which blur the general characteristics by which each style can be distinguished. The research studies the architectural style of the period of the Mogul conquest of Baghdad. Its aim is to trace the main characteristics of this architectural style at the level of form, elements, and treatments. The research takes a descriptive and analytical approach to all buildings belonging to this period: by analyzing the general form of each building, its architectural elements, and its architectural treatments, and repeating this procedure for every building, similarities emerge from which conclusions about the pure characteristics of the style of this period can be drawn. The analysis also reveals dissimilarities among buildings of the period, leading the research to examine the interaction among styles in this period, after which the main characteristics of the architectural style of the Mogul conquest in Baghdad can be clearly drawn.

  6. Intuitionistic fuzzy-based model for failure detection.

    Science.gov (United States)

    Aikhuele, Daniel O; Turan, Faiz B M

    2016-01-01

    In identifying the product component(s) to be improved, the customer/user requirements that are mainly considered, gathered through customer surveys using the quality function deployment (QFD) tool, often fail to guarantee or cover aspects of product reliability. Even when they do, there are many misunderstandings. To improve product reliability and quality during the product redesign phase, and to create novel product(s) for customers, the failure information of the existing product and its component(s) should be analyzed and converted into appropriate design knowledge for the design engineer. In this paper, a new intuitionistic fuzzy multi-criteria decision-making method is proposed. The new approach, based on an intuitionistic fuzzy TOPSIS model, uses an exponential-related function to compute the separation measures of alternatives from the intuitionistic fuzzy positive ideal solution (IFPIS) and the intuitionistic fuzzy negative ideal solution (IFNIS). The proposed method has been applied to two practical case studies, and the results have been compared with similar computational approaches in the literature.
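    A minimal sketch of an intuitionistic fuzzy TOPSIS ranking is given below. The component ratings are invented, and a generic exponential-of-distance term stands in for the paper's exponential-related function, whose exact form is not reproduced here:

    ```python
    import math

    # Hypothetical intuitionistic fuzzy ratings (mu, nu) = (membership,
    # non-membership) of three components against two benefit criteria.
    ratings = {
        "comp_a": [(0.7, 0.2), (0.5, 0.4)],
        "comp_b": [(0.6, 0.3), (0.8, 0.1)],
        "comp_c": [(0.4, 0.5), (0.6, 0.3)],
    }
    n = 2  # number of criteria

    # IFPIS = best observed rating per criterion (max mu, min nu);
    # IFNIS = worst observed rating per criterion (min mu, max nu).
    ifpis = [(max(r[j][0] for r in ratings.values()),
              min(r[j][1] for r in ratings.values())) for j in range(n)]
    ifnis = [(min(r[j][0] for r in ratings.values()),
              max(r[j][1] for r in ratings.values())) for j in range(n)]

    def separation(row, ideal):
        # Generic exponential distance; the paper's own exponential-related
        # function may differ in detail.
        d = sum(abs(m - im) + abs(v - iv) for (m, v), (im, iv) in zip(row, ideal))
        return 1.0 - math.exp(-d / (2 * len(row)))

    def closeness(name):
        s_pos = separation(ratings[name], ifpis)   # distance from IFPIS
        s_neg = separation(ratings[name], ifnis)   # distance from IFNIS
        return s_neg / (s_pos + s_neg)             # larger = better alternative

    ranked = sorted(ratings, key=closeness, reverse=True)  # comp_b ranks first
    ```

    The closeness coefficient rewards alternatives that sit far from the negative ideal and near the positive ideal, so the top-ranked component is the weakest candidate for redesign effort only after the criteria are oriented accordingly.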

  7. Toward an Agile Approach to Managing the Effect of Requirements on Software Architecture during Global Software Development

    OpenAIRE

    Alsahli, Abdulaziz; Khan, Hameed; Alyahya, Sultan

    2016-01-01

    Requirement change management (RCM) is a critical activity during software development because poor RCM results in occurrence of defects, thereby resulting in software failure. To achieve RCM, efficient impact analysis is mandatory. A common repository is a good approach to maintain changed requirements, reusing and reducing effort. Thus, a better approach is needed to tailor knowledge for better change management of requirements and architecture during global software development (GSD).The o...

  8. A COMPARATIVE STUDY OF SYSTEM NETWORK ARCHITECTURE Vs DIGITAL NETWORK ARCHITECTURE

    OpenAIRE

    Seema; Mukesh Arya

    2011-01-01

    The efficient management of resources is mandatory for the successful running of any network. This paper describes two of the most popular network architectures: one developed by IBM, Systems Network Architecture (SNA), and the other Digital Network Architecture (DNA). As we know, network standards and protocols are needed by network developers as well as users. Some standards are the IEEE 802.3 standards (The Institute of Electrical and Electronics Engineers 1980) (LAN), IBM Sta...

  9. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. Under field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is part of a daisy-chain structure, neighboring other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach, with the Paris model addressing the fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while the solder joints under test were degraded by fatigue cracking. The test results showed that the RF impedance consistently increased as the solder joints degraded due to the growth of cracks, and particle filtering predicted times to failure of the interconnects close to their actual times to failure, based on the early sensitivity of RF impedance.
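    The prognostic loop (impedance monitoring plus Paris-law particle filtering) might be sketched as below. The growth constants, the linear crack-to-impedance measurement model, and the noise levels are all invented for illustration, not the paper's fitted values:

    ```python
    import math
    import random

    random.seed(0)

    # Paris-law crack growth da/dN = C * dK**m, with dK proportional to
    # sqrt(a); constants are hypothetical.
    C, M = 1e-4, 1.3

    def grow(a):
        return a + C * math.sqrt(a) ** M

    def impedance(a):
        return 50.0 + 5.0 * a            # assumed: RF impedance rises with crack length

    N_PARTICLES = 500
    particles = [random.uniform(0.5, 1.5) for _ in range(N_PARTICLES)]  # prior

    true_a = 1.0
    for _ in range(50):                  # 50 blocks of load cycles
        true_a = grow(true_a)
        z = impedance(true_a) + random.gauss(0.0, 0.05)   # noisy RF measurement
        # propagate particles through the growth model with process noise
        particles = [grow(p) + random.gauss(0.0, 1e-3) for p in particles]
        # weight by measurement likelihood, then resample
        w = [math.exp(-(z - impedance(p)) ** 2 / (2 * 0.05 ** 2)) for p in particles]
        if sum(w) > 0:
            particles = random.choices(particles, weights=w, k=N_PARTICLES)

    estimate = sum(particles) / N_PARTICLES   # posterior mean crack length
    # Extrapolating grow() from `estimate` to a critical crack length gives
    # the predicted remaining time to failure.
    ```

    The filter collapses the broad prior onto the crack length consistent with the impedance readings, which is what lets a prognostic (rather than merely diagnostic) statement be made.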

  10. Peer-To-Peer Architectures in Distributed Data Management Systems for Large Hadron Collider Experiments

    CERN Document Server

    Lo Presti, Giuseppe; Lo Re, G; Orsini, L

    2005-01-01

    The main goal of the presented research is to investigate Peer-to-Peer architectures and to leverage distributed services to support networked autonomous systems. The research work focuses on development and demonstration of technologies suitable for providing autonomy and flexibility in the context of distributed network management and distributed data acquisition. A network management system enables the network administrator to monitor a computer network and properly handle any failure that can arise within the network. An online data acquisition (DAQ) system for high-energy physics experiments has to collect, combine, filter, and store for later analysis a huge amount of data, describing subatomic particles collision events. Both domains have tight constraints which are discussed and tackled in this work. New emerging paradigms have been investigated to design novel middleware architectures for such distributed systems, particularly the Active Networks paradigm and the Peer-to-Peer paradigm. A network man...

  11. Photosensitive-polyimide based method for fabricating various neural electrode architectures

    Directory of Open Access Journals (Sweden)

    Yasuhiro X Kato

    2012-06-01

    Full Text Available An extensive photosensitive polyimide (PSPI)-based method for designing and fabricating various neural electrode architectures was developed. The method aims to broaden the design flexibility and expand the fabrication capability for neural electrodes to improve the quality of recorded signals and integrate other functions. After characterizing PSPI's properties for micromachining processes, we successfully designed and fabricated various neural electrodes, even on a non-flat substrate, using only one PSPI as an insulation material and without time-consuming dry etching processes. The fabricated neural electrodes were an electrocorticogram electrode, a mesh intracortical electrode with a unique lattice-like mesh structure to fixate neural tissue, and a guide cannula electrode with recording microelectrodes placed on the curved surface of a guide cannula as a microdialysis probe. In vivo neural recordings using anesthetized rats demonstrated that these electrodes can be used to record neural activities repeatedly without any breakage or mechanical failure, which promises stable recordings over long periods of time. These successes make us believe that this PSPI-based fabrication is a powerful method, permitting flexible design and easy optimization of electrode architectures for a variety of electrophysiological experimental research with improved neural recording performance.

  12. Launch Vehicle Failure Dynamics and Abort Triggering Analysis

    Science.gov (United States)

    Hanson, John M.; Hill, Ashely D.; Beard, Bernard B.

    2011-01-01

    Launch vehicle ascent is a time of high risk for an on-board crew. There are many types of failures that can kill the crew if the crew is still on-board when the failure becomes catastrophic. For some failure scenarios, there is plenty of time for the crew to be warned and to depart, whereas in some there is insufficient time for the crew to escape. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if the detection and action happens quickly enough. This paper focuses on abort determination based primarily on data already available from the GN&C system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. Derivation of attitude and attitude rate abort triggers to ensure that abort occurs as quickly as possible when needed, but that false positives are avoided, forms a major portion of the paper. Some of the potential failure modes requiring use of these triggers are described, along with analysis used to determine the success rate of getting the crew off prior to vehicle demise.
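    A persistence-filtered attitude-rate trigger of the kind this record discusses — abort quickly on real failures while avoiding false positives — can be illustrated as follows. The threshold and persistence count are hypothetical, not Ares I values:

    ```python
    # Hypothetical abort trigger on attitude-rate error: declare an abort only
    # after several consecutive GN&C cycles exceed the threshold, so a single
    # noisy sample does not cost a false abort.
    THRESHOLD_DEG_S = 10.0   # illustrative rate-error limit, deg/s
    PERSISTENCE = 3          # consecutive violations required

    def abort_monitor(rate_errors):
        """Return the cycle index at which abort is declared, or None."""
        streak = 0
        for i, err in enumerate(rate_errors):
            streak = streak + 1 if abs(err) > THRESHOLD_DEG_S else 0
            if streak >= PERSISTENCE:
                return i
        return None

    nominal = [0.5, 11.0, 0.8, 0.4]            # one noisy spike: no abort
    tumbling = [2.0, 12.0, 15.0, 22.0, 30.0]   # sustained divergence: abort at cycle 3
    ```

    The persistence count is the knob in the trade the paper describes: a larger value suppresses more false positives but delays a genuine abort by that many GN&C cycles.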

  13. RATS: Reactive Architectures

    National Research Council Canada - National Science Library

    Christensen, Marc

    2004-01-01

    This project had two goals: To build an emulation prototype board for a tiled architecture and to demonstrate the utility of a global inter-chip free-space photonic interconnection fabric for polymorphous computer architectures (PCA...

  14. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods......, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further...... collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials...

  15. Travels in Architectural History

    Directory of Open Access Journals (Sweden)

    Davide Deriu

    2016-11-01

    Full Text Available Travel is a powerful force in shaping the perception of the modern world and plays an ever-growing role within architectural and urban cultures. Inextricably linked to political and ideological issues, travel redefines places and landscapes through new transport infrastructures and buildings. Architecture, in turn, is reconstructed through visual and textual narratives produced by scores of modern travellers — including writers and artists along with architects themselves. In the age of the camera, travel is bound up with new kinds of imaginaries; private records and recollections often mingle with official, stereotyped views, as the value of architectural heritage increasingly rests on the mechanical reproduction of its images. Whilst students often learn about architectural history through image collections, the place of the journey in the formation of the architect itself shifts. No longer a lone and passionate antiquarian or an itinerant designer, the modern architect eagerly hops on buses, trains, and planes in pursuit of personal as well as professional interests. Increasingly built on a presumption of mobility, architectural culture integrates travel into cultural debates and design experiments. By addressing such issues from a variety of perspectives, this collection, a special 'Architectural Histories' issue on travel, prompts us to rethink the mobile conditions in which architecture has historically been produced and received.

  16. Federated health information architecture: Enabling healthcare providers and policymakers to use data for decision-making.

    Science.gov (United States)

    Kumar, Manish; Mostafa, Javed; Ramaswamy, Rohit

    2018-05-01

    Health information systems (HIS) in India, as in most other developing countries, support public health management but fail to enable healthcare providers to use data for delivering quality services. Such a failure is surprising, given that the population healthcare data that the system collects are aggregated from patient records. An important reason for this failure is that the health information architecture (HIA) of the HIS is designed primarily to serve the information needs of policymakers and program managers. India has recognised the architectural gaps in its HIS and proposes to develop an integrated HIA. An enabling HIA that attempts to balance the autonomy of local systems with the requirements of a centralised monitoring agency could meet the diverse information needs of various stakeholders. Given the lack of in-country knowledge and experience in designing such an HIA, this case study was undertaken to analyse HIS in the Bihar state of India and to understand whether it would enable healthcare providers, program managers and policymakers to use data for decision-making. Based on a literature review and data collected from interviews with key informants, this article proposes a federated HIA, which has the potential to improve HIS efficiency; provide flexibility for local innovation; cater to the diverse information needs of healthcare providers, program managers and policymakers; and encourage data-based decision-making.

  17. Architectural Knitted Surfaces

    DEFF Research Database (Denmark)

    Mossé, Aurélie

    2010-01-01

    WGSN reports from the Architectural Knitted Surfaces workshop recently held at Shenkar College of Engineering and Design, Tel Aviv, which offered a cutting-edge insight into interactive knitted surfaces. With the increasing role of smart textiles in architecture, the Architectural Knitted Surfaces...... workshop brought together architects and interior and textile designers to highlight recent developments in intelligent knitting. The five-day workshop was led by architects Ayelet Karmon and Mette Ramsgaard Thomsen, together with Amir Cang and Eyal Sheffer from the Knitting Laboratory, in collaboration...

  18. Manipulations of Totalitarian Nazi Architecture

    Science.gov (United States)

    Antoszczyszyn, Marek

    2017-10-01

    The paper considers the controversies surrounding German architecture designed during the Nazi period of 1933-45. This architecture is commonly criticized as lacking innovation, taste and an elementary sense of beauty. Moreover, it has consistently been wiped out of architectural manuals, probably for its undoubted associations with a totalitarian system considered the most maleficent in history. Meanwhile, the architecture of another totalitarian system, which appeared to be no less sinister than the Nazi one, is not stigmatized with such verve: Socrealism architecture, developed especially in Eastern Europe and reportedly containing many similarities with Nazi architecture. Socrealism totalitarian architecture was never condemned like its Nazi counterpart, probably due to politically manipulated propaganda that influenced postwar public opinion. This observation leads to the reflection that perhaps, in the same propagandistic way, some values of Nazi architecture are still consciously dissembled in order to hide the fact that some rules used by Nazi German architects were also consciously used after the war, especially the manipulations that Nazi architecture allegedly consisted of. The paper provides definitions of totalitarian manipulations as well as the ideological assumptions behind their implementation. Finally, a register of confirmed manipulations is provided, supported by a photographic case study.

  19. GAUDI-Architecture design document

    CERN Document Server

    Mato, P

    1998-01-01

    98-064 This document is the result of the architecture design phase of the LHCb event data processing applications project. The architecture of the LHCb software system includes its logical and physical structure, which has been forged by all the strategic and tactical decisions applied during development. Strategic decisions should be made explicitly, with consideration of the trade-offs of each alternative. The other purpose of this document is to serve as the main material for the scheduled architecture review that will take place in the coming weeks. The architecture review will allow us to identify the weaknesses and strengths of the proposed architecture, and we hope to obtain a list of suggested changes to improve it, all well before the system is realized in code. It is in our interest to identify possible problems at the architecture design phase of the software project, before much of the software is implemented. Strategic decisions must be cross checked caref...

  20. Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution

    International Nuclear Information System (INIS)

    Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang

    2012-01-01

    To address the extraction of combined gearbox failures in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution was proposed. Following the frequency-domain blind deconvolution flow, morphological filtering is first used to extract the modulation features embedded in the observed signals; the CFPA algorithm is then employed for complex-domain blind separation; finally, the J-divergence of the spectra is employed as a distance measure to resolve the permutation ambiguity. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be applied efficiently to combined gearbox failure detection in practice.
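
    The J-divergence used above as the spectral distance for resolving the permutation ambiguity is commonly defined as the symmetrised Kullback-Leibler divergence between normalised spectra; the abstract does not give the exact definition, so the sketch below assumes that form, with toy Gaussian-shaped spectra:

```python
import numpy as np

def j_divergence(p, q, eps=1e-12):
    """Symmetrised Kullback-Leibler (J) divergence between two
    normalised power spectra."""
    p = p / p.sum()
    q = q / q.sum()
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return kl_pq + kl_qp

# Two toy spectra with energy at different frequencies: a separated
# channel is matched to the ordering of neighbouring frequency bins
# that minimises the J-divergence, which fixes the permutation.
f = np.linspace(0.0, 1.0, 128)
s1 = np.exp(-((f - 0.2) ** 2) / 0.005)
s2 = np.exp(-((f - 0.7) ** 2) / 0.005)
print(j_divergence(s1, s2) > j_divergence(s1, s1))  # -> True
```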

  1. Architecture at Hydro-Quebec. L'architecture a Hydro-Quebec

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    Architecture at Hydro-Quebec is concerned not only with combining function and aesthetics in designing buildings and other structures for an electrical utility, but also with satisfying technical and administrative needs and helping to solve contemporary problems such as the rational use of energy. Examples are presented of Hydro-Quebec's architectural accomplishments in the design of hydroelectric power stations and their surrounding landscapes, thermal power stations, transmission substations, research and testing facilities, and administrative buildings. It is shown how some buildings are designed to adapt to local environments and to conserve energy. The utility's policy of conserving installations of historic value, such as certain pre-1930 power stations, is illustrated, and aspects of its general architectural policy are outlined. 20 figs.

  2. An Investigation of Digital Instrumentation and Control System Failure Modes

    International Nuclear Information System (INIS)

    Korsah, Kofi; Cetiner, Mustafa Sacit; Muhlheim, Michael David; Poore, Willis P. III

    2010-01-01

    A study sponsored by the Nuclear Regulatory Commission was conducted to investigate digital instrumentation and control (DI and C) systems and module-level failure modes, using a number of databases in both the nuclear and non-nuclear industries. The objectives of the study were to obtain relevant operational experience data to identify generic DI and C system failure modes and failure mechanisms, and to obtain generic insights, with the intent of using the results to establish a unified framework for categorizing failure modes and mechanisms. Of the seven databases studied, the Equipment Performance Information Exchange database was found to contain the most useful data relevant to the study. Even so, the general lack of data quality relative to the objectives of the study did not allow the development of a unified framework for failure modes and mechanisms of nuclear I and C systems. However, an attempt was made to characterize all the failure modes observed (i.e., without regard to the type of I and C equipment under consideration) into common categories. It was found that all the failure modes identified could be characterized as (a) failures detectable or preventable before occurrence, (b) age-related failures, (c) random failures, (d) random/sudden failures, or (e) intermittent failures. The percentage of failure modes characterized as (a) was significant, implying that a significant reduction in system failures could be achieved through improved online monitoring, exhaustive testing prior to installation, adequate configuration control, verification and validation, etc.

  3. Acute renal failure in Yemeni patients

    Directory of Open Access Journals (Sweden)

    Muhamed Al Rohani

    2011-01-01

    Full Text Available Acute renal failure (ARF) is defined as a rapid decrease in the glomerular filtration rate, occurring over a period of hours to days. The Science and Technology University Hospital, Sana′a, is a referral hospital that caters to patients from all parts of Yemen. The aim of this study is to provide a deeper overview of the epidemiological status of ARF in Yemeni patients and to identify its major causes in this country. We studied 203 patients with ARF over a period of 24 months. We found that tropical infectious diseases constituted the major causes of ARF, seen in 45.3% of the patients; malaria was the most important and dominant infectious cause. Hypotension secondary to infection or cardiac failure was seen in 28.6% of the patients. Obstructive nephropathy due to urolithiasis or prostate enlargement was the cause of ARF in a small number of patients. ARF was part of multi-organ failure in 19.7% of the patients and was accompanied by a high mortality rate. The majority of the patients were managed conservatively, and only 39.9% required dialysis. Our study suggests that early detection of renal failure helps improve the outcome and the return of renal function to normal. Mortality was high in patients with malaria and in those with associated hepatocellular failure.

  4. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Science.gov (United States)

    Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon

    2016-01-01

    Key challenges for a future large-aperture, segmented Ultraviolet Optical Infrared (UVOIR) telescope capable of performing a spectroscopic survey of hundreds of exoplanets are sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technology, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.

  5. Toward an Agile Approach to Managing the Effect of Requirements on Software Architecture during Global Software Development

    Directory of Open Access Journals (Sweden)

    Abdulaziz Alsahli

    2016-01-01

    Full Text Available Requirement change management (RCM) is a critical activity during software development, because poor RCM results in defects and thereby in software failure. To achieve RCM, efficient impact analysis is mandatory. A common repository is a good approach to maintain changed requirements, enabling reuse and reducing effort. Thus, a better approach is needed to tailor knowledge for better change management of requirements and architecture during global software development (GSD). The objective of this research is to introduce an innovative approach for handling requirements and architecture changes simultaneously during global software development. The approach makes use of Case-Based Reasoning (CBR) and agile practices: agile practices make our approach iterative, whereas CBR stores requirements and makes them reusable. Twin Peaks is our base model, meaning that requirements and architecture are handled simultaneously. For this research, grounded theory was applied, and interviews with domain experts were conducted. Interview and literature transcripts formed the basis of data collection in the grounded theory. Physical saturation of the theory was achieved through a published case study and a developed tool. Expert reviews and statistical analysis were used for evaluation. The proposed approach resulted in effective change management of requirements and architecture simultaneously during global software development.

  6. Towards a Media Architecture

    DEFF Research Database (Denmark)

    Ebsen, Tobias

    2010-01-01

    This text explores the concept of media architecture as a phenomenon of visual culture that describes the use of screen technology in new spatial configurations in practices of architecture and art. I shall argue that this phenomenon is not necessarily a revolutionary new approach, but rather...... a result of conceptual changes in both modes of visual representation and in expressions of architecture. These are changes that may be described as an evolution of ideas and consequent experiments that can be traced back to changes in the history of art and the various styles and ideologies of architecture....

  7. Architectural Engineers

    DEFF Research Database (Denmark)

    Petersen, Rikke Premer

    engineering is addressed from two perspectives – as an educational response and an occupational constellation. Architecture and engineering are two of the traditional design professions and they frequently meet in the occupational setting, but at educational institutions they remain largely estranged....... The paper builds on a multi-sited study of an architectural engineering program at the Technical University of Denmark and an architectural engineering team within an international engineering consultancy based in Denmark. They are both responding to new tendencies within the building industry where...... the roles of engineers and architects increasingly overlap during the design process, but their approaches reflect different perceptions of the consequences. The paper discusses some of the challenges that design education, not only within engineering, is facing today: young designers must be equipped...

  8. Knowledge and Architectural Practice

    DEFF Research Database (Denmark)

    Verbeke, Johan

    2017-01-01

    of the level of research methods and will explain that the research methods and processes in creative practice research are very similar to grounded theory, which is an established research method in the social sciences. Finally, an argument will be made for a more explicit research attitude in architectural...... This paper focuses on the specific knowledge residing in architectural practice. It is based on the research of 35 PhD fellows in the ADAPT-r (Architecture, Design and Art Practice Training-research) project. The ADAPT-r project innovates architectural research in combining expertise from academia...... and from practice in order to highlight and extract the specific kind of knowledge which resides and is developed in architectural practice (creative practice research). The paper will discuss three ongoing and completed PhD projects and focuses on the outcomes and their contribution to the field...

  9. ISLAMIZATION OF CONTEMPORARY ARCHITECTURE: SHIFTING THE PARADIGM OF ISLAMIC ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    Mustapha Ben- Hamouche

    2012-03-01

    Full Text Available Islamic architecture is often taught as a history course, and thus finds its material limited to cataloguing and studying the legacies of successive empires or the various geographic regions of the Islamic world. In practice, adherent professionals tend to reproduce high styles such as Umayyad, Abbasid, Fatimid, Ottoman, etc., or to recycle well-known elements such as minarets, courtyards and mashrabiyyahs. This approach, endorsed by the present comprehensive Islamic revival, is believed to be the way to defend and revitalize the identity of Muslim societies, initially affected by colonization and now offended by globalization. However, this approach often clashes with contemporary trends in architecture that do not necessarily oppose the essence of Islamic architecture. Furthermore, it sometimes leads to the erroneous belief of relating a priori forms to Islam, which clashes with the timeless and universal character of the Islamic religion. The key question to be asked, then, is: beyond this historicist view, what would an “Islamic architecture” of today be that originates from the essence of Islam and responds to the contemporary conditions, needs and aspirations of present Muslim societies and individuals? To what extent can Islamic architecture benefit from modern progress and contemporary thought in resurrecting itself without losing its essence? The hypothesis of the study is that, just as early Muslim architecture started from the adoption, use and re-use of pre-Islamic architectures before reaching originality, this process, called Islamization, could also take place nowadays with contemporary thought that is mostly developed in Western and non-Islamic environments. The mechanisms in Islam that allowed the “absorption” of pre-existing civilizations should thus structure the Islamization approach and serve scholars and professionals in reaching the new Islamic architecture.

  10. Using simulation to evaluate the performance of resilience strategies and process failures

    Energy Technology Data Exchange (ETDEWEB)

    Levy, Scott N.; Topp, Bryan Embry; Arnold, Dorian C; Ferreira, Kurt Brian; Widener, Patrick; Hoefler, Torsten

    2014-01-01

    Fault tolerance has been identified as a major challenge for future extreme-scale systems. Current predictions suggest that, as systems grow in size, failures will occur more frequently. Because increases in failure frequency reduce the performance and scalability of these systems, significant effort has been devoted to developing and refining resilience mechanisms to mitigate the impact of failures. However, effective evaluation of these mechanisms has been challenging: current systems are smaller and have significantly different architectural features (e.g., interconnect, persistent storage) than we expect to see in next-generation systems. To overcome these challenges, we propose the use of simulation. Simulation has been shown to be an effective tool for investigating the performance characteristics of applications on future systems. In this work, we: identify the set of system characteristics that are necessary for accurate performance prediction of resilience mechanisms for HPC systems and applications; demonstrate how these system characteristics can be incorporated into an existing large-scale simulator; and evaluate the predictive performance of our modified simulator. We also describe how we optimized the simulator for large temporal and spatial scales, allowing it to run 4x faster and use over 100x less memory.

  11. SABRE: a bio-inspired fault-tolerant electronic architecture

    International Nuclear Information System (INIS)

    Bremner, P; Samie, M; Dragffy, G; Pipe, A G; Liu, Y; Tempesti, G; Timmis, J; Tyrrell, A M

    2013-01-01

    As electronic devices become increasingly complex, ensuring their reliable, fault-free operation is becoming correspondingly more challenging. It can be observed that, in spite of their complexity, biological systems are highly reliable and fault tolerant. Hence, we are motivated to take inspiration from biological systems in the design of electronic ones. In SABRE (self-healing cellular architectures for biologically inspired highly reliable electronic systems), we have designed a bio-inspired fault-tolerant hierarchical architecture for this purpose. As in biology, the foundation of the whole system is cellular in nature, with each cell able to detect faults in its operation and trigger intra-cellular or extra-cellular repair as required. At the next level in the hierarchy, arrays of cells are configured and controlled as function units in a transport triggered architecture (TTA), which is able to perform partial dynamic reconfiguration to rectify problems that cannot be solved at the cellular level. Each TTA is, in turn, part of a larger multi-processor system which employs coarser-grain reconfiguration to tolerate faults that cause a processor to fail. In this paper, we describe the details of operation of each layer of the SABRE hierarchy, and how these layers interact to provide a high systemic level of fault tolerance. (paper)

  12. Detecting Slow Deformation Signals Preceding Dynamic Failure: A New Strategy For The Mitigation Of Natural Hazards (SAFER)

    Science.gov (United States)

    Vinciguerra, Sergio; Colombero, Chiara; Comina, Cesare; Ferrero, Anna Maria; Mandrone, Giuseppe; Umili, Gessica; Fiaschi, Andrea; Saccorotti, Gilberto

    2015-04-01

    Rock slope monitoring is a major aim in territorial risk assessment and mitigation. The high velocity that usually characterizes the failure phase of rock instabilities makes traditional instruments based on slope deformation measurements inapplicable for early warning systems. The use of site-specific microseismic monitoring systems, with particular reference to potential destabilizing factors such as rainfall and temperature changes, can allow detection of pre-failure signals in unstable sectors of the rock mass and prediction of a possible acceleration to failure. In October 2013 we deployed a microseismic monitoring system, developed by the University of Turin/Compagnia San Paolo and consisting of a network of 4 triaxial 4.5 Hz seismometers connected to a 12-channel data logger, on an unstable patch of the Madonna del Sasso, Italian Western Alps. The initial characterization, based on geomechanical and geophysical tests, allowed us to understand the instability mechanism and to design a 'large aperture' configuration which encompasses the entire unstable rock mass and can monitor subtle changes in the mechanical properties of the medium. Stability analysis showed that the stability of the slope is due to rock bridges. Continuous recording at 250 Hz sampling frequency (switched in March 2014 to 1 kHz to improve first-arrival picking and obtain wider frequency content) and trigger-based recording using an STA/LTA (Short Time Average over Long Time Average) detection algorithm have been used. More than 2000 events with different waveforms, durations and frequency content were recorded between November 2013 and March 2014. By inspecting the acquired events we identified the key parameters for a reliable distinction of the nature of each signal, i.e., the signal shape in terms of amplitude, duration and kurtosis, and the frequency content in terms of the range of maximum frequency content and the frequency distribution in spectrograms. Four main
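
    The STA/LTA detection mentioned above compares a short-term average of signal energy with a long-term average and declares a trigger where the ratio is high. A minimal sketch at the network's original 250 Hz sampling rate; the window lengths and the synthetic signal are illustrative assumptions, not the deployed configuration:

```python
import numpy as np

def sta_lta(x, fs, sta_win=0.05, lta_win=0.5):
    """STA/LTA ratio of the squared signal, computed with centred
    moving averages over short and long windows (in seconds)."""
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    energy = x ** 2
    sta = np.convolve(energy, np.ones(ns) / ns, mode="same")
    lta = np.convolve(energy, np.ones(nl) / nl, mode="same")
    return sta / (lta + 1e-12)

fs = 250  # Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(t.size)                    # background noise
x[300:320] += 2.0 * np.sin(2 * np.pi * 40 * t[300:320])  # impulsive event
ratio = sta_lta(x, fs)
onset = int(ratio.argmax())  # falls inside the synthetic event
```

    A trigger threshold of 3-5 on the ratio is a common starting point: the short window sets sensitivity to impulsive onsets while the long window tracks the slowly varying background level.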

  13. System and method for detecting a faulty object in a system

    Science.gov (United States)

    Gunnels, John A.; Gustavson, Fred Gehrung; Engle, Robert Daniel

    2010-12-14

    A method (and system) for detecting at least one faulty object in a system including a plurality of objects in communication with each other in an n-dimensional architecture includes probing a first plane of objects in the n-dimensional architecture and probing at least one other plane of objects in the n-dimensional architecture, which would result in identifying a faulty object in the system.
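
    In two dimensions, the plane-probing idea reduces to probing each row and each column of a grid: with a single faulty object, the faulty row-plane and faulty column-plane intersect at that object. The grid size, fault position, and helper names below are hypothetical illustrations, not the patented implementation:

```python
def probe_plane(status, plane):
    """Probe one plane (here: a row or column of a 2-D grid of objects)
    and report whether any object in it is faulty."""
    return any(status[idx] for idx in plane)

# 3x3 grid with a single faulty object at (1, 2).
status = {(r, c): (r, c) == (1, 2) for r in range(3) for c in range(3)}
rows = [[(r, c) for c in range(3)] for r in range(3)]
cols = [[(r, c) for r in range(3)] for c in range(3)]
bad_row = next(r for r, plane in enumerate(rows) if probe_plane(status, plane))
bad_col = next(c for c, plane in enumerate(cols) if probe_plane(status, plane))
print((bad_row, bad_col))  # -> (1, 2)
```

    For an n-dimensional architecture the same scheme probes planes along each axis, so a single fault is located with a number of probes proportional to the side length per axis rather than to the total number of objects.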

  14. Architecture design of the multi-functional wavelet-based ECG microprocessor for realtime detection of abnormal cardiac events.

    Science.gov (United States)

    Cheng, Li-Fang; Chen, Tung-Chien; Chen, Liang-Gee

    2012-01-01

    Most abnormal cardiac events, such as myocardial ischemia, acute myocardial infarction (AMI) and fatal arrhythmia, can be diagnosed through continuous electrocardiogram (ECG) analysis. According to recent clinical research, early detection and alarming of such cardiac events can reduce the time delay to the hospital, and the clinical outcomes of these individuals can be greatly improved. Therefore, a long-term ECG monitoring system with the ability to identify abnormal cardiac events and provide realtime warning to users would be helpful. The combination of a wireless body area sensor network (BASN) and an on-sensor ECG processor is a possible solution for this application. In this paper, we aim to design and implement a digital signal processor suitable for continuous ECG monitoring and alarming based on the continuous wavelet transform (CWT), through the proposed architectures: using both a programmable RISC processor and application-specific integrated circuits (ASICs) for performance optimization. According to the implementation results, the power consumption of the proposed processor integrated with an ASIC for CWT computation is only 79.4 mW, and a power reduction of about 91.6% is achieved compared with the single-RISC processor.
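
    A minimal sketch of the CWT computation at the core of such a processor, assuming a Ricker (Mexican-hat) mother wavelet and direct convolution; the scales and the synthetic QRS-like input are illustrative, not the paper's actual design:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-(t / a) ** 2 / 2.0)

def cwt(x, widths):
    """Continuous wavelet transform by direct convolution: one row of
    coefficients per scale."""
    return np.array([np.convolve(x, ricker(10 * w + 1, w), mode="same")
                     for w in widths])

fs = 360  # Hz, a common ECG sampling rate
t = np.arange(0.0, 1.0, 1.0 / fs)
ecg = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))  # one QRS-like spike
coeffs = cwt(ecg, widths=[2, 4, 8])
peak = int(np.abs(coeffs).sum(axis=0).argmax())    # sample index of the spike
print(peak)  # -> 180
```

    The per-scale convolutions are the regular, data-parallel part of the workload, which is consistent with the paper's split of CWT computation into an ASIC alongside a programmable RISC core.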

  15. On experiments to detect possible failures of relativity theory

    International Nuclear Information System (INIS)

    Rodrigues Junior, W.A.; Tiomno, J.

    1985-01-01

    Two recently proposed experiments by Kolen and Torr, designed to show failures of Einstein's Special Relativity (SR), are analysed. It is pointed out that these papers contain a number of imprecisions and misconceptions, which are cleared up. The widespread misconception about the anisotropy of the propagation of light in vacuum in Lorentz Aether Theory (LAT) is also analysed, showing that the anisotropy is only a coordinate effect. Comparison of the correct results in LAT, leading to violation of SR, with new theoretical and experimental results of Torr et al is made. Some of these new results are shown to be incorrect and/or inconsistent with both SR and LAT. (Author) [pt

  16. Virtual Sensor for Failure Detection, Identification and Recovery in the Transition Phase of a Morphing Aircraft

    Directory of Open Access Journals (Sweden)

    Guillermo Heredia

    2010-03-01

    Full Text Available The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual-sensor-based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wings' aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise and sampling characteristics, as well as turbulence and wind perturbations.

  17. Virtual sensor for failure detection, identification and recovery in the transition phase of a morphing aircraft.

    Science.gov (United States)

    Heredia, Guillermo; Ollero, Aníbal

    2010-01-01

    The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual-sensor-based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wings' aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise and sampling characteristics, as well as turbulence and wind perturbations.

  18. Instrument failure monitoring in nuclear power systems

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1982-01-01

    Methods of monitoring dynamic systems for instrument failures were developed and evaluated, with particular attention to their application to nuclear power plant components. For a linear system, statistical tests on the innovations sequence of a Kalman filter driven by all system measurements provide a failure detection decision and identify any failed sensor. This sequence (in an unfailed system) is zero-mean with calculable covariance; hence, any major deviation from these properties is assumed to be due to an instrument failure. Once a failure is identified, the failed instrument is replaced with an optimal estimate of the measured parameter. This failure accommodation is accomplished using optimally combined data from a bank of accommodation Kalman filters (one for each sensor), each driven by a single measurement. Such sensor replacement allows continued system operation under failed conditions and provides a system operator with information otherwise unavailable. To demonstrate monitor performance, a linear failure monitor was developed for the pressurizer in the Loss-of-Fluid Test (LOFT) reactor plant. LOFT is a small-scale pressurized water reactor (PWR) research facility located at the Idaho National Engineering Laboratory. A linear, third-order model of the pressurizer dynamics was developed from first principles and validated. Using data from the LOFT L6 test series, numerous actual and simulated water level, pressure, and temperature sensor failures were employed to illustrate monitor capabilities. The failure monitor design was extended to nonlinear dynamic systems by replacing all of the monitor's linear Kalman filters with extended Kalman filters, and a nonlinear failure monitor was derived for LOFT reactor instrumentation. A sixth-order reactor model, including descriptions of reactor kinetics, fuel rod heat transfer, and core coolant dynamics, was obtained and verified with test data.
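
    The innovations-sequence test described above can be sketched with a scalar Kalman filter. The model constants, the 4-sigma threshold, and the bias-failure scenario below are illustrative assumptions, not the LOFT pressurizer model: the point is only that the innovations of an unfailed sensor are zero-mean with variance S = P_pred + R, so a normalized innovation far outside that distribution signals a failure.

    ```python
    import random

    random.seed(1)

    # Scalar system: x[k+1] = a*x[k] + w,  z[k] = x[k] + v.
    # For a healthy sensor the innovations z - x_pred are zero-mean with
    # variance S = P_pred + R; a large normalized innovation is attributed
    # to an instrument failure.
    a, Q, R = 0.95, 0.01, 0.04

    def run_monitor(zs, threshold=16.0):   # ~4-sigma test on each innovation
        x, P = 0.0, 1.0
        flags = []
        for z in zs:
            x_pred = a * x                 # time update
            P_pred = a * a * P + Q
            nu = z - x_pred                # innovation
            S = P_pred + R                 # innovations variance
            flags.append(nu * nu / S > threshold)
            K = P_pred / S                 # measurement update
            x = x_pred + K * nu
            P = (1.0 - K) * P_pred
        return flags

    # Simulate 200 steps; the sensor develops a +2.0 bias failure at k = 100.
    # An accommodation step would substitute x_pred for z once flagged.
    x_true, zs = 0.0, []
    for k in range(200):
        x_true = a * x_true + random.gauss(0.0, Q ** 0.5)
        bias = 2.0 if k >= 100 else 0.0
        zs.append(x_true + bias + random.gauss(0.0, R ** 0.5))

    flags = run_monitor(zs)
    ```

    With these constants the steady-state innovations standard deviation is roughly 0.25, so the +2.0 bias drives the normalized statistic far past the threshold at the onset of the failure, while healthy-period false alarms are rare at the 4-sigma level.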

  19. The Political Economy of Architectural Research : Dutch Architecture, Architects and the City, 2000-2012

    NARCIS (Netherlands)

    Djalali, A.

    2016-01-01

    The status of architectural research has not yet been clearly defined. Nevertheless, architectural research has surely become a core element in the profession of architecture. In fact, the tendency seems to be for architects to be less and less involved with building design and construction services, which

  20. Materials Driven Architectural Design and Representation

    DEFF Research Database (Denmark)

    Kruse Aagaard, Anders

    2015-01-01

    This paper aims to outline a framework for a deeper connection between experimentally obtained material knowledge and architectural design. While materials and architecture in the process of realisation are tightly connected, architectural design and representation are often distanced from...... another role in relation to architectural production. It is, in this paper, the intention to point at material research as an active initiator in explorative approaches to architectural design methods and architectural representation. This paper will point at the inclusion of tangible and experimental...... material research in the early phases of architectural design and to that of the architectural set of tools and representation. The paper will, through use of existing research and the author’s own material research and practice, suggest a way of using a combination of digital drawing, digital fabrication...