WorldWideScience

Sample records for model providing reliable

  1. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

Full Text Available The article presents a model of Bring Your Own Device (BYOD) as a network model which provides the user with reliable access to network resources. BYOD is a dynamically developing model that can be applied in many areas. A research network was launched in order to carry out tests, in which the Work Folders service was used as the service of the BYOD model. This service allows the user to synchronize files between a device and the server. Access to the network is provided through wireless communication using the 802.11n standard. The obtained results are presented and analyzed in this article.

  2. Can a deterministic spatial microsimulation model provide reliable small-area estimates of health behaviours? An example of smoking prevalence in New Zealand.

    Science.gov (United States)

    Smith, Dianna M; Pearce, Jamie R; Harland, Kirk

    2011-03-01

    Models created to estimate neighbourhood level health outcomes and behaviours can be difficult to validate as prevalence is often unknown at the local level. This paper tests the reliability of a spatial microsimulation model, using a deterministic reweighting method, to predict smoking prevalence in small areas across New Zealand. The difference in the prevalence of smoking between those estimated by the model and those calculated from census data is less than 20% in 1745 out of 1760 areas. The accuracy of these results provides users with greater confidence to utilize similar approaches in countries where local-level smoking prevalence is unknown.
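
    Deterministic reweighting of this kind is commonly implemented as an iterative proportional fitting (IPF) loop that rescales survey weights until weighted counts match small-area census benchmarks. A minimal sketch under that assumption; the toy data and function names are illustrative, not taken from the paper:

```python
import numpy as np

def ipf_reweight(weights, indicators, targets, iters=50):
    """Rescale survey weights so weighted counts match benchmark totals.

    indicators[i, j] == 1 if respondent i belongs to constraint category j;
    targets[j] is the small-area census count for category j.
    """
    w = weights.astype(float).copy()
    for _ in range(iters):
        for j, target in enumerate(targets):
            members = indicators[:, j] == 1
            current = w[members].sum()
            if current > 0:
                w[members] *= target / current   # scale to hit the benchmark
    return w

# Toy run: 5 survey respondents, 2 overlapping constraint categories
indicators = np.array([[1, 0], [1, 1], [0, 1], [1, 0], [0, 0]])
print(ipf_reweight(np.ones(5), indicators, targets=[120.0, 80.0]))
```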

  3. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

Full Text Available Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistics operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms for finding the optimum supply plan using an economic criterion, together with a model for evaluating the probability of non-failure operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated accordingly. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability is presented.
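
    The reduction to a linear program can be illustrated with a generic cost-minimising supply plan. A hedged sketch using scipy; the channels, costs, demand and the reliability-motivated volume cap are invented for illustration, not taken from the paper:

```python
from scipy.optimize import linprog

# Unit costs for three alternative supply channels (illustrative)
c = [4.0, 5.5, 6.0]

# x1 + x2 + x3 >= 100 (demand), rewritten as -(x1+x2+x3) <= -100,
# and x1 <= 40: a cap on the least reliable channel's share.
A_ub = [[-1.0, -1.0, -1.0],
        [ 1.0,  0.0,  0.0]]
b_ub = [-100.0, 40.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # optimal volumes and total cost
```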

  4. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

... and uncertainties are quantified. Further, estimation of annual failure probability for structural components taking into account possible faults in electrical or mechanical systems is considered. For a representative structural failure mode, a probabilistic model is developed that incorporates grid loss failures ... components. Thus, models of reliability should be developed and applied in order to quantify the residual life of the components. Damage models based on physics of failure combined with stochastic models describing the uncertain parameters are imperative for development of cost-optimal decision tools for Operation & Maintenance planning. Concentrating efforts on the development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied ...

  5. PROVIDING RELIABILITY OF HUMAN RESOURCES IN PRODUCTION MANAGEMENT PROCESS

    Directory of Open Access Journals (Sweden)

    Anna MAZUR

    2014-07-01

Full Text Available People are the most valuable asset of an organization, and the results of a company mostly depend on them. The human factor can also be a weak link in the company and a cause of high risk for many of its processes. The reliability of the human factor in the manufacturing process depends on many factors. The authors include aspects of human error, safety culture, knowledge, communication skills, teamwork and the role of leadership in the developed model of human resource reliability in production process management. Based on a case study and the results of research and observation, the authors present risk areas defined in a specific manufacturing process and the results of evaluating the reliability of human resources in that process.

  6. Leveraging Cloud Technology to Provide a Responsive, Reliable and Scalable Backend for the Virtual Ice Sheet Laboratory Using the Ice Sheet System Model and Amazon's Elastic Compute Cloud

    Science.gov (United States)

    Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.

    2015-12-01

The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.

  7. Hybrid reliability model for fatigue reliability analysis of steel bridges

    Institute of Scientific and Technical Information of China (English)

    曹珊珊; 雷俊卿

    2016-01-01

A kind of hybrid reliability model is presented to solve fatigue reliability problems of steel bridges. The cumulative damage model is one of the models used in fatigue reliability analysis, and the parameter characteristics of the model can be described as probabilistic and interval. A two-stage hybrid reliability model is given, with a theoretical foundation and a solving algorithm for the hybrid reliability problems. The theoretical foundation is established through the consistency relationships between the interval reliability model and the probability reliability model with normally distributed variables. The solving process combines the definition of the interval reliability index with a probabilistic algorithm. Taking into account the parameter characteristics of the S−N curve, a cumulative damage model with hybrid variables is given based on the standards from different countries. Lastly, a case of the steel structure of the Neville Island Bridge is analyzed to verify the applicability of the hybrid reliability model in fatigue reliability analysis based on the AASHTO standard.
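
    For reference, the standard ingredients behind such a cumulative damage model are a Basquin-type S−N curve and Miner's damage sum; a sketch in generic notation (the paper's actual coefficients may differ, and C, m and the critical damage Δ can be treated as probabilistic or interval quantities):

```latex
% Basquin-type S-N curve and Miner's linear damage accumulation
N(S) = C\,S^{-m}, \qquad
D = \sum_{i} \frac{n_i}{N(S_i)}, \qquad
\text{failure when } D \ge \Delta ,
```

    where n_i denotes the applied cycles at stress range S_i and N(S_i) the corresponding cycles to failure.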

  8. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20-25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter ... the actions should be made and the type of actions requires knowledge of the accumulated damage or degradation state of the wind turbine components. For offshore wind turbines, the action times could be extended due to weather restrictions and result in damage or degradation increase of the remaining ... for Operation & Maintenance planning. Concentrating efforts on the development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied ...

  9. Applying reliability models to the maintenance of Space Shuttle software

    Science.gov (United States)

    Schneidewind, Norman F.

    1992-01-01

    Software reliability models provide the software manager with a powerful tool for predicting, controlling, and assessing the reliability of software during maintenance. We show how a reliability model can be effectively employed for reliability prediction and the development of maintenance strategies using the Space Shuttle Primary Avionics Software Subsystem as an example.
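
    The Schneidewind model is a failure-count model fitted to grouped failure data; as a generic illustration of the same prediction idea (not the paper's exact formulation), one can fit an exponential NHPP mean value function to cumulative failure counts and extrapolate. The data below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_failures(t, a, b):
    """Mean cumulative failures of an exponential NHPP at time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 11, dtype=float)
cum_failures = np.array([4, 7, 10, 12, 13, 15, 16, 16, 17, 17], dtype=float)

(a, b), _ = curve_fit(mean_failures, weeks, cum_failures, p0=(20.0, 0.2))
print(f"estimated residual failures: {a - cum_failures[-1]:.1f}")
print(f"predicted failures in weeks 11-12: "
      f"{mean_failures(12, a, b) - mean_failures(10, a, b):.2f}")
```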

  10. Reliability block diagrams to model disease management.

    Science.gov (United States)

    Sonnenberg, A; Inadomi, J M; Bauerfeind, P

    1999-01-01

Studies of diagnostic or therapeutic procedures in the management of any given disease tend to focus on one particular aspect of the disease and ignore the interaction between the multitude of factors that determine its final outcome. The present article introduces a mathematical model that accounts for the joint contribution of various medical and non-medical components to the overall disease outcome. A reliability block diagram is used to model patient compliance, endoscopic screening, and surgical therapy for dysplasia in Barrett's esophagus. The overall probability that a patient with Barrett's esophagus complies with a screening program, is correctly diagnosed with dysplasia, and undergoes successful therapy is 37%. The reduction in the overall success rate, despite the fact that the majority of components are assumed to function with reliability rates of 80% or more, is a reflection of the multitude of serial subsystems involved in disease management. Each serial component influences the overall success rate in a linear fashion. Building multiple parallel pathways into the screening program raises its overall success rate to 91%. Parallel arrangements render systems less sensitive to diagnostic or therapeutic failures. A reliability block diagram provides the means to model the contributions of many heterogeneous factors to disease outcome. Since no medical system functions perfectly, redundancy provided by parallel subsystems assures a greater overall reliability.
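
    The arithmetic behind these figures is plain series/parallel composition of component reliabilities. A minimal sketch; the five 0.82-reliable steps are illustrative values chosen so the serial chain lands near the 37% quoted above, not the paper's exact inputs:

```python
from math import prod

def series(rels):
    """System works only if every serial block works."""
    return prod(rels)

def parallel(rels):
    """System works if at least one redundant path works."""
    return 1.0 - prod(1.0 - r for r in rels)

# Five serial steps (compliance, screening, diagnosis, ...) at ~0.82 each
steps = [0.82] * 5
print(f"serial chain:        {series(steps):.2f}")      # ~0.37

# Duplicating each step as a parallel pair lifts the overall figure
redundant = [parallel([r, r]) for r in steps]
print(f"with parallel pairs: {series(redundant):.2f}")  # ~0.85
```

    Adding redundancy converts each serial step into a parallel block, which is why the overall figure recovers sharply.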

  11. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system, and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time to failure, which is in reality a rather arbitrary value. The practical meaning of models of this kind is thus brought into question. We suggest an approach that overcomes the issue of having to impose an upper bound on time to failure and makes the calculated lower and upper reliability measures more precise. The main assumption is that the failure rate is bounded. The Lagrange method is used to solve the non-linear program. Finally, an example is provided.

  12. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of the proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  14. A Software Reliability Model Using Quantile Function

    Directory of Open Access Journals (Sweden)

    Bijamma Thomas

    2014-01-01

    Full Text Available We study a class of software reliability models using quantile function. Various distributional properties of the class of distributions are studied. We also discuss the reliability characteristics of the class of distributions. Inference procedures on parameters of the model based on L-moments are studied. We apply the proposed model to a real data set.

  15. Virtual private networks can provide reliable IT connections.

    Science.gov (United States)

    Kabachinski, Jeff

    2006-01-01

A VPN is a private network that uses a public network, such as the Internet, to connect remote sites and users together. Instead of using a dedicated hard-wired connection as in a trusted connection or leased lines, a VPN uses a virtual connection routed through the Internet from the organization's private network to the remote site or employee. Typical VPN services allow for security in terms of data encryption as well as means to authenticate, authorize, and account for all the traffic. VPN services also let the organization use whatever network operating system it wishes, since the VPN encapsulates data into the protocols needed to transport it across public lines. The intention of this IT World article was to give the reader an introduction to VPNs. Keep in mind that there are no standard models for a VPN. You're likely to come across many vendors presenting the virtues of their VPN applications and devices when you Google "VPN." However, the general uses, concepts, and principles outlined here should give you a fighting chance to read through the marketing language in the online ads and "white papers."

  16. Analysis on Some of Software Reliability Models

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

The software reliability & maintainability evaluation tool (SRMET 3.0), developed by the Software Evaluation and Test Center of China Aerospace Mechanical Corporation, is introduced in detail in this paper. SRMET 3.0 is supported by seven software reliability models and four software maintainability models. Numerical characteristics of all those models are studied in depth in this paper, and corresponding numerical algorithms for each model are also given.

  17. Delivery Time Reliability Model of Logistics Network

    OpenAIRE

    Liusan Wu; Qingmei Tan; Yuehui Zhang

    2013-01-01

Natural disasters like earthquakes and floods can destroy the existing traffic network, usually accompanied by delivery delay or even network collapse. A logistics-network-related delivery time reliability model defined by a shortest-time entropy is proposed as a means to estimate the actual delivery time reliability. The lower the entropy, the stronger the delivery time reliability, and vice versa. The shortest delivery time is computed separately based on two different assumptions ...

  18. SOFTWARE RELIABILITY MODEL FOR COMPONENT INTERACTION MODE

    Institute of Scientific and Technical Information of China (English)

    Wang Qiang; Lu Yang; Xu Zijun; Han Jianghong

    2011-01-01

With the rapid progress of component technology, the software development methodology of assembling large numbers of components to build complex software systems has matured. However, how to assess application reliability accurately from information on the system architecture together with the component reliabilities has become a knotty problem. In this paper, the defects in formal descriptions of software architecture and the limitations of existing model assumptions are analyzed. Moreover, a new software reliability model called Component Interaction Mode (CIM) is proposed. With this model, the problem that existing component-based software reliability analysis models cannot handle component interactions with non-failure-independent and non-random control transitions is resolved. Finally, practical examples are presented to illustrate the effectiveness of the model.

  19. Modeling of reliable multicasting services

    DEFF Research Database (Denmark)

    Barkauskaite, Monika; Zhang, Jiang; Wessing, Henrik

    2010-01-01

This paper addresses network survivability for multicast transport over MPLS-TP ring topology networks. Protection mechanisms standardized for unicast are not fully suitable for multicast point-to-multipoint transmission, and multicast schemes are not standardized yet. Therefore, this paper investigates one of the proficient protection schemes and uses OPNET Modeler for analyzing and designing networks with the chosen protection method. For failure detection and protection switching initiation, OAM (Operation, Administration and Maintenance) functions will be added to the system model. ...

  20. Assessment of stochastically updated finite element models using reliability indicator

    Science.gov (United States)

    Hua, X. G.; Wen, Q.; Ni, Y. Q.; Chen, Z. Q.

    2017-01-01

Finite element (FE) model updating techniques have been a viable approach to correcting an initial mathematical model based on test data. Validation of the updated FE models is usually conducted by comparing model predictions with independent test data that have not been used for model updating. This approach of model validation cannot be readily applied in the case of a stochastically updated FE model. In recognizing that structural reliability is a major decision factor throughout the lifecycle of a structure, this study investigates the use of structural reliability as a measure for assessing the quality of stochastically updated FE models. A recently developed perturbation method for stochastic FE model updating is first applied to attain the stochastically updated models by using the measured modal parameters with uncertainty. The reliability index and failure probability for predefined limit states are computed for the initial and the stochastically updated models, respectively, and are compared with those obtained from the 'true' model to assess the quality of the two models. Numerical simulation of a truss bridge is provided as an example. The simulated modal parameters involving different uncertainty magnitudes are used to update an initial model of the bridge. It is shown that the reliability index obtained from the updated model is much closer to the true reliability index than that obtained from the initial model in the case of small uncertainty magnitude; in the case of large uncertainty magnitude, the reliability index computed from the initial model rather than from the updated model is closer to the true value. The present study confirms the usefulness of measurement-calibrated FE models and at the same time highlights the importance of uncertainty reduction in test data for reliable model updating and reliability evaluation.
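
    The two quantities being compared here are related in the standard way; for a limit-state function g(X) over the uncertain variables X, the failure probability and reliability index satisfy:

```latex
P_f = P\left[\, g(\mathbf{X}) \le 0 \,\right], \qquad
\beta = -\Phi^{-1}(P_f),
```

    where Φ is the standard normal cumulative distribution function.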

  1. Towards a reliable animal model of migraine

    DEFF Research Database (Denmark)

    Olesen, Jes; Jansen-Olesen, Inger

    2012-01-01

The pharmaceutical industry shows a decreasing interest in the development of drugs for migraine. One of the reasons for this could be the lack of reliable animal models for studying the effect of acute and prophylactic migraine drugs. The infusion of glyceryl trinitrate (GTN) is the best validated and most studied human migraine model. Several attempts have been made to transfer this model to animals. The different variants of this model are discussed as well as other recent models.

  2. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

When modeling system performance of space-based detection systems, it is important to consider spacecraft reliability. As space vehicles age, their components become prone to failure for a variety of reasons, such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust their fuel supplies. Typically, failure is divided into two categories: engineering mistakes and technology surprise. This document reports on a method of simulating space vehicle reliability in the DIORAMA framework.

  3. Delivery Time Reliability Model of Logistics Network

    Directory of Open Access Journals (Sweden)

    Liusan Wu

    2013-01-01

Full Text Available Natural disasters like earthquakes and floods can destroy the existing traffic network, usually accompanied by delivery delay or even network collapse. A logistics-network-related delivery time reliability model defined by a shortest-time entropy is proposed as a means to estimate the actual delivery time reliability. The lower the entropy, the stronger the delivery time reliability, and vice versa. The shortest delivery time is computed separately based on two different assumptions. If a path is considered without capacity restriction, the shortest delivery time is positively related to the length of the shortest path, and if a path is considered with capacity restriction, a minimax programming model is built to compute the shortest delivery time. Finally, an example is utilized to confirm the validity and practicality of the proposed approach.

  4. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure ... function if only partial failure information is available. An example is provided.
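
    The resulting bounds follow directly from the bounded-failure-rate presupposition; a sketch in standard notation, assuming a failure rate confined to a fixed interval:

```latex
\underline{\lambda} \le \lambda(t) \le \overline{\lambda}
\;\Longrightarrow\;
e^{-\overline{\lambda}\,t}
\;\le\;
R(t) = \exp\!\left(-\int_0^t \lambda(u)\,du\right)
\;\le\;
e^{-\underline{\lambda}\,t}.
```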

  6. A random effects generalized linear model for reliability compositive evaluation

    Institute of Scientific and Technical Information of China (English)

    ZHAO Hui; YU Dan

    2009-01-01

This paper first proposes a random effects generalized linear model to evaluate the storage life of one kind of highly reliable, small-sample-sized products by combining multi-source information on products coming from the same population but stored in different environments. The relevant algorithms are also provided. Simulation results manifest the soundness and effectiveness of the proposed model.

  7. A Censored Nonparametric Software Reliability Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

This paper analyses the effect of censoring on the estimation of failure rate, and presents a framework of a censored nonparametric software reliability model. The model is based on nonparametric testing of a monotonically decreasing failure rate and weighted kernel estimation of the failure rate under the constraint that it is monotonically decreasing. Not only does the model have the advantages of few assumptions and weak constraints, but the residual number of defects in the software system can also be estimated. The numerical experiment and real data analysis show that the model performs well with censored data.

  8. AX-5 space suit reliability model

    Science.gov (United States)

    Reinhardt, AL; Magistad, John

    1990-01-01

    The AX-5 is an all metal Extra-vehicular (EVA) space suit currently under consideration for use on Space Station Freedom. A reliability model was developed based on the suit's unique design and on projected joint cycle requirements. Three AX-5 space suit component joints were cycled under simulated load conditions in accordance with NASA's advanced space suit evaluation plan. This paper will describe the reliability model developed, the results of the cycle testing, and an interpretation of the model and test results in terms of projected Mean Time Between Failure for the AX-5. A discussion of the maintenance implications and life cycle for the AX-5 based on this projection is also included.

  9. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing was carried out synchronously to verify the MMM results. It was found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs was investigated, showing that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R1 and the verified reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
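
    When, as here, the interfering quantities are Gaussian, the stress-strength interference reliability takes the usual normal-theory form; a generic sketch (the subscripts are illustrative, not the paper's symbols):

```latex
R = P\left(X_{\mathrm{strength}} > X_{\mathrm{stress}}\right)
  = \Phi\!\left(
      \frac{\mu_{\mathrm{strength}} - \mu_{\mathrm{stress}}}
           {\sqrt{\sigma_{\mathrm{strength}}^{2} + \sigma_{\mathrm{stress}}^{2}}}
    \right).
```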

  10. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

Our daily lives are supported by high-technology systems, of which computer systems are typical examples. Much more importantly, we have to maintain such systems without failure, yet we cannot predict when they will fail or how to fix them without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance for the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability systems using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." It consists of 12 chapters on this theme, from the different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  11. Reliability Analysis and Modeling of ZigBee Networks

    Science.gov (United States)

    Lin, Cheng-Min

The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will stop if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of these layers. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. However, a division technique is applied here for mesh networks, because their complexity is higher than that of the others. A mesh network using the division technique is classified into several non-reducible series systems and edge-parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through the proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in the parallel systems increases, while reliability drops quickly when the number of edges and the number of nodes increase for all three networks. Greater use of resources is another factor that decreases reliability. However, lower network reliability will occur due to ...

  12. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, and stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical ...

  13. The Impact of Process Capability on Service Reliability for Critical Infrastructure Providers

    Science.gov (United States)

    Houston, Clemith J., Jr.

    2013-01-01

    This study investigated the relationship between organizational processes that have been identified as promoting resiliency and their impact on service reliability within the scope of critical infrastructure providers. The importance of critical infrastructure to the nation is evident from the body of research and is supported by instances where…

  14. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. This paper therefore intends to explore ways of improving the cost calculation regimes of logistics service providers and to show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experiences of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.

  15. Modeling and Simulation Reliable Spacecraft On-Board Computing

    Science.gov (United States)

    Park, Nohpill

    1999-01-01

The proposed project will investigate modeling- and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is employed in it. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a centered graphical user interface.

  16. Reliability modeling and analysis of smart power systems

    CERN Document Server

    Karki, Rajesh; Verma, Ajit Kumar

    2014-01-01

    The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti

  17. Suitability Analysis of Continuous-Use Reliability Growth Projection Models

    Science.gov (United States)

    2015-03-26

... exists for all types, shapes, and sizes. The primary focus of this study is a comparison of reliability growth projection models designed for ... requirements to use reliability growth models, recent studies have noted trends in reliability failures throughout the DoD. In [14] Dr. Michael Gilmore ... so a strict exponential distribution was used to stay within their assumptions. In reality, however, reliability growth models often must be used ...

  18. From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?

    Directory of Open Access Journals (Sweden)

    Mattia Manica

    2017-03-01

... probability obtained by introducing these estimates in risk models were similar to those based on females/HLC (R0 > 1 in 86% and 40% of sampling dates for Chikungunya and Zika, respectively); R0 < 1 for Chikungunya is also to be expected when few/no eggs/day are collected by ovitraps. Discussion: This work provides the first evidence of the possibility to predict the mean number of adult biting Ae. albopictus females based on the mean number of eggs, and to compute the threshold of eggs/ovitrap associated with epidemiological risk of arbovirus transmission in the study area. Overall, however, the large confidence intervals in the model predictions represent a caveat regarding the reliability of monitoring schemes based exclusively on ovitrap collections to estimate numbers of biting females and plan control interventions.

  19. From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?

    Science.gov (United States)

    Manica, Mattia; Rosà, Roberto; Della Torre, Alessandra; Caputo, Beniamino

    2017-01-01

... in risk models were similar to those based on females/HLC (R0 > 1 in 86% and 40% of sampling dates for Chikungunya and Zika, respectively); R0 < 1 for Chikungunya is also to be expected when few/no eggs/day are collected by ovitraps. This work provides the first evidence of the possibility to predict the mean number of adult biting Ae. albopictus females based on the mean number of eggs, and to compute the threshold of eggs/ovitrap associated with epidemiological risk of arbovirus transmission in the study area. Overall, however, the large confidence intervals in the model predictions represent a caveat regarding the reliability of monitoring schemes based exclusively on ovitrap collections to estimate numbers of biting females and plan control interventions.

  20. From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?

    Science.gov (United States)

    Manica, Mattia; Rosà, Roberto; della Torre, Alessandra

    2017-01-01

... introducing these estimates in risk models were similar to those based on females/HLC (R0 > 1 in 86% and 40% of sampling dates for Chikungunya and Zika, respectively); R0 < 1 for Chikungunya is also to be expected when few/no eggs/day are collected by ovitraps. Discussion: This work provides the first evidence of the possibility to predict the mean number of adult biting Ae. albopictus females based on the mean number of eggs, and to compute the threshold of eggs/ovitrap associated with epidemiological risk of arbovirus transmission in the study area. Overall, however, the large confidence intervals in the model predictions represent a caveat regarding the reliability of monitoring schemes based exclusively on ovitrap collections to estimate numbers of biting females and plan control interventions. PMID:28321362

  1. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane.

    Science.gov (United States)

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B; Aanæs, Henrik; Alkjær, Tine; Simonsen, Erik B

    2014-09-01

Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was to develop a new approach based on highly detailed 3D reconstructions in combination with a translationally and rotationally unconstrained articulated model. The highly detailed 3D reconstructions were synthesized from an eight-camera setup using a stereo vision approach. The subject-specific articulated model was generated with three rotational and three translational degrees of freedom for each limb segment and without any constraints on the range of motion. This approach was tested on 3D gait analysis and compared to a marker-based method. The experiment included ten healthy subjects in whom the hip, knee and ankle joints were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker-based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable.

  2. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane

    DEFF Research Database (Denmark)

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B.

    2014-01-01

Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was to develop a new approach based on highly detailed 3D reconstructions in combination with a translationally and rotationally unconstrained articulated model. The highly detailed 3D reconstructions were synthesized from an eight-camera setup using a stereo vision approach. The subject-specific articulated model was generated with three rotational and three translational degrees of freedom for each limb segment and without any constraints on the range of motion. This approach was tested on 3D gait analysis and compared to a marker-based method. The experiment included ten healthy ...

  3. Modeling Reliability Growth in Accelerated Stress Testing

    Science.gov (United States)

    2013-12-01

... projection models for both continuous use and discrete use systems found anywhere in the literature. The review comprises a synopsis of over 80 ... pertaining to the research that may have been unfamiliar to the reader. The chapter has provided a synopsis of the research accomplished in the fields of ... Cox, "Analysis of the probability and risk of cause specific failure," International Journal of Radiology Oncology, Biology, Physics, vol. 29, no. 5 ...

  4. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

The purpose of this paper is to present the innovative reliability modeling of the Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by the PETROBRAS Gas and Power Department and Det Norske Veritas, and was carried out with the objective of evaluating the security of supply of the 2010 gas network design, which was conceived to connect the Brazilian Northeast and Southeast regions. To provide best-in-class analysis, state-of-the-art software was used to quantify the availability and efficiency of the overall network and its individual components.

  5. Equivalent reliability polynomials modeling EAS and their geometries

    Directory of Open Access Journals (Sweden)

    Hassan Zahir Abdul Haddi

    2015-07-01

Full Text Available In this paper we introduce two equivalent techniques for the reliability analysis of electrical aircraft systems (EAS): (i) a graph theory technique, and (ii) a simplifying diffeomorphism technique. Geometric modeling of reliability models is based on algebraic hypersurfaces, whose intrinsic properties are able to select those models which are relevant for applications. The basic idea is to cover the reliability hypersurfaces by exponentially decaying curves. Most of the calculations in this paper were made using Maple and Matlab software.

  6. A literature review on inventory modeling with reliability consideration

    Directory of Open Access Journals (Sweden)

    Imtiaz Ahmed

    2014-01-01

Full Text Available Inventories are materials stored, either waiting for processing or undergoing processing, and in some cases awaiting future delivery. Inventories are treated both as blessings and as evils. Since they are like money placed in a drawer, assets tied up in investments, incurring costs for the care of the stored material, and subject to spoilage and obsolescence, there has been a spate of programs developed by industries, all aimed at reducing inventory levels and increasing efficiency on the shop floor. Nevertheless, inventories do have positive purposes: they provide a stable source of input required for production, and less frequent replenishment may reduce ordering costs because of economies of scale. Finished goods inventories provide for better customer service. Formulating a suitable inventory model is thus one of the major concerns for an industry. Considering the reliability of any process is likewise an important trend in current research activities. Inventory models can be both deterministic and probabilistic, and both must account for the reliability of the associated production process. This paper discusses the major works in the field of inventory modeling driven by reliability considerations, ranging from the very beginning to the latest works just published.

  7. Reliability models of belt drive systems under slipping failure mode

    Directory of Open Access Journals (Sweden)

    Peng Gao

    2017-01-01

Full Text Available Conventional reliability assessment and reliability-based optimal design of belt drives are based on the stress-strength interference model. However, the stress-strength interference model is essentially a static model, and the sensitivity analysis of belt drive reliability with respect to design parameters needs further investigation. In this article, time-dependent factors that contribute to the dynamic characteristics of reliability are pointed out. Moreover, dynamic reliability models and failure rate models of belt drive systems under the failure mode of slipping are developed. Furthermore, dynamic sensitivity models of belt drive reliability based on the proposed dynamic reliability models are proposed. In addition, numerical examples are given to illustrate the proposed models and to analyze the influences of design parameters on the dynamic characteristics of reliability, failure rate, and sensitivity functions. The results show that the statistical properties of design parameters have different influences on the reliability and failure rate of belt drives for different values of design parameters and different operational durations.
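
    The three families of models mentioned are linked by standard identities; in generic notation, with f and R the failure density and reliability and θ a design parameter:

```latex
h(t) \;=\; \frac{f(t)}{R(t)} \;=\; -\frac{\mathrm{d}}{\mathrm{d}t}\,\ln R(t),
\qquad
S_{\theta}(t) \;=\; \frac{\partial R(t;\theta)}{\partial \theta}.
```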

  8. Combined HW/SW Reliability Models.

    Science.gov (United States)

    1982-04-01

  9. Singularity of Some Software Reliability Models and Parameter Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

According to the principle that "the failure data is the basis of software reliability analysis," we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning out conclusions from the fitting results on the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in SRES.

  10. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: materials/device degradation; degradation kinetics; time-to-failure modeling; statistical tools; failure-rate modeling; accelerated testing; ramp-to-failure testing; important failure mechanisms for integrated circuits; important failure mechanisms for mechanical components; conversion of dynamic stresses into static equivalents; small design changes producing major reliability improvements; screening methods; heat generation and dissipation; sampling plans and confidence intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  11. Using the Weibull distribution reliability, modeling and inference

    CERN Document Server

    McCool, John I

    2012-01-01

Understand and utilize the latest developments in Weibull inferential methods. While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution ...
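
    As a minimal illustration of working with the Weibull distribution in practice (a sketch, not an excerpt from the book), the following computes reliability, hazard and B10 life and fits the shape/scale parameters by maximum likelihood; the parameter values are invented:

```python
import numpy as np
from scipy.stats import weibull_min

beta, eta = 1.8, 1.2e4   # shape, scale (illustrative component-life values)
dist = weibull_min(c=beta, scale=eta)

t = 5_000.0
print(f"R({t:g} h) = {dist.sf(t):.3f}")            # survival / reliability
print(f"h({t:g} h) = {dist.pdf(t)/dist.sf(t):.2e} failures/h")
print(f"B10 life   = {dist.ppf(0.10):,.0f} h")     # time to 10% failures

# Maximum-likelihood fit to (complete, uncensored) failure-time data
data = dist.rvs(size=200, random_state=1)
c_hat, loc, scale_hat = weibull_min.fit(data, floc=0)
print(f"fitted shape = {c_hat:.2f}, scale = {scale_hat:,.0f}")
```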

  12. A Note on Structural Equation Modeling Estimates of Reliability

    Science.gov (United States)

    Yang, Yanyun; Green, Samuel B.

    2010-01-01

    Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…

  13. Nanowire growth process modeling and reliability models for nanodevices

    Science.gov (United States)

    Fathi Aghdam, Faranak

Nowadays, nanotechnology is becoming an inescapable part of everyday life. The big barrier in front of its rapid growth is our incapability of producing nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10%), which makes fabrication of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control nano-structure synthesis variations. The main directions of reliability research in nanotechnology can be classified either from a material perspective or from a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano level (nanomaterials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems at nano-level architectures by taking into account the reliability of future products. In this dissertation, we have investigated two topics on both nano-materials and nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model considering the shadowing effect and shared substrate diffusion area to determine the optimal pitch that would ensure the minimum competition between nanowires. A sigmoid function is used in the model, and the least squares estimation method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays
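
    The sigmoid-plus-least-squares estimation step described above can be sketched generically; the functional form, data and parameter names below are illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(pitch, L, k, x0):
    """Logistic response of nanowire length/quality to catalyst pitch."""
    return L / (1.0 + np.exp(-k * (pitch - x0)))

pitch  = np.array([200, 300, 400, 500, 600, 800, 1000], dtype=float)  # nm
length = np.array([1.1, 1.9, 3.2, 4.6, 5.4, 5.9, 6.0])                # um

(L, k, x0), _ = curve_fit(sigmoid, pitch, length, p0=(6.0, 0.01, 450.0))
print(f"saturation length ~{L:.1f} um, half-effect pitch ~{x0:.0f} nm")
```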

  14. Reliability models applicable to space telescope solar array assembly system

    Science.gov (United States)

    Patil, S. A.

    1986-01-01

A complex system may consist of a number of subsystems with several components in series, in parallel, or in a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel-series combination systems. The models are developed by assuming the failure rates of the components as functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
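
    The "k failures out of n" assumption corresponds to a binomial k-out-of-n reliability structure. A sketch under the convention that the subsystem survives up to k − 1 failures (the paper may use the complementary convention); the numbers are illustrative:

```python
from math import comb

def r_k_out_of_n(n, k_fail, p):
    """Reliability of a subsystem of n i.i.d. components (each working
    with probability p) that survives up to k_fail - 1 failures."""
    return sum(comb(n, i) * (1 - p)**i * p**(n - i) for i in range(k_fail))

# Illustrative: an assembly of 20 strings tolerating up to 2 losses
print(f"{r_k_out_of_n(20, 3, 0.98):.4f}")

# k_fail = 1 reduces to a pure series system, k_fail = n to pure parallel
assert abs(r_k_out_of_n(4, 1, 0.9) - 0.9**4) < 1e-12
assert abs(r_k_out_of_n(4, 4, 0.9) - (1 - 0.1**4)) < 1e-12
```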

  15. Modeling of humidity-related reliability in enclosures with electronics

    DEFF Research Database (Denmark)

    Hygum, Morten Arnfeldt; Popok, Vladimir

    2015-01-01

Reliability of electronics that operate outdoors is strongly affected by environmental factors such as temperature and humidity. Fluctuations of these parameters can lead to water condensation inside enclosures. Therefore, modelling of humidity distribution in a container with air and freely exposed ... to predict humidity-related reliability of a printed circuit board (PCB) located in a cabinet by combining structural reliability methods and non-linear diffusion models. This framework can, thus, be used for reliability prediction from a climatic point of view. The proposed numerical approach is then tested ...

  16. A Model of Ship Auxiliary System for Reliable Ship Propulsion

    Directory of Open Access Journals (Sweden)

    Dragan Martinović

    2012-03-01

Full Text Available The main purpose of a vessel is to transport goods and passengers at minimum cost. Analysis of relevant global databases on ship machinery failures shows that the most frequent failures occur precisely on the generator-running diesel engines. Any failure in the electrical system can leave the ship without propulsion, even if the main engine is working properly. In that case, the consequences could be devastating: higher running expenses, damage to the ship, oil spill or substantial marine pollution. These are the reasons why solutions that will prevent the ship from being unable to manoeuvre during her exploitation should be implemented. Therefore, it is necessary to define a propulsion restoration model which would not depend on the primary electrical energy. The paper provides a model of the marine auxiliary system for more reliable propulsion. This includes starting, reversing and stopping of the propulsion engine. The proposed reliable propulsion model, based on the use of a shaft generator and an excitation engine, enables the restoration of propulsion following total failure of the electrical energy primary production system, and self-propelled ship navigation. A ship is an important factor in the technology of transport, and the implementation of this model increases safety, reduces downtime, and significantly decreases hazards of pollution damage. KEYWORDS: reliable propulsion, failure, ship auxiliary system, control, propulsion restoration

  17. Development of Model for Providing Feasible Scholarship

    Directory of Open Access Journals (Sweden)

    Harry Dhika

    2016-05-01

    Full Text Available The current work focuses on the development of a model to determine feasible scholarship recipients on the basis of the naïve Bayes method, using very simple and limited attributes. Those attributes are the applicant's academic year, represented by their semester; academic performance, represented by their GPA; socioeconomic ability, which represents the economic capability to attend a higher education institution; and their level of social involvement. To establish and evaluate the model performance, empirical data are collected; the data of 100 students are divided into 80 records for model training and the remaining 20 for model testing. The results suggest that the model is capable of providing recommendations for potential scholarship recipients with an accuracy of 95%.
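
    To make the classification step concrete, here is a hedged sketch of a naïve Bayes screening over the four attributes and the 80/20 split named in the abstract; the synthetic data and the choice of scikit-learn's GaussianNB are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Columns: semester (academic year), GPA, socioeconomic score, social involvement
X = np.column_stack([
    rng.integers(1, 9, 100),        # semester
    rng.uniform(2.0, 4.0, 100),     # GPA
    rng.integers(1, 6, 100),        # socioeconomic ability (1-5, assumed scale)
    rng.integers(1, 6, 100),        # social involvement (1-5, assumed scale)
])
y = rng.integers(0, 2, 100)         # 1 = feasible recipient (synthetic labels)

# 80 records for training, 20 for testing, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=80, test_size=20, random_state=0)
model = GaussianNB().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```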

  18. Charge transport model to predict intrinsic reliability for dielectric materials

    Energy Technology Data Exchange (ETDEWEB)

    Ogden, Sean P. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States); Borja, Juan; Plawsky, Joel L., E-mail: plawsky@rpi.edu; Gill, William N. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Lu, T.-M. [Department of Physics, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yeap, Kong Boon [GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States)

    2015-09-28

    Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.

  19. Models of travel time and reliability for freight transport

    Energy Technology Data Exchange (ETDEWEB)

    Terziev, M.N.; Roberts, P.O.

    1976-12-01

    The model produces a probability distribution of the trip time associated with the shipment of freight between a given origin and destination by a given mode and route. Using distributions of the type produced by the model, it is possible to determine two important measures of the quality of service offered by the carrier. These measures are the mean travel time and the reliability of delivery. The reliability measure describes the spread of the travel-time distribution. The model described herein was developed originally as part of the railroad rationalization study conducted at MIT and sponsored by the Federal Railroad Administration. This work built upon earlier research in railroad reliability models. Because of the predominantly rail background of this model, the initial discussion focuses on the problem of modeling rail-trip-time reliability. Then, it is shown that the model can also be used to study truck and barge operations.

  20. Developing Fast and Reliable Flood Models

    DEFF Research Database (Denmark)

    Thrysøe, Cecilie; Toke, Jens; Borup, Morten

    2016-01-01

    State-of-the-art flood modelling in urban areas is based on distributed physically based models. However, their usage is impeded by high computational demands and numerical instabilities, which make calculations both difficult and time consuming. To address these challenges we develop and test...... is modelled by response surface surrogates, which are empirical data-driven models. These are trained using the volume-discharge relations by piecewise linear functions. (ii) The surface flooding is modelled by lower-fidelity physically based surrogates, which are based on surface depressions and flow paths.... A surrogate model is set up for a case study area in Aarhus, Denmark, to replace a MIKE FLOOD model. The drainage surrogates are able to reproduce the MIKE URBAN results for a set of rain inputs. The coupled drainage-surface surrogate model lacks details in the surface description which reduces its overall...

  1. Modelling and Simulation of Scraper Reliability for Maintenance

    Institute of Scientific and Technical Information of China (English)

    HUANG Liang-pei; LU Zhong-hai; GONG Zheng-li

    2011-01-01

    A scraper conveyor is a kind of heavy machinery which can continuously transport goods and is widely used in mines, ports and storage enterprises. Since the scraper failure rate directly affects production costs and production capacity, the evaluation and prediction of scraper conveyor reliability are important for these enterprises. In this paper, the reliabilities of different parts are classified and discussed according to their structural characteristics and different failure factors. Based on the component's time-to-failure density function, a reliability model of the scraper chain is constructed to track the age distribution of the part population and the reliability change of the scraper chain. Based on the stress-strength interference model, and considering the decrease of strength due to fatigue, a dynamic reliability model of components such as gears and shafts is developed to observe the change of part reliability with the service time of the scraper. Finally, a system reliability model of the scraper is established to simulate and calculate the scraper reliability for maintenance.
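
    The stress-strength interference idea mentioned above can be sketched as follows; the normal distributions and the linear strength decay with service time are illustrative assumptions, not the paper's parameters.

```python
from math import sqrt
from statistics import NormalDist

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """R = P(strength > stress) for independent normal stress and strength."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return NormalDist().cdf(z)

# Strength degrades with service time t (fatigue); stress stays fixed.
for t in (0, 1000, 5000):
    mu_s = 600 - 0.02 * t  # assumed linear strength decay (MPa)
    print(t, "h:", round(interference_reliability(mu_s, 40, 450, 30), 4))
```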

  2. Transformer real-time reliability model based on operating conditions

    Institute of Scientific and Technical Information of China (English)

    HE Jian; CHENG Lin; SUN Yuan-zhang

    2007-01-01

    Operational reliability evaluation theory reflects the real-time reliability level of a power system. The component failure rate varies with operating conditions. The impact of real-time operating conditions, such as ambient temperature and transformer MVA (megavolt-ampere) loading, on transformer insulation life is studied in this paper. A formula for the transformer failure rate based on the winding hottest-spot temperature (HST) is given, yielding a real-time reliability model of the transformer based on operating conditions. The work is illustrated using the 1979 IEEE Reliability Test System. Changes in operating conditions are simulated using an hourly load curve and temperature curve, and curves of real-time reliability indices are obtained through operational reliability evaluation.
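
    As a rough sketch of an HST-based calculation in the spirit of this abstract: the Arrhenius form and the 110 °C reference below follow IEEE C57.91-style loading guides, and scaling a base failure rate by the aging factor is an illustrative assumption rather than the paper's exact formula.

```python
from math import exp

def aging_acceleration_factor(hst_celsius: float,
                              ref_celsius: float = 110.0,
                              b: float = 15000.0) -> float:
    """Relative insulation aging rate at hot-spot temperature HST
    (Arrhenius form; equals 1 at the reference temperature)."""
    return exp(b / (ref_celsius + 273.0) - b / (hst_celsius + 273.0))

base_failure_rate = 0.01  # failures/year at the reference HST (assumed)
for hst in (98, 110, 122):
    rate = base_failure_rate * aging_acceleration_factor(hst)
    print(hst, "degC ->", round(rate, 4), "failures/year")
```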

  3. Modeling and Simulation of Sensor-to-Sink Data Transport Reliability in WSNs

    Directory of Open Access Journals (Sweden)

    Faisal Karim Shaikh

    2012-01-01

    Full Text Available The fundamental functionality of a WSN (Wireless Sensor Network) is to transport data from sensor nodes to the sink. To increase fault tolerance, the inherent sensor node redundancy in a WSN can be exploited, but reliability guarantees are still not ensured. The data transport process in a WSN is devised as a set of operations on raw data generated in response to user requirements. The different operations filter the raw data to rationalize reliable transport. Accordingly, we provide reliability models for various data transport semantics. In this paper we argue for the effectiveness of the proposed reliability models by comparing them analytically and via simulations in TOSSIM.

  4. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  5. Models for Battery Reliability and Lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G. H.; Neubauer, J.; Pesaran, A.

    2014-03-01

    Models describing battery degradation physics are needed to more accurately understand how battery usage and next-generation battery designs can be optimized for performance and lifetime. Such lifetime models may also reduce the cost of battery aging experiments and shorten the time required to validate battery lifetime. Models for chemical degradation and mechanical stress are reviewed. Experimental analysis of aging data from a commercial iron-phosphate lithium-ion (Li-ion) cell elucidates the relative importance of several mechanical stress-induced degradation mechanisms.

  6. An Interactive Whiteboard Model Survey: Reliable Development

    Directory of Open Access Journals (Sweden)

    Bih-Yaw Shih

    2012-04-01

    Full Text Available Applications and practices of interactive whiteboards (IWBs) in school learning have been an important focus and development trend in developed countries in recent years. There is little research or discussion of IWB teaching materials for course teaching and teaching effectiveness. In academic studies, practical teaching experience is shared for subjects such as language learning, mathematical learning and physical science learning; however, empirical research on the educational acceptance of interactive whiteboards is rarely seen. Given its importance, we summarize previous literature to establish a theoretical model for interactive whiteboards (IWBs). Variables in this model are then discussed to find out how they interact with each other. The contribution of the study is an innovative model for the educational acceptance of interactive whiteboards using hybrid TAM, ECM, and Flow models.

  7. Modeling service time reliability in urban ferry system

    Science.gov (United States)

    Chen, Yifan; Luo, Sida; Zhang, Mengke; Shen, Hanxia; Xin, Feifei; Luo, Yujie

    2017-09-01

    The urban ferry system can carry large numbers of travelers, which may alleviate the pressure on road traffic. As an indicator of its service quality, service time reliability (STR) plays an essential part in attracting travelers to the ferry system. A wide array of studies have analyzed the STR of land transportation; however, the STR of ferry systems has received little attention in the transportation literature. In this study, a model was established to obtain the STR of urban ferry systems. First, the probability density function (PDF) of the service time provided by ferry systems was constructed. Owing to the limitations of queuing theory here, this PDF was determined by Bayes' theorem. Then, to validate the function, the results of the proposed model were compared with those of a Monte Carlo simulation. With the PDF, the reliability could be determined mathematically by integration. Results showed how factors including the frequency, capacity, time schedule and ferry waiting time affect the STR under different degrees of congestion in ferry systems. Based on these results, some strategies for improving the STR were proposed. These findings are of great significance to increasing the share of ferries among urban transport modes.
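
    The integration step and the Monte Carlo cross-check can be sketched as follows; the lognormal service-time PDF and the 30-minute threshold are assumptions for illustration only, not the paper's fitted distribution.

```python
import numpy as np
from scipy import integrate, stats

service_time = stats.lognorm(s=0.4, scale=20)  # assumed PDF, minutes

# STR as the probability that service time falls within a threshold,
# obtained by integrating the PDF.
threshold = 30.0
str_analytic, _ = integrate.quad(service_time.pdf, 0, threshold)

# Cross-check by Monte Carlo simulation, as the authors do.
samples = service_time.rvs(size=100_000, random_state=0)
str_mc = np.mean(samples <= threshold)
print(round(str_analytic, 4), round(str_mc, 4))
```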

  8. Construction of a reliable model pyranometer for irradiance ...

    African Journals Online (AJOL)
    2010-03-22

    Mar 22, 2010 ... The design, construction and testing of a reliable model pyranometer (RMP001) was done in Mubi, Adamawa ... Pyranometers are widely used in meteorology and climate ... It is calculated that an appropriate value for the capa...

  9. Reliability Modeling and Analysis of SCI Topological Network

    Directory of Open Access Journals (Sweden)

    Hongzhe Xu

    2012-03-01

    Full Text Available The problem of reliability modeling of Scalable Coherent Interface (SCI) rings and topological networks is studied. The reliability models of three SCI rings are developed, and the factors which influence the reliability of SCI rings are examined. By calculating the shortest-path matrix and the path-quantity matrix of different types of SCI network topology, the communication characteristics of the SCI network are obtained. For situations of node damage and edge damage, the survivability of the SCI topological network is studied.

  10. MODELING HUMAN RELIABILITY ANALYSIS USING MIDAS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Donald D. Dudenhoeffer; Bruce P. Hallbert; Brian F. Gore

    2006-05-01

    This paper summarizes an emerging collaboration between Idaho National Laboratory and NASA Ames Research Center regarding the utilization of high-fidelity MIDAS simulations for modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error with novel control room equipment and configurations, (ii) the investigative determination of risk significance in recreating past event scenarios involving control room operating crews, and (iii) the certification of novel staffing levels in control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of risk in next generation control rooms.

  11. Can high resolution 3D topographic surveys provide reliable grain size estimates in gravel bed rivers?

    Science.gov (United States)

    Pearson, E.; Smith, M. W.; Klaar, M. J.; Brown, L. E.

    2017-09-01

    High resolution topographic surveys such as those provided by Structure-from-Motion (SfM) contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-metre scale topographic variability (or 'surface roughness') to sediment grain size by deriving empirical relationships between the two. In fluvial applications, such relationships permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing improved data to drive three-dimensional hydraulic models, allowing rapid geomorphic monitoring of sub-reach river restoration projects, and enabling more robust characterisation of riverbed habitats. However, comparison of previously published roughness-grain-size relationships shows substantial variability between field sites. Using a combination of over 300 laboratory and field-based SfM surveys, we demonstrate the influence of inherent survey error, irregularity of natural gravels, particle shape, grain packing structure, sorting, and form roughness on roughness-grain-size relationships. Roughness analysis from SfM datasets can accurately predict the diameter of smooth hemispheres, though natural, irregular gravels result in a higher roughness value for a given diameter and different grain shapes yield different relationships. A suite of empirical relationships is presented as a decision tree which improves predictions of grain size. By accounting for differences in patch facies, large improvements in D50 prediction are possible. SfM is capable of providing accurate grain size estimates, although further refinement is needed for poorly sorted gravel patches, for which c-axis percentiles are better predicted than b-axis percentiles.

  12. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
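
    A minimal sketch of a Weibull-based expected-power computation in the spirit of the abstract; the simplified piecewise power curve and the Weibull parameters are assumptions, and the paper's cubic-mean-cube-root formulation may differ.

```python
import numpy as np
from scipy import integrate, stats

k, c = 2.0, 8.0                       # assumed Weibull shape/scale (m/s)
v_in, v_rated, v_out = 3.0, 12.0, 25.0
p_rated = 2.0                         # MW

def power(v: float) -> float:
    """Simplified power curve: cubic rise between cut-in and rated speed."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

wind = stats.weibull_min(k, scale=c)
mean_power, _ = integrate.quad(lambda v: power(v) * wind.pdf(v), 0, v_out)
print("expected output:", round(mean_power, 3), "MW",
      "| capacity factor:", round(mean_power / p_rated, 3))
```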

  13. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests, in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, with characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed by using classical and Bayesian approaches. Four methods of data analysis based on classical inference are presented. Next we show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set regarding train wheel degradation.
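
    One classical route from degradation paths to reliability estimates, sketched on synthetic data (the linear degradation model, threshold, and sample sizes are assumptions, not the train-wheel data): fit a path per unit, extrapolate to a failure threshold, and estimate reliability from the pseudo-failure times.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1, 11, dtype=float)   # inspection times
threshold = 10.0                    # degradation level defining "failure"

pseudo_failure_times = []
for _ in range(30):                 # 30 units under observation
    rate = rng.normal(0.8, 0.15)    # unit-specific wear rate
    path = rate * t + rng.normal(0, 0.2, t.size)  # noisy degradation path
    slope, intercept = np.polyfit(t, path, 1)     # fitted linear path
    pseudo_failure_times.append((threshold - intercept) / slope)

pft = np.array(pseudo_failure_times)
mission = 10.0                      # mission time of interest
print("R(10) ≈", np.mean(pft > mission))
```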

  14. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Adbel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.

    2015-11-01

    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and explores new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.

  15. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. Reliability calculation for WSNs on the multicast model entails an even worse combinatorial explosion of node states than the calculation for WSNs on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition of the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our construction of OBDD_Multicast avoids the problem of invalid expansion, reducing the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.

  16. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics; and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  17. Effective stimuli for constructing reliable neuron models.

    Directory of Open Access Journals (Sweden)

    Shaul Druckmann

    2011-08-01

    Full Text Available The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel the neuron's dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different types of cortical neurons, ages and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves a new way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated and non-linear devices and of the neuronal networks that they compose.

  18. Quasi-Bayesian software reliability model with small samples

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jin; TU Jun-xiang; CHEN Zhuo-ning; YAN Xiao-guang

    2009-01-01

    In traditional Bayesian software reliability models, it is assumed that all probabilities are precise. In practical applications, the parameters of the probability distributions are often uncertain due to strong dependence on subjective expert judgments over sparse statistical data. In this paper, a quasi-Bayesian software reliability model using interval-valued probabilities to clearly quantify experts' prior beliefs on possible intervals of the parameters of the probability distributions is presented. The model integrates experts' judgments with statistical data to obtain more convincing assessments of software reliability with small samples. For some actual data sets, the presented model yields better predictions than the Jelinski-Moranda (JM) model using maximum likelihood (ML).
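
    For reference, here is a sketch of the classical Jelinski-Moranda maximum-likelihood fit that the abstract compares against: the hazard after the (i-1)-th fault is phi*(N - i + 1). The inter-failure times are synthetic, and the grid search over N is a simple illustrative choice.

```python
import numpy as np

t = np.array([7., 11., 8., 10., 15., 22., 20., 25., 28., 35.])  # inter-failure times
n = len(t)

def profile_loglik(N: int) -> float:
    """JM log-likelihood with phi concentrated out for a given total fault count N."""
    a = N - np.arange(n)               # N, N-1, ..., N-n+1
    phi = n / np.dot(a, t)             # ML estimate of phi given N
    return n * np.log(phi) + np.sum(np.log(a)) - phi * np.dot(a, t)

Ns = np.arange(n, 200)                 # candidate values of N (N >= n)
N_hat = Ns[np.argmax([profile_loglik(N) for N in Ns])]
print("estimated total faults N =", N_hat)
```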

  19. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter;

    2016-01-01

    that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating...... the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.......This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets...

  20. Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.

    Science.gov (United States)

    Proppe, Jonny; Reiher, Markus

    2017-07-11

    One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are generally infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the (57)Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s(-1) and 0.04-0.05 mm s(-1), respectively, the latter being close to the average experimental uncertainty of 0.02 mm s(-1). Furthermore, we show that both model parameters and prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r(2), or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
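
    A sketch of bootstrapped calibration of a linear property model in the spirit of the abstract: resample the reference set, refit, and read off locally resolved prediction uncertainty. The synthetic isomer-shift-style data below are assumptions, not the paper's 44 compounds.

```python
import numpy as np

rng = np.random.default_rng(6)
rho = rng.uniform(-5, 5, 44)                          # contact-density proxy (a.u.)
delta = 0.3 - 0.05 * rho + rng.normal(0, 0.04, 44)    # observed shifts (mm/s)

grid = np.linspace(-5, 5, 11)                         # where to assess predictions
boot_preds = []
for _ in range(2000):
    idx = rng.integers(0, 44, 44)                     # resample with replacement
    slope, intercept = np.polyfit(rho[idx], delta[idx], 1)
    boot_preds.append(intercept + slope * grid)

boot_preds = np.array(boot_preds)
print("mean prediction:", boot_preds.mean(0).round(3))
print("bootstrap std:  ", boot_preds.std(0).round(3))  # locally resolved uncertainty
```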

  1. A New Approach to Provide Reliable Data Systems Without Using Space-Qualified Electronic Components

    Science.gov (United States)

    Häbel, W.

    This paper describes the present situation and the expected trends with regard to the availability of electronic components, their quality levels, technology trends and sensitivity to the space environment. Many recognized vendors have already discontinued their MIL production lines, and state-of-the-art components will in many cases not be offered at this quality level because of the shrinking market. It therefore becomes obvious that new methods need to be considered for "How to build reliable Data Systems for space applications without High-Rel parts". One of the most promising approaches is the identification, masking and suppression of faults by developing Fault Tolerant Computer systems, which is described in this paper.

  2. Singularity of Software Reliability Models LVLM and LVQM

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to the principle, "The failure data is the basis of software reliability analysis", we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning from the fitting results of the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can overcome the inconsistency in applications of software reliability models well. We report investigation results on the singularity and parameter estimation methods of the models LVLM and LVQM.

  3. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  4. Coverage Modeling and Reliability Analysis Using Multi-state Function

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Fault tree analysis is an effective method for predicting the reliability of a system. It gives a pictorial representation and logical framework for analyzing reliability. It has also long been used as an effective method for the quantitative and qualitative analysis of the failure modes of critical systems. In this paper, we propose a new general coverage model (GCM) based on hardware-independent faults. Using this model, an effective software tool can be constructed to detect, locate and recover faults in the faulty system. The model can be applied to identify the key components that can cause the failure of the system using failure mode effect analysis (FMEA).

  5. Modeling HVDC links in composite reliability evaluation: issues and solutions

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Lineu B. de [Sao Paulo Univ., SP (Brazil). Escola Politecnica; Ramos, Dorel S. [Centrais Eletricas de Sao Paulo, SP (Brazil); Morozowski Filho, Marciano [Santa Catarina Univ., Florianopolis, SC (Brazil)

    1992-12-31

    This paper deals with theoretical and practical aspects of HVDC link modeling for composite (generation and transmission) system reliability evaluation purposes. The conceptual framework used in the analysis, as well as the practical aspects, is illustrated through an application example. Initially, two distinct HVDC link operation models are described: synchronous and asynchronous. An analysis of the most significant internal failure modes and their effects on HVDC link transmission capability is presented and a reliability model is proposed. Finally, historical performance data of the Itaipu HVDC system are shown. 6 refs., 5 figs., 8 tabs.

  6. Design of a Human Reliability Assessment model for structural engineering

    NARCIS (Netherlands)

    De Haan, J.; Terwel, K.C.; Al-Jibouri, S.H.S.

    2013-01-01

    It is generally accepted that humans are the “weakest link” in structural design and construction processes. Despite this, few models are available to quantify human error within engineering processes. This paper demonstrates the use of a quantitative Human Reliability Assessment model within struct

  7. Echolocation detections and digital video surveys provide reliable estimates of the relative density of harbour porpoises

    National Research Council Canada - National Science Library

    Williamson, Laura D; Brookes, Kate L; Scott, Beth E; Graham, Isla M; Bradbury, Gareth; Hammond, Philip S; Thompson, Paul M; McPherson, Jana

    2016-01-01

    ...‐based visual surveys. Surveys of cetaceans using acoustic loggers or digital cameras provide alternative methods to estimate relative density that have the potential to reduce cost and provide a verifiable record of all detections...

  8. Exponential order statistic models of software reliability growth

    Science.gov (United States)

    Miller, D. R.

    1986-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.

  9. Reliability Modeling and Optimization Using Fuzzy Logic and Chaos Theory

    Directory of Open Access Journals (Sweden)

    Alexander Rotshtein

    2012-01-01

    Full Text Available Fuzzy set membership functions integrated with a logistic map as the chaos generator were used to create reliability bifurcation diagrams of a system with redundancy of components. This paper shows that an increase in the number of redundant components results in a postponement of the moment of the first bifurcation, which is considered the most significant contributor to the loss of reliability. Increasing redundancy also shrinks the oscillation orbit of the level of the system's membership to the reliable state. The paper includes the problem statement of redundancy optimization under conditions of chaotic behavior of influencing parameters, and a genetic algorithm for solving this problem. The paper shows the possibility of designing chaos-tolerant systems with the required level of reliability.
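
    A toy sketch of the chaos-driven redundancy idea: a logistic map in the chaotic regime drives a fluctuating failure probability, and additional parallel redundancy damps its effect on system reliability. All numeric choices are illustrative assumptions, not the paper's fuzzy formulation.

```python
import numpy as np

def logistic_orbit(r: float, x0: float = 0.3, n: int = 200, burn: int = 100):
    """Iterate the logistic map x <- r*x*(1-x), discarding a burn-in."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

r = 3.9                                    # chaotic regime
p_fail = 0.05 + 0.10 * logistic_orbit(r)   # fluctuating failure probability
for m in (1, 2, 3):                        # m components in parallel
    system_reliability = 1 - p_fail**m
    print(m, "parallel component(s): worst-case R =",
          round(system_reliability.min(), 4))
```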

  10. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Directory of Open Access Journals (Sweden)

    Jin Zhu

    2012-01-01

    Full Text Available This paper investigates reliability analysis of wireless sensor networks whose topology switches among possible connections governed by a Markovian chain. We give the quantitative relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. With the conditions satisfied, the quantity of data transported over a wireless network node will not exceed node capacity, so that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find application in the fields of network design and topology control.

  11. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    Science.gov (United States)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a new proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.
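
    A rough sketch of the attribute-to-cycles-to-failure mapping described above, using synthetic attributes and scikit-learn's MLPRegressor as stand-ins for the microengine data and the network used in the original work.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(300, 5))   # fabrication/packaging attributes (synthetic)
cycles = 1e6 * (0.5 + X @ [0.8, 0.3, 0.5, 0.1, 0.2]) * rng.lognormal(0, 0.1, 300)

# Majority for training, remainder for validation, as in the abstract.
X_tr, X_va, y_tr, y_va = train_test_split(X, np.log(cycles), random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor((32, 16), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)                  # predict log cycles-to-failure
print("validation R^2:", round(model.score(X_va, y_va), 3))
```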

  12. Web software reliability modeling with random impulsive shocks

    Institute of Scientific and Technical Information of China (English)

    Jianfeng Yang; Ming Zhao; Wensheng Hu

    2014-01-01

    As web-server based business is rapidly developed and popularized, how to evaluate and improve the reliability of web servers has become extremely important. Although a large number of software reliability growth models (SRGMs), including those combined with multiple change-points (CPs), are available, these conventional SRGMs cannot be directly applied to web software reliability analysis because of the complex web operational profile. To characterize the web operational profile precisely, it should be realized that the workload of a web server is normally non-homogeneous and often observed with a pattern of random impulsive shocks. A web software reliability model with random impulsive shocks and its statistical analysis method are developed. In the proposed model, the web server workload is characterized by a geometric Brownian motion process. Based on a real data set from IIS server logs of the ICRMS website (www.icrms.cn), the proposed model is demonstrated to be powerful for estimating impulsive shocks and web software reliability.
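
    The geometric Brownian motion workload model named above can be simulated in a few lines; the drift, volatility and hourly grid are assumptions for illustration, not the fitted ICRMS values.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, w0 = 0.05, 0.3, 100.0   # drift, volatility, initial load (assumed)
dt, n = 1 / 24, 24 * 30            # hourly steps over 30 days

# Exact discretization of GBM: log W follows a Gaussian random walk.
z = rng.standard_normal(n)
log_w = np.log(w0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                               + sigma * np.sqrt(dt) * z)
workload = np.exp(log_w)
print("peak load over the month:", round(workload.max(), 1))
```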

  13. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problem that a variety of uncertainty variables coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent method for solving it. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and it is solved using a modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms in which truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.

  14. MCTSSA Software Reliability Handbook, Volume II: Data Collection Demonstration and Software Reliability Modeling for a Multi-Function Distributed System

    OpenAIRE

    Schneidewind, Norman F.

    1997-01-01

    The purpose of this handbook is threefold. Specifically, it: Serves as a reference guide for implementing standard software reliability practices at Marine Corps Tactical Systems Support Activity and aids in applying the software reliability model; Serves as a tool for managing the software reliability program; and Serves as a training aid. U.S. Marine Corps Tactical Systems Support Activity, Camp Pendleton, CA. RLACH

  15. Why We Need Reliable, Valid, and Appropriate Learning Disability Assessments: The Perspective of a Postsecondary Disability Service Provider

    Science.gov (United States)

    Wolforth, Joan

    2012-01-01

    This paper discusses issues regarding the validity and reliability of psychoeducational assessments provided to Disability Services Offices at Canadian Universities. Several vignettes illustrate some current issues and the potential consequences when university students are given less than thorough disability evaluations and ascribed diagnoses.…

  16. Exponentiated Weibull distribution approach based inflection S-shaped software reliability growth model

    Directory of Open Access Journals (Sweden)

    B.B. Sagar

    2016-09-01

    Full Text Available The aim of this paper is to estimate the number of defects in software and remove them successfully. The paper incorporates the Weibull distribution approach along with the inflection S-shaped Software Reliability Growth Model (SRGM). In this combination, a two-parameter Weibull distribution methodology is used. The Relative Prediction Error (RPE) is calculated to assess the validity of the developed model. Experimental results on actual data from five data sets are compared with two other existing models, showing that the proposed software reliability growth model gives better estimates of the defects to be removed. This paper presents a software reliability growth model that includes features of both the Weibull distribution and the inflection S-shaped SRGM to estimate the defects of a software system, and provides help to researchers and software industries in developing highly reliable software products.
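
    For concreteness, the mean-value function of the inflection S-shaped SRGM that the paper builds on (Ohba's form) is sketched below with illustrative parameter values; the paper's Weibull-based extension is not reproduced here.

```python
import numpy as np

def inflection_s_shaped(t, a, b, beta):
    """Expected cumulative faults by time t under Ohba's inflection
    S-shaped model: m(t) = a*(1 - e^(-b t)) / (1 + beta*e^(-b t))."""
    e = np.exp(-b * t)
    return a * (1 - e) / (1 + beta * e)

t = np.linspace(0, 50, 6)   # testing time grid
# a: total fault content, b: detection rate, beta: inflection parameter (assumed)
print(inflection_s_shaped(t, a=120, b=0.15, beta=2.0).round(1))
```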

  17. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the fully automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. The human operator is thus the core of a cognitive process that leads to decisions, influencing the safety of the whole system as a function of operator reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  18. Livers provide a reliable matrix for real-time PCR confirmation of avian botulism.

    Science.gov (United States)

    Le Maréchal, Caroline; Ballan, Valentine; Rouxel, Sandra; Bayon-Auboyer, Marie-Hélène; Baudouard, Marie-Agnès; Morvan, Hervé; Houard, Emmanuelle; Poëzevara, Typhaine; Souillard, Rozenn; Woudstra, Cédric; Le Bouquin, Sophie; Fach, Patrick; Chemaly, Marianne

    2016-04-01

    Diagnosis of avian botulism is based on clinical symptoms, which are indicative but not specific. Laboratory investigations are therefore required to confirm clinical suspicions and establish a definitive diagnosis. Real-time PCR methods have recently been developed for the detection of Clostridium botulinum group III producing type C, D, C/D or D/C toxins. However, no study has been conducted to determine which types of matrices should be analyzed for laboratory confirmation using this approach. This study reports on the comparison of different matrices (pooled intestinal contents, livers, spleens and cloacal swabs) for PCR detection of C. botulinum. Between 2013 and 2015, 63 avian botulism suspicions were tested and 37 were confirmed as botulism. Analysis of livers using real-time PCR after enrichment led to the confirmation of 97% of the botulism outbreaks. Using the same method, spleens led to the confirmation of 90% of botulism outbreaks, cloacal swabs of 93% and pooled intestinal contents of 46%. Liver appears to be the most reliable type of matrix for laboratory confirmation using real-time PCR analysis.

  19. Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm

    Directory of Open Access Journals (Sweden)

    Raj Kumar

    2012-12-01

    Full Text Available In this paper, we have illustrated the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures, classical as well as Bayesian. The quasi-Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov Chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and to analyze the output of the MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated, and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.
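
    A quick sketch of the classical side of this analysis, done in Python with scipy as a stand-in for the paper's R/OpenBUGS stack, on synthetic failure times.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic failure times drawn from a Gumbel distribution (assumed parameters).
failure_times = stats.gumbel_r.rvs(loc=100, scale=15, size=50, random_state=rng)

# Maximum-likelihood fit of the location and scale parameters.
loc_hat, scale_hat = stats.gumbel_r.fit(failure_times)
print("mu =", round(loc_hat, 2), "beta =", round(scale_hat, 2))

# Reliability at time t is the survival function R(t) = 1 - F(t).
print("R(120) =", round(stats.gumbel_r.sf(120, loc_hat, scale_hat), 3))
```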

  20. Finite State Machine Based Evaluation Model for Web Service Reliability Analysis

    CERN Document Server

    M, Thirumaran; Abarna, S; P, Lakshmi

    2011-01-01

    Nowadays, changes must be made within ever shorter time frames, since required reaction times are continually decreasing. The Business Logic Evaluation Model (BLEM) is the proposed solution targeting business logic automation and facilitating business experts in writing sophisticated business rules and complex calculations without costly custom programming. BLEM is powerful enough to handle service manageability issues by analyzing and evaluating the computability, traceability and other criteria of modified business logic at run time. A web service and its QoS depend heavily on the reliability of the service. Hence today's service providers regard reliability as the major factor, and any problem with the reliability of a service must be overcome promptly in order to achieve the expected level of reliability. In our paper we propose a business logic evaluation model for web service reliability analysis using a Finite State Machine (FSM), where the FSM will be extended to analy...

  1. Open Source Software Reliability Growth Model by Considering Change- Point

    Directory of Open Access Journals (Sweden)

    Mashaallah Basirzadeh

    2012-01-01

    Full Text Available Software reliability modeling techniques are reaching maturity. Software reliability growth models have been used extensively for closed source software. The design and development of open source software (OSS) is different from that of closed source software. We observed some basic characteristics of open source software: (i) more instruction executions and code coverage taking place with respect to time, (ii) release early, release often, (iii) frequent addition of patches, (iv) heterogeneity in fault density and effort expenditure, (v) frequent release activities that seem to have changed the bug dynamics significantly, and (vi) bug reports on the bug tracking system that drastically increase and decrease. For this reason, the bugs reported on the bug tracking system show an irregular state and fluctuations, so the fault detection/removal process cannot be smooth and may change at some time point, called the change-point. In this paper, an instructions-executed-dependent software reliability growth model has been developed by considering a change-point, in order to cater for diverse and huge user profiles, the irregular state of the bug tracking system, and heterogeneity in fault distribution. We have analyzed actual software failure count data to show numerical examples of software reliability assessment for OSS. We also compare our model with conventional models in terms of goodness-of-fit for actual data. We have shown that the proposed model can assist improvement of quality for OSS systems developed under the open source project.
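
    A minimal sketch of a change-point mean-value function in the spirit of this abstract: the fault detection rate switches at the change-point tau. The Goel-Okumoto-style exponential form and the parameter values are assumptions, not the paper's instructions-executed formulation.

```python
import numpy as np

def mvf_change_point(t, a, b1, b2, tau):
    """Expected cumulative faults with detection rate b1 before the
    change-point tau and b2 after; continuous at t = tau."""
    t = np.asarray(t, dtype=float)
    before = a * (1 - np.exp(-b1 * t))
    after = a * (1 - np.exp(-b1 * tau - b2 * (t - tau)))
    return np.where(t <= tau, before, after)

# a: total fault content, b1/b2: detection rates, tau: change-point (assumed)
print(mvf_change_point([10, 40, 80], a=500, b1=0.01, b2=0.03, tau=40).round(1))
```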

  2. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases...... turnover and statistically superior positions compared to existing procedures. Translating these statistical improvements into economic gains, we find that under empirically realistic assumptions a risk-averse investor would be willing to pay up to 170 basis points per year to shift to using the new class...

  3. Fuse Modeling for Reliability Study of Power Electronics Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Iannuzzo, Francesco; Blaabjerg, Frede

    2017-01-01

    This paper describes a comprehensive modeling approach on reliability of fuses used in power electronic circuits. When fuses are subjected to current pulses, cyclic temperature stress is introduced to the fuse element and will wear out the component. Furthermore, the fuse may be used in a large...

  5. Semigroup Method for a Mathematical Model in Reliability Analysis

    Institute of Scientific and Technical Information of China (English)

    Geni Gupur; LI Xue-zhi

    2001-01-01

    The system, which consists of a reliable machine, an unreliable machine and a storage buffer with infinitely many workpieces, has been studied. The existence of a unique positive time-dependent solution of the model corresponding to the system has been obtained by using the C0-semigroup theory of linear operators in functional analysis.

  6. Effective turbulence models and fatigue reliability in wind farms

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Frandsen, Sten Tronæs; Tarp-Johansen, N.J.

    2008-01-01

    intensity in wakes behind wind turbines can imply a significant reduction in the fatigue lifetime of wind turbines placed in wakes. In this paper the design code model in the wind turbine code [IEC 61400-1, Wind turbine generator systems - Part 1: Safety requirements. 2005] is evaluated from...... a probabilistic point of view, including the importance of modeling the SN-curve by a bi-linear model. Fatigue models relevant for welded, cast steel and fiber-reinforced details are considered. Further, the influence on the fatigue reliability is investigated from modeling the fatigue response by a stochastic...

  7. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As the size of a software system grows and the number of faults detected during the testing phase becomes large, the change in the number of faults that are detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of stochastic differential equations, performs comparatively better than the existing NHPP-based models.

  8. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...

  9. Probabilistic Modeling of Fatigue Damage Accumulation for Reliability Prediction

    Directory of Open Access Journals (Sweden)

    Vijay Rathod

    2011-01-01

    Full Text Available A methodology for probabilistic modeling of fatigue damage accumulation for single-stress-level and multi-stress-level loading is proposed in this paper. The methodology uses the linear damage accumulation model of Palmgren-Miner, a probabilistic S-N curve, and an approach for a one-to-one transformation of probability density functions to achieve the objective. The damage accumulation is modeled as a nonstationary process, as both the expected damage accumulation and its variability change with time. The proposed methodology is then used for reliability prediction under single-stress-level and multi-stress-level loading, utilizing a dynamic statistical model of cumulative fatigue damage. The reliability prediction under both types of loading is demonstrated with examples.
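
    The deterministic core of the methodology, Palmgren-Miner linear damage accumulation over a multistress spectrum, is sketched below; the S-N constants and the load blocks are illustrative assumptions (the paper treats the damage probabilistically).

```python
def cycles_to_failure(stress_mpa: float, C: float = 2.0e12, m: float = 3.0) -> float:
    """Deterministic S-N curve: N = C / S^m (constants assumed)."""
    return C / stress_mpa**m

# (stress level in MPa, applied cycles) blocks of a multistress spectrum
spectrum = [(200.0, 1.0e5), (150.0, 4.0e5), (100.0, 1.0e6)]

# Miner's rule: failure predicted when the summed cycle ratios reach 1.
damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
print("accumulated damage D =", round(damage, 3), "| failed:", damage >= 1.0)
```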

  10. Strength Reliability Analysis of Turbine Blade Using Surrogate Models

    Directory of Open Access Journals (Sweden)

    Wei Duan

    2014-05-01

    Full Text Available There are many stochastic parameters that have an effect on the reliability of steam turbine blade performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated, and geometrical parameters, material parameters and load parameters are considered as random variables. A reliability analysis method combining the Finite Element Method (FEM), a surrogate model and Monte Carlo Simulation (MCS) is applied to the blade reliability analysis. Based on the blade finite element parametrical model and the experimental design, two kinds of surrogate models, Polynomial Response Surface (PRS) and Artificial Neural Network (ANN), are applied to construct approximate analytical expressions between the blade responses (including maximum stress and deflection) and the random input variables, which act as a surrogate of the finite element solver to drastically reduce the number of simulations required. The surrogate is then used for most of the samples needed in the Monte Carlo method, and the statistical parameters and cumulative distribution functions of the maximum stress and deflection are obtained by Monte Carlo simulation. Finally, a probabilistic sensitivity analysis, which combines the magnitude of the gradient and the width of the scatter range of the random input variables, is applied to evaluate how much the maximum stress and deflection of the blade are influenced by the random nature of the input parameters.
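
    A minimal sketch of the surrogate-plus-MCS pattern, with a quadratic polynomial response surface standing in for the FEM solver; the toy stress function, input distributions and allowable stress are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in for an expensive FEM solver: maximum stress as a function of
    # load L and thickness t (purely illustrative, not a blade model).
    def fem_max_stress(L, t):
        return 50.0 + 8.0 * L / t + 0.5 * (L / t) ** 2

    # 1) Design of experiments: a few "expensive" runs train the surrogate.
    L_doe = rng.uniform(5, 15, 30)
    t_doe = rng.uniform(1.0, 2.0, 30)
    y_doe = fem_max_stress(L_doe, t_doe)

    def design(L, t):  # quadratic polynomial response surface basis
        return np.column_stack([np.ones_like(L), L, t, L * t, L**2, t**2])

    coef, *_ = np.linalg.lstsq(design(L_doe, t_doe), y_doe, rcond=None)

    # 2) Monte Carlo on the cheap surrogate instead of the FEM solver.
    n = 200_000
    L_mc = rng.normal(10.0, 1.5, n)          # random load
    t_mc = rng.normal(1.5, 0.1, n)           # random thickness
    stress = design(L_mc, t_mc) @ coef

    limit = 160.0                            # allowable stress (assumed)
    print("P(stress > limit) ~", (stress > limit).mean())
    ```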

  11. Providing Reliability Services through Demand Response: A Prelimnary Evaluation of the Demand Response Capabilities of Alcoa Inc.

    Energy Technology Data Exchange (ETDEWEB)

    Starke, Michael R [ORNL; Kirby, Brendan J [ORNL; Kueck, John D [ORNL; Todd, Duane [Alcoa; Caulfield, Michael [Alcoa; Helms, Brian [Alcoa

    2009-02-01

    Demand response is the largest underutilized reliability resource in North America. Historically, demand response programs have focused on reducing overall electricity consumption (increasing efficiency) and shaving peaks but have not typically been used for immediate reliability response. Many of these programs have been successful, but demand response remains a limited resource. The Federal Energy Regulatory Commission (FERC) report, 'Assessment of Demand Response and Advanced Metering' (FERC 2006), found that only five percent of customers are on some form of demand response program. Collectively they represent an estimated 37,000 MW of response potential. These programs reduce overall energy consumption, lower greenhouse gas emissions by allowing fossil fuel generators to operate at increased efficiency, and reduce stress on the power system during periods of peak loading. As the country continues to restructure energy markets with sophisticated marginal cost models that attempt to minimize total energy costs, the ability of demand response to create meaningful shifts in the supply and demand equations is critical to creating a sustainable and balanced economic response to energy issues. Restructured energy market prices are set by the cost of the next incremental unit of energy, so that as additional generation is brought into the market, the cost for the entire market increases. The benefit of demand response is that it reduces overall demand and shifts the entire market to a lower pricing level. This can be very effective in mitigating price volatility or scarcity pricing as the power system responds to changing demand schedules, loss of large generators, or loss of transmission. As a global producer of alumina, primary aluminum, and fabricated aluminum products, Alcoa Inc. has the capability to provide demand response services through its manufacturing facilities and uniquely through its aluminum smelting facilities. For a typical aluminum smelter

  12. Reliability-based design optimization with progressive surrogate models

    Science.gov (United States)

    Kanakasabai, Pugazhendhi; Dhingra, Anoop K.

    2014-12-01

    Reliability-based design optimization (RBDO) has traditionally been solved as a nested (bilevel) optimization problem, which is a computationally expensive approach. Unilevel and decoupled approaches for solving the RBDO problem have also been suggested in the past to improve the computational efficiency. However, these approaches also require a large number of response evaluations during optimization. To alleviate the computational burden, surrogate models have been used for reliability evaluation. These approaches involve construction of surrogate models for the reliability computation at each point visited by the optimizer in the design variable space. In this article, a novel approach to solving the RBDO problem is proposed based on a progressive sensitivity surrogate model. The sensitivity surrogate models are built in the design variable space outside the optimization loop using the kriging method or the moving least squares (MLS) method based on sample points generated from low-discrepancy sampling (LDS) to estimate the most probable point of failure (MPP). During the iterative deterministic optimization, the MPP is estimated from the surrogate model for each design point visited by the optimizer. The surrogate sensitivity model is also progressively updated for each new iteration of deterministic optimization by adding new points and their responses. Four example problems are presented showing the relative merits of the kriging and MLS approaches and the overall accuracy and improved efficiency of the proposed approach.

  13. Reliability block diagrams to model the management of colorectal cancer.

    Science.gov (United States)

    Sonnenberg, A; Inadomi, J M

    1999-02-01

    The present study aims to show how various medical and nonmedical components contribute to success and failure in the management of colorectal cancer. The first encounter, subsequent diagnosis, and surgical therapy of a patient with Dukes B sigmoid cancer are modeled as a reliability block diagram with a serial and parallel arrangement of various components. The overall probability that a patient with new-onset colorectal cancer visits a physician, is correctly diagnosed, and undergoes successful therapy is 69%. The reduction in the overall success, despite the fact that the majority of components are assumed to function with failure rates of 5% or less, is a reflection of the multitude of serial subsystems involved in the management of the patient. In contrast, the parallel arrangement of subsystems results in a relative insensitivity of the overall system to failure, a greater stability, and an improved performance. Since no medical system functions perfectly, redundancy associated with parallel subsystems assures a better overall outcome. System analysis of health care provides a means to improve its performance.
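
    The arithmetic behind such figures is the standard reliability-block-diagram calculus: serial stages multiply, while a parallel (redundant) stage fails only if all of its branches fail. A small sketch with illustrative component reliabilities (not the study's values):

    ```python
    def serial(*r):
        """Serial blocks: every component must succeed."""
        p = 1.0
        for x in r:
            p *= x
        return p

    def parallel(*r):
        """Parallel (redundant) blocks: fail only if all branches fail."""
        q = 1.0
        for x in r:
            q *= (1.0 - x)
        return 1.0 - q

    # Illustrative care pathway: visit -> diagnosis (two redundant routes)
    # -> surgery; component reliabilities are assumed values.
    visit, surgery = 0.95, 0.95
    diagnosis = parallel(0.95, 0.80)    # redundancy lifts this stage to ~0.99
    print("overall success ~", round(serial(visit, diagnosis, surgery), 3))
    ```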

  14. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication, in NetLogo and by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  15. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
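
    For readers unfamiliar with the equilibrium distribution, it is defined as Fe(t) = (1/mu) * integral from 0 to t of (1 - F(x)) dx, where mu is the mean of F. The sketch below evaluates an NHPP mean value function Lambda(t) = omega * F(t) against the variant using Fe; the Weibull detection-time distribution and omega are assumptions, not the paper's fitted values.

    ```python
    import numpy as np
    from scipy import stats, integrate

    # Equilibrium distribution of a fault-detection time distribution F:
    #   Fe(t) = (1/mu) * integral_0^t (1 - F(x)) dx,   mu = E[X].
    # Mean value functions compared: omega*F(t) vs omega*Fe(t).
    # The Weibull parameters and omega below are illustrative assumptions.
    F = stats.weibull_min(c=1.5, scale=100.0)
    mu = F.mean()

    def Fe(t):
        val, _ = integrate.quad(lambda x: F.sf(x), 0.0, t)
        return val / mu

    omega = 120.0                            # expected total number of faults
    for t in (50, 100, 200, 400):
        print(f"t={t:3d}  Lambda_F={omega * F.cdf(t):7.2f}"
              f"  Lambda_Fe={omega * Fe(t):7.2f}")
    ```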

  16. Simulation modeling of reliability and efficiency of mine ventilation systems

    Energy Technology Data Exchange (ETDEWEB)

    Ushakov, V.K. (Moskovskii Gornyi Institut (USSR))

    1991-06-01

    Discusses a method developed by the MGI institute for computerized simulation of operation of ventilation systems used in deep underground coal mines. The modeling is aimed at assessment of system reliability and efficiency (probability of failure-free operation and stable air distribution). The following stages of the simulation procedure are analyzed: development of a scheme of the ventilation system (type, aerodynamic characteristics and parameters that describe system elements, e.g. ventilation tunnels, ventilation equipment, main blowers etc., dynamics of these parameters depending among others on mining and geologic conditions), development of mathematical models that describe system characteristics as well as external factors and their effects on the system, development of a structure of the simulated ventilation system, development of an algorithm, development of the final computer program for simulation of a mine ventilation system. Use of the model for forecasting reliability of air supply and efficiency of mine ventilation is discussed. 2 refs.

  17. Can ambulatory blood-pressure monitoring provide reliable indices of arterial stiffness?

    Science.gov (United States)

    Gosse, Philippe; Papaioanou, Georgios; Coulon, Paul; Reuter, Sylvain; Lemetayer, Philippe; Safar, Michel

    2007-08-01

    The use of ambulatory recordings of blood pressure (BP) was proposed to estimate arterial stiffness (AS). We compared the relative value of the ambulatory AS index (AASI), and of the slope of pulse pressure (PP) according to mean BP (MBP) obtained from 24-h ambulatory BP monitoring, to the monitoring of the arrival time of Korotkoff sounds (QKD interval) in the prediction of cardiovascular (CV) events. Twenty-four-hour ambulatory BP and QKD monitoring were recorded at baseline, before antihypertensive treatment of hypertensive patients in our Bordeaux cohort. From these recordings, the AASI, the PP/MBP slope, and the theoretical value of the QKD for a systolic pressure of 100 mm Hg and a heart rate of 60 beats/min (QKD100-60) were calculated. The patients were then given antihypertensive treatment and followed by their family physicians, who were unaware of the QKD, AASI, and PP/MBP slope results. Regular updates on patients were obtained. The reproducibility of measurements was studied in 38 normal subjects evaluated on two occasions. The reproducibility of the AASI and the PP/MBP slope was less than that of BP over 24 h and of QKD100-60. The cohort comprised 469 patients. With an average follow-up of 70+/-39 months, 62 CV complications, including 13 deaths, were recorded. In the univariate analysis, age, PP over 24 h, QKD100-60, AASI, and the PP/MBP slope were significantly related to the occurrence of complications. In the multivariate analysis, when age and PP over 24 h were included in the model, only QKD100-60 remained significantly linked to CV events. Our data support the value of the AASI as an indirect estimate of AS and as an element in the evaluation of CV risk in hypertensive patients. However, the reproducibility of this index is less, and its predictive value for complications is poorer, than that of QKD100-60, a parameter that we believe is more closely linked to AS.

  18. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    Science.gov (United States)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages, using an underlying IP Multicast medium, to other group members in a distributed environment, even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  19. The establishment of reliability model for LED lamps

    Science.gov (United States)

    Jian, Hao; Lei, Jing; Yao, Wang; Qun, Gao; Hongliang, Ke; Xiaoxun, Wang; Yanchao, Zhang; Qiang, Sun; Zhijun, Xu

    2016-07-01

    In order to verify which distributions and model-establishment methods are more suitable for analyzing the accelerated aging of LED lamps, three methods of establishing a reliability model (the approximate method, the analytical method and the two-stage method) are used in this paper to analyze the experimental data under both the Weibull distribution and the Lognormal distribution. Ten LED lamps are selected for the accelerated aging experiment and the luminous fluxes are measured at an accelerated aging temperature. The Akaike information criterion (AIC) is adopted in the evaluation of the models. The results show that the accuracies of the analytical method and the two-stage method are higher than that of the approximation method, with the widths of the confidence intervals of the unknown parameters of the reliability model being the smallest for the two-stage method. In a comparison between the two types of distributions, the accuracies are nearly identical. Project supported by the National High Technology Research and Development Program of China (Nos. 2015AA03A101, 2013AA03A116), the Cuican Project of Chinese Academy of Sciences (No. KZCC-EW-102), and the Jilin Province Science and Technology Development Plan Item (No. 20130206018GX).
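
    A minimal sketch of the distribution-comparison step, fitting both candidate distributions to synthetic lifetimes and ranking them by AIC = 2k - 2 ln L (the data and the fixed-location simplification are assumptions, not the paper's setup):

    ```python
    import numpy as np
    from scipy import stats

    # Fit Weibull and lognormal models to lifetime data and compare by
    # AIC = 2k - 2 ln L (smaller is better). The data are synthetic
    # stand-ins for the accelerated-aging lifetimes; location is fixed
    # at 0 for both fits, so the same k keeps the comparison fair.
    rng = np.random.default_rng(3)
    lifetimes = rng.weibull(2.2, 50) * 8000.0      # hours, synthetic

    def aic(dist, data):
        params = dist.fit(data, floc=0)            # (shape, loc=0, scale)
        loglik = dist.logpdf(data, *params).sum()
        return 2 * len(params) - 2 * loglik

    print("AIC Weibull  :", round(aic(stats.weibull_min, lifetimes), 1))
    print("AIC lognormal:", round(aic(stats.lognorm, lifetimes), 1))
    ```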

  20. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  1. Testing the reliability of ice-cream cone model

    Science.gov (United States)

    Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui

    2015-04-01

    The properties of coronal mass ejections (CMEs) are important not only to the physics itself but also to space-weather prediction. Several models (such as the cone model, the GCS model, and so on) have been proposed to remove the projection effects from the properties observed by spacecraft. From SOHO/LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full halo Coronal Mass Ejections) within the 24th solar cycle till July 2012, by the ice-cream cone model. Considering that obtaining 3D parameters from multi-satellite, multi-angle CME observations has higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with those from the ice-cream cone model. We then discuss the reliability of the ice-cream cone model.

  2. Markov Chain Modelling of Reliability Analysis and Prediction under Mixed Mode Loading

    Institute of Scientific and Technical Information of China (English)

    SINGH Salvinder; ABDULLAH Shahrum; NIK MOHAMED Nik Abdullah; MOHD NOORANI Mohd Salmi

    2015-01-01

    The reliability assessment for an automobile crankshaft provides an important understanding in dealing with the design life of the component in order to eliminate or reduce the likelihood of failure and safety risks. The failure of a crankshaft is considered catastrophic, as it leads to severe failure of the engine block and its other connecting subcomponents. The reliability of an automotive crankshaft under mixed mode loading is studied using the Markov Chain Model. The Markov Chain is modelled with a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft represents a good case study of a component under mixed mode loading due to the rotating bending and torsion stresses. An estimation of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve and the mean time to failure. It is shown how the various properties of the shape parameter can be used to model the failure characteristics through the bathtub curve. Likewise, an understanding of the patterns posed by the hazard rate can be used to improve the design and increase the life cycle based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast and cost-effective reliability analysis in contrast to costly and lengthy experimental techniques.
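
    A minimal sketch of the two ingredients being combined: a two-state Markov chain over the load modes and Weibull-based reliability metrics. The transition probabilities and Weibull parameters are illustrative assumptions, not the paper's estimates.

    ```python
    import numpy as np
    from math import gamma

    # Two-state Markov chain over load modes (0 = bending, 1 = torsion);
    # transition probabilities are illustrative assumptions.
    P = np.array([[0.7, 0.3],
                  [0.6, 0.4]])
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi /= pi.sum()
    print("long-run share of (bending, torsion):", np.round(pi, 3))

    # Weibull-based metrics derived from an estimated shape parameter.
    beta, eta = 1.8, 1.2e5                      # shape, scale in cycles (assumed)
    t = np.array([2e4, 6e4, 1.2e5, 2.4e5])
    R = np.exp(-(t / eta) ** beta)              # reliability function
    h = (beta / eta) * (t / eta) ** (beta - 1)  # hazard rate
    mttf = eta * gamma(1 + 1 / beta)            # mean cycles to failure
    print("R(t):", np.round(R, 3), " MTTF ~", round(mttf))
    ```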

  3. Aircraft conceptual design modelling incorporating reliability and maintainability predictions

    OpenAIRE

    Vaziry-Zanjany , Mohammad Ali (F)

    1996-01-01

    A computer assisted conceptual aircraft design program has been developed (CACAD). It has an optimisation capability, with extensive break-down in maintenance costs. CACAD's aim is to optimise the size, and configurations of turbofan-powered transport aircraft. A methodology was developed to enhance the reliability of current aircraft systems, and was applied to avionics systems. R&M models of thermal management were developed and linked with avionics failure rate and its ma...

  4. DESIGNING, MODELLING AND OPTIMISING OF AN INTEGRATED RELIABILITY REDUNDANT SYSTEM

    Directory of Open Access Journals (Sweden)

    G. Sankaraiah

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The reliability of a system is generally treated as a function of cost, but in many real-life situations reliability will depend on a variety of factors. It is therefore interesting to probe the hidden impact of constraints apart from cost – such as weight, volume, and space. This paper attempts to study the impact of multiple constraints on system reliability. For the purposes of analysis, an integrated redundant reliability system is considered, modelled and solved by applying a Lagrangian multiplier that gives a real-valued solution for the number of components, for its reliability at each stage, and for the system. The problem is further studied by using a heuristic algorithm and an integer programming method, and is validated by sensitivity analysis to present an integer solution.

    AFRIKAANSE OPSOMMING: The reliability of a system is normally regarded as a function of cost, although in many cases it depends on a variety of factors. It is therefore interesting to investigate the hidden impact of constraints such as mass, volume and space. This article attempts to study the impact of multiple constraints on system reliability. For the analysis, an integrated redundant reliability system is considered, modelled and solved by means of a Lagrange multiplier. The problem is further studied using a heuristic algorithm and integer programming, and validated by means of a sensitivity analysis so that an integer solution can be presented.

  5. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

    configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended for reliability estimation of systems composing of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between...

  6. SIERRA - A 3-D device simulator for reliability modeling

    Science.gov (United States)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
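
    In the same spirit, the sketch below applies an incomplete-LU preconditioner to a CGS iteration using SciPy, with a toy 2-D Poisson matrix standing in for the coupled Poisson/continuity system (problem size and ILU settings are assumptions):

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # ILU-preconditioned CGS: build a sparse test system, factor it
    # incompletely, and hand the preconditioner to the CGS iteration.
    n = 50
    A = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n],
                 shape=(n * n, n * n), format="csc")
    b = np.ones(n * n)

    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner M ~ A^-1

    x, info = spla.cgs(A, b, M=M, maxiter=200)
    print("converged" if info == 0 else f"info={info}",
          "residual:", np.linalg.norm(b - A @ x))
    ```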

  7. Power Electronic Packaging Design, Assembly Process, Reliability and Modeling

    CERN Document Server

    Liu, Yong

    2012-01-01

    Power Electronic Packaging presents an in-depth overview of power electronic packaging design, assembly, reliability and modeling. Since there is a drastic difference between IC fabrication and power electronic packaging, the book systematically introduces typical power electronic packaging design, assembly, reliability and failure analysis and material selection so readers can clearly understand each task's unique characteristics. Power electronic packaging is one of the fastest growing segments in the power electronic industry, due to the rapid growth of power integrated circuit (IC) fabrication, especially for applications like portable, consumer, home, computing and automotive electronics. This book also covers how advances in both semiconductor content and power advanced package design have helped cause advances in power device capability in recent years. The author extrapolates the most recent trends in the book's areas of focus to highlight where further improvement in materials and techniques can d...

  8. Reliability modeling of hydraulic system of drum shearer machine

    Institute of Scientific and Technical Information of China (English)

    SEYED HADI Hoseinie; MOHAMMAD Ataie; REZA Khalookakaei; UDAY Kumar

    2011-01-01

    The hydraulic system plays an important role in supplying power and transmitting it to other working parts of a coal shearer machine. In this paper, the reliability of the hydraulic system of a drum shearer was analyzed. A case study was done in the Tabas Coal Mine in Iran for failure data collection. The results of the statistical analysis show that the time between failures (TBF) data of this system followed the 3-parameter Weibull distribution. There is about a 54% chance that the hydraulic system of the drum shearer will not fail for the first 50 h of operation. The developed model shows that the reliability of the hydraulic system reduces to a zero value after approximately 1 650 hours of operation. The failure rate of this system decreases as time increases. Therefore, corrective maintenance (run-to-failure) was selected as the best maintenance strategy for it.
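
    A minimal sketch of the 3-parameter Weibull reliability function used in such an analysis, R(t) = exp(-((t - gamma)/eta)^beta) for t > gamma; a shape parameter beta < 1 reproduces the decreasing failure rate reported, but the values below are illustrative, not the study's fit:

    ```python
    import numpy as np

    # Three-parameter Weibull reliability for time-between-failures data:
    #   R(t) = exp(-((t - gamma)/eta)^beta) for t > gamma, else 1.
    # Parameter values are illustrative, not the Tabas study's estimates;
    # beta < 1 gives the decreasing failure rate reported in the abstract.
    beta, eta, gamma = 0.9, 80.0, 5.0     # shape, scale (h), location (h)

    def reliability(t):
        t = np.asarray(t, dtype=float)
        z = np.clip(t - gamma, 0.0, None) / eta
        return np.exp(-(z ** beta))

    for t in (50, 200, 800, 1650):
        print(f"R({t:4d} h) = {reliability(t):.3f}")
    ```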

  9. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of the RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has remained challenging despite a huge effort spent on developing a large number of software reliability models, with no consensus yet attained on an appropriate modeling methodology. However, it is realized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of the software would provide a better ground for reliability estimation of safety-critical software. Digitalization of the reactor protection system of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world because digital I and C systems have many better technical features, such as easier configurability and maintainability, than analog I and C systems. Digital I and C systems are also drift-free, and the incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs are established and practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, the digitalization of I and C systems in NPPs introduces uncertainty into the reliability analysis methods for digital systems/components because software failure mechanisms are still unclear.

  10. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Science.gov (United States)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-06-01

    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure in order to increase the design life, eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle, low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability density and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data close to the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.

  11. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Energy Technology Data Exchange (ETDEWEB)

    Nikabdullah, N. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia and Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Singh, S. S. K.; Alebrahim, R.; Azizi, M. A. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); K, Elwaleed A. [Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Noorani, M. S. M. [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia (Malaysia)

    2014-06-19

    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure in order to increase the design life, eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle, low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability density and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data close to the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.

  12. A fast, reliable algorithm for computing frequency responses of state space models

    Science.gov (United States)

    Wette, Matt

    1991-01-01

    Computation of frequency responses for large-order systems described by time-invariant state space models often provides a bottleneck in control system analysis. It is shown that banding the A-matrix in the state space model can effectively reduce the computation time for such systems while maintaining reliability in the results produced.
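
    A minimal sketch of the idea, assuming a tridiagonal A: each frequency point G(jw) = C(jwI - A)^{-1}B then costs one banded solve rather than a dense O(n^3) solve (the system itself is synthetic):

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    # Frequency response G(jw) = C (jwI - A)^{-1} B for a state space
    # model with a banded (here tridiagonal) A: each frequency point
    # costs one banded solve. Synthetic stable system for illustration.
    n = 400
    rng = np.random.default_rng(4)
    main = -1.0 - rng.random(n)              # diagonal of A (stable)
    off = 0.1 * rng.random(n - 1)            # sub/superdiagonal of A
    B = np.zeros(n, dtype=complex); B[0] = 1.0
    C = np.zeros(n); C[0] = 1.0              # measure the first state

    def freq_response(w):
        ab = np.zeros((3, n), dtype=complex) # banded storage of (jw*I - A)
        ab[0, 1:] = -off                     # superdiagonal
        ab[1, :] = 1j * w - main             # main diagonal
        ab[2, :-1] = -off                    # subdiagonal
        return C @ solve_banded((1, 1), ab, B)

    for w in (0.1, 1.0, 10.0):
        print(f"|G(j{w})| = {abs(freq_response(w)):.3e}")
    ```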

  13. Stochastic reliability and maintenance modeling essays in honor of Professor Shunji Osaki on his 70th birthday

    CERN Document Server

    Nakagawa, Toshio

    2013-01-01

    In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of and ongoing research in stochastic reliability and maintenance modeling. Including associated application areas such as dependable computing, performance evaluation, software engineering, communication engineering, distinguished researchers review and build on the contributions over the last four decades by Professor Shunji Osaki. Fundamental yet significant research results are presented and discussed clearly alongside new ideas and topics on stochastic reliability and maintenance modeling to inspire future research. Across 15 chapters readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and workers, managers and engineers engaged in computer, maintenance and management wo...

  14. Fracture mechanics models developed for piping reliability assessment in light water reactors: piping reliability project

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D.O.; Lim, E.Y.; Dedhia, D.D.; Woo, H.H.; Chou, C.K.

    1982-06-01

    The efforts concentrated on modifications of the stratified Monte Carlo code called PRAISE (Piping Reliability Analysis Including Seismic Events) to make it more widely applicable to probabilistic fracture mechanics analysis of nuclear reactor piping. Pipe failures are considered to occur as the result of crack-like defects introduced during fabrication that escape detection during inspections. The code modifications allow the following factors, in addition to those considered in earlier work, to be treated: other materials, failure criteria and subcritical crack growth characteristics; welding residual and vibratory stresses; and longitudinal welds (the original version considered only circumferential welds). The fracture mechanics background for the code modifications is included, and details of the modifications themselves are provided. Additionally, an updated version of the PRAISE user's manual is included. The revised code, known as PRAISE-B, was then applied to a variety of piping problems, including various size lines subject to stress corrosion cracking and vibratory stresses. Analyses including residual stresses and longitudinal welds were also performed.

  15. Reliability prediction from burn-in data fit to reliability models

    CERN Document Server

    Bernstein, Joseph

    2014-01-01

    This work will educate chip and system designers on a method for accurately predicting circuit and system reliability in order to estimate failures that will occur in the field as a function of operating conditions at the chip level. This book will combine the knowledge taught in many reliability publications and illustrate how to use the knowledge presented by the semiconductor manufacturing companies in combination with the HTOL end-of-life testing that is currently performed by the chip suppliers as part of their standard qualification procedure and make accurate reliability predictions. Th

  16. Reliability Measure Model for Assistive Care Loop Framework Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Venki Balasubramanian

    2010-01-01

    Full Text Available Body area wireless sensor networks (BAWSNs) are time-critical systems that rely on the collective data of a group of sensor nodes. Reliable data received at the sink is based on the collective data provided by all the source sensor nodes and not on individual data. Unlike conventional reliability, the definition of retransmission is inapplicable in a BAWSN and would only lead to an elapsed data arrival that is not acceptable for a time-critical application. Time-driven applications require high data reliability to maintain detection and responses. Hence, the transmission reliability for the BAWSN should be based on the critical time. In this paper, we develop a theoretical model to measure a BAWSN's transmission reliability, based on the critical time. The proposed model is evaluated through simulation and then compared with the experimental results conducted in our existing Active Care Loop Framework (ACLF). We further show the effect of the sink buffer on transmission reliability after a detailed study of various other co-existing parameters.

  17. Operation reliability assessment for cutting tools by applying a proportional covariate model to condition monitoring information.

    Science.gov (United States)

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-09-25

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools.

  18. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    Science.gov (United States)

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748

  19. Technique for Early Reliability Prediction of Software Components Using Behaviour Models.

    Science.gov (United States)

    Ali, Awad; N A Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction.

  20. System reliability assessment with an approximate reasoning model

    Energy Technology Data Exchange (ETDEWEB)

    Eisenhawer, S.W.; Bott, T.F.; Helm, T.M.; Boerigter, S.T.

    1998-12-31

    The projected service life of weapons in the US nuclear stockpile will exceed the original design life of their critical components. Interim metrics are needed to describe weapon states for use in simulation models of the nuclear weapons complex. The authors present an approach to this problem based upon the theory of approximate reasoning (AR) that allows meaningful assessments to be made in an environment where reliability models are incomplete. AR models are designed to emulate the inference process used by subject matter experts. The emulation is based upon a formal logic structure that relates evidence about components. This evidence is translated using natural language expressions into linguistic variables that describe membership in fuzzy sets. The authors introduce a metric that measures the acceptability of a weapon to nuclear deterrence planners. Implication rule bases are used to draw a series of forward chaining inferences about the acceptability of components, subsystems and individual weapons. They describe each component in the AR model in some detail and illustrate its behavior with a small example. The integration of the acceptability metric into a prototype model to simulate the weapons complex is also described.

  1. Nonlinear Mixed-Effects Models for Repairable Systems Reliability

    Institute of Scientific and Technical Information of China (English)

    TAN Fu-rong; JIANG Zhi-bin; KUO Way; Suk Joo BAE

    2007-01-01

    Mixed-effects models, also called random-effects models, are a regression type of analysis which enables the analyst to not only describe the trend over time within each subject, but also to describe the variation among different subjects. Nonlinear mixed-effects models provide a powerful and flexible tool for handling the unbalanced count data. In this paper, nonlinear mixed-effects models are used to analyze the failure data from a repairable system with multiple copies. By using this type of models, statistical inferences about the population and all copies can be made when accounting for copy-to-copy variance. Results of fitting nonlinear mixed-effects models to nine failure-data sets show that the nonlinear mixed-effects models provide a useful tool for analyzing the failure data from multi-copy repairable systems.

  2. A Markov chain model for reliability growth and decay

    Science.gov (United States)

    Siegrist, K.

    1982-01-01

    A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial, and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain, and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure' and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
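
    A minimal sketch of the special case, under an assumed dynamic in which an assignable-cause failure triggers a repair and other outcomes leave the state unchanged (all probabilities are illustrative, not the paper's):

    ```python
    import numpy as np

    # Two states (0 = unrepaired, 1 = repaired); three trial outcomes with
    # state-dependent probabilities (inherent failure, assignable-cause
    # failure, success). Assumed dynamics: an assignable-cause failure
    # triggers a repair; otherwise the state is kept.
    p_out = {0: np.array([0.05, 0.25, 0.70]),   # unrepaired
             1: np.array([0.05, 0.00, 0.95])}   # repaired: cause removed

    P = np.zeros((2, 2))
    for s in (0, 1):
        p_inh, p_asg, p_suc = p_out[s]
        P[s, 1] += p_asg                 # repair after assignable-cause failure
        P[s, s] += p_inh + p_suc         # otherwise state unchanged

    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi /= pi.sum()
    reliability = sum(pi[s] * p_out[s][2] for s in (0, 1))
    print("steady state:", np.round(pi, 3),
          " long-run success prob ~", round(reliability, 3))
    ```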

  3. Uncertainty quantification and reliability assessment in operational oil spill forecast modeling system.

    Science.gov (United States)

    Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao

    2017-03-15

    As oil transport increases in the Texas bays, greater risk of ship collisions will become a challenge, yielding oil spill accidents as a consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. The state-of-the-art operational oil spill forecast modeling system raises oil spill response to a new stage. However, uncertainty in the predicted data inputs often compromises the reliability of the forecast result, leading to misdirection in contingency planning. Understanding forecast uncertainty and reliability therefore becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters for generating forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map with the associated hindcast simulation. A HyosPy-based simple statistical model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, helping emergency managers improve the capability of real-time operational oil spill response and impact assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....

  5. Fatigue reliability based on residual strength model with hybrid uncertain parameters

    Institute of Scientific and Technical Information of China (English)

    Jun Wang; Zhi-Ping Qiu

    2012-01-01

    The aim of this paper is to evaluate the fatigue reliability with hybrid uncertain parameters based on a residual strength model. By solving the non-probabilistic set-based reliability problem and analyzing the reliability with randomness, the fatigue reliability with hybrid parameters can be obtained. The presented hybrid model can adequately consider all uncertainties affecting the fatigue reliability with hybrid uncertain parameters. A comparison among the presented hybrid model, the non-probabilistic set-theoretic model and the conventional random model is made through two typical numerical examples. The results show that the presented hybrid model, which can ensure structural security, is effective and practical.

  6. Reliability reallocation models as a support tools in traffic safety analysis.

    Science.gov (United States)

    Bačkalić, Svetlana; Jovanović, Dragan; Bačkalić, Todor

    2014-04-01

    One of the essential questions placed before a road authority is where to act first, i.e. which road sections should be treated in order to achieve the desired level of reliability of a particular road; this is also the subject of this research. The paper shows how reliability reallocation theory can be applied in the safety analysis of a road consisting of sections. The model has been successfully tested using two apportionment techniques - ARINC and the minimum effort algorithm. The given methods were applied in the traffic safety analysis as a basic step, for the purpose of achieving a higher level of reliability. The methods previously used for selecting hazardous locations do not provide precise values for the required frequency of accidents, i.e. the time period between the occurrences of two accidents. In other words, they do not allow for the establishment of a connection between a precise demand for increased reliability (expressed as a percentage) and the selection of particular road sections for further analysis. The paper shows that reallocation models can also be applied in road safety analysis, or more precisely, as part of the measures for increasing their level of safety. A tool has been developed for selecting road sections for treatment on the basis of a precisely defined increase in the level of reliability of a particular road, i.e. the mean time between the occurrences of two accidents.
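
    The ARINC apportionment step can be sketched directly: each element's target failure (here, accident) rate is its current share of the system rate scaled to the demanded system target, lam_i_target = (lam_i / sum(lam)) * lam_sys_target. The section rates and the 20% reliability demand below are illustrative assumptions:

    ```python
    import numpy as np

    # ARINC apportionment: allocate a required system accident rate to
    # serial elements in proportion to their current rates.
    lam = np.array([2.0, 0.5, 1.5, 4.0])     # current rates per road section
    lam_sys_target = 0.8 * lam.sum()         # demand: 20% fewer accidents

    w = lam / lam.sum()
    lam_target = w * lam_sys_target
    reduction = lam - lam_target

    for i, (cur, tgt) in enumerate(zip(lam, lam_target)):
        print(f"section {i}: {cur:.2f} -> {tgt:.2f} accidents/yr")
    print("largest required reduction at section", int(np.argmax(reduction)))
    ```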

  7. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
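
    A minimal sketch of the masking step, computing CV = std/|mean| across an ensemble of ΔCFS maps and keeping cells with CV ≤ 0.5; the synthetic ensemble below stands in for the 2500 maps:

    ```python
    import numpy as np

    # Reliability masking of Coulomb stress changes across an ensemble of
    # candidate source models: keep only grid cells where |mean| is at
    # least twice the standard deviation (CV = std/|mean| <= 0.5).
    rng = np.random.default_rng(5)
    models, ny, nx = 2500, 60, 60
    dcfs = 0.1 * rng.standard_normal((models, ny, nx))   # synthetic maps
    dcfs += np.linspace(-0.3, 0.3, nx)   # a fake negative/positive lobe pair

    mean = dcfs.mean(axis=0)
    std = dcfs.std(axis=0)
    cv = std / np.maximum(np.abs(mean), 1e-12)

    reliable = cv <= 0.5
    print(f"reliable cells: {reliable.mean():.1%} of the map")
    ```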

  8. Achieving a high-reliability organization through implementation of the ARCC model for systemwide sustainability of evidence-based practice.

    Science.gov (United States)

    Melnyk, Bernadette Mazurek

    2012-01-01

    High-reliability health care organizations are those that provide care that is safe and that minimizes errors while achieving exceptional performance in quality and safety. This article presents major concepts and characteristics of a patient safety culture and a high-reliability health care organization and explains how building a culture of evidence-based practice can assist organizations in achieving high reliability. The ARCC (Advancing Research and Clinical practice through close Collaboration) model for systemwide implementation and sustainability of evidence-based practice is highlighted as a key strategy for achieving high reliability in health care organizations.

  9. Comprehensive Care For Joint Replacement Model - Provider Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — Comprehensive Care for Joint Replacement Model - provider data. This data set includes provider data for two quality measures tracked during an episode of care:...

  10. Modeling of reliability and performance assessment of a dissimilar redundancy actuation system with failure monitoring

    Institute of Scientific and Technical Information of China (English)

    Wang Shaoping; Cui Xiaoyu; Shi Jian; Mileta M. Tomovic; Jiao Zongxia

    2016-01-01

    The actuation system is a vital system in an aircraft, providing the force necessary to move flight control surfaces. The system has a significant influence on the overall aircraft performance and its safety. In order to further increase the already high reliability and safety, Airbus has implemented a dissimilar redundancy actuation system (DRAS) in its aircraft. The DRAS consists of a hydraulic actuation system (HAS) and an electro-hydrostatic actuation system (EHAS), in which the HAS utilizes a hydraulic source (HS) to move the control surface and the EHAS utilizes an electrical supply (ES) to provide the motion force. This paper focuses on the performance degradation processes and fault monitoring strategies of the DRAS, establishes its reliability model based on generalized stochastic Petri nets (GSPN), and carries out a reliability assessment considering the fault monitoring coverage rate and the false alarm rate. The results indicate that the proposed reliability model of the DRAS, considering the fault monitoring, can express its fault logical relations and redundancy degradation process and identify potential safety hazards.

  11. Reliability and Maintainability model (RAM) user and maintenance manual. Part 2

    Science.gov (United States)

    Ebeling, Charles E.

    1995-01-01

    This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.

  12. A particle swarm model for estimating reliability and scheduling system maintenance

    Science.gov (United States)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.
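
    As a rough sketch of how a particle swarm can serve reliability estimation, the snippet below uses a generic PSO, not the authors' specific variant, to fit Weibull reliability parameters to synthetic failure-time data by maximum likelihood.

```python
# Generic particle swarm optimiser fitting Weibull parameters to
# (made-up) failure times; R(t) = exp(-(t/lam)**k) once fitted.
import numpy as np

rng = np.random.default_rng(7)
data = rng.weibull(1.8, 400) * 500.0        # synthetic failure times

def neg_log_lik(params):
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = data / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)

def pso(fn, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([fn(p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([fn(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g

k_hat, lam_hat = pso(neg_log_lik, bounds=[(0.1, 10.0), (1.0, 2000.0)])
print(f"shape ~ {k_hat:.2f}, scale ~ {lam_hat:.1f}")
```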

  13. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time-dependent aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over short periods of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment can provide a good basis for continued operation of instrumentation and control systems.
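
    The proposal to feed piecewise-constant, age-dependent unavailabilities into a PSA model can be sketched as follows; the Weibull aging law and its parameters are assumptions chosen for illustration.

```python
# Yearly constant unavailability derived from an age-dependent
# reliability model, suitable as a PSA basic-event probability.
import numpy as np

BETA, ETA = 2.5, 40.0      # assumed Weibull shape (aging) and scale [years]

def survival(t):
    return np.exp(-(t / ETA) ** BETA)

def yearly_unavailability(year):
    """Conditional probability of failing within [year, year+1),
    given survival to the start of the year."""
    return 1.0 - survival(year + 1) / survival(year)

for y in (0, 10, 20, 30):
    print(f"year {y:2d}: q = {yearly_unavailability(y):.4f}")
# The increasing q(year) values show how aging enters the PSA model as
# a sequence of piecewise-constant basic-event probabilities.
```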

  14. Reliability and Maintainability Model (RAM): User and Maintenance Manual. Part 2; Improved Supportability Analysis

    Science.gov (United States)

    Ebeling, Charles E.

    1996-01-01

    This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. This Manual updates and supersedes the 1995 RAM User and Maintenance Manual. Changes and enhancements from the 1995 version of the model are primarily a result of the addition of more recent aircraft and shuttle R&M data.

  15. Quantification of Wave Model Uncertainties Used for Probabilistic Reliability Assessments of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2015-01-01

    Wave models used for site assessments are subjected to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Four different wave models are considered, and validation...... uncertainties can be implemented in probabilistic reliability assessments....

  16. Transcutaneous PTCCO2 measurement in combination with arterial blood gas analysis provides superior accuracy and reliability in ICU patients.

    Science.gov (United States)

    Spelten, Oliver; Fiedler, Fritz; Schier, Robert; Wetsch, Wolfgang A; Hinkelbein, Jochen

    2017-02-01

    Hyper- or hypoventilation may have serious clinical consequences in critically ill patients and should generally be avoided, especially in neurosurgical patients. Therefore, monitoring of carbon dioxide partial pressure by intermittent arterial blood gas analysis (PaCO2) has become standard in intensive care units (ICUs). However, several additional methods are available to determine PCO2, including end-tidal (PETCO2) and transcutaneous (PTCCO2) measurements. The aim of this study was to compare the accuracy and reliability of different methods to determine PCO2 in mechanically ventilated patients on ICU. After approval of the local ethics committee, PCO2 was determined in n = 32 consecutive ICU patients requiring mechanical ventilation: (1) arterial PaCO2 blood gas analysis with Radiometer ABL 625 (ABL; gold standard), (2) arterial PaCO2 analysis with the Immediate Response Mobile Analyzer (IRMA), (3) end-tidal PETCO2 by a Propaq 106 EL monitor and (4) transcutaneous PTCCO2 determination by a Tina TCM4. The Bland-Altman method was used for statistical analysis; p < 0.05 was considered significant. Regression analysis revealed good correlation between PaCO2 by IRMA and ABL (R² = 0.766; p < 0.05). Bland-Altman analysis revealed a bias and precision of 2.0 ± 3.7 mmHg for the IRMA, 2.2 ± 5.7 mmHg for transcutaneous, and -5.5 ± 5.6 mmHg for end-tidal measurement. Arterial CO2 partial pressure by IRMA (PaCO2) and PTCCO2 provided greater accuracy compared to the reference measurement (ABL) than the end-tidal CO2 measurements in critically ill, mechanically ventilated patients.
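
    The bias and precision figures quoted above are the two standard Bland-Altman statistics, which can be reproduced as follows (the PCO2 arrays are invented for the sketch):

```python
# Bland-Altman agreement statistics: bias (mean difference) and
# precision (SD of differences) between a test method and a reference.
import numpy as np

reference = np.array([38.0, 42.5, 45.1, 51.0, 36.8, 40.2])   # ABL PaCO2 [mmHg]
test      = np.array([40.1, 44.0, 47.9, 53.5, 39.0, 42.8])   # e.g. PTCCO2

diff = test - reference
bias, precision = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * precision, bias + 1.96 * precision)     # 95% limits of agreement
print(f"bias = {bias:.1f} mmHg, precision = {precision:.1f} mmHg, LoA = {loa}")
```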

  17. Customer-Provider Strategic Alignment: A Maturity Model

    Science.gov (United States)

    Luftman, Jerry; Brown, Carol V.; Balaji, S.

    This chapter presents a new model for assessing the maturity of a customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need to be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.

  18. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    Science.gov (United States)

    Fagundo, Arturo

    1994-01-01

    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
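
    A toy version of this hierarchical idea, passing only a scalar subsystem reliability upward rather than the multiple aggregate states the thesis supports, might look like this; rates and mission time are illustrative:

```python
# Solve a small subsystem CTMC exactly, then combine subsystem
# reliabilities at the system level (here: independent subsystems in series).
import numpy as np
from scipy.linalg import expm

def parallel_pair_reliability(lam, mu, t):
    """CTMC states: 0 = both up, 1 = one up, 2 = failed (absorbing)."""
    Q = np.array([[-2 * lam, 2 * lam, 0.0],
                  [mu, -(mu + lam), lam],
                  [0.0, 0.0, 0.0]])
    p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)
    return 1.0 - p[2]

t = 1000.0                                       # mission time [h]
subsystems = [(1e-4, 1e-2), (2e-4, 5e-3), (5e-5, 1e-2)]   # (lam, mu) pairs
r_sub = [parallel_pair_reliability(l, m, t) for l, m in subsystems]
print("subsystem reliabilities:", [f"{r:.5f}" for r in r_sub])
print("system reliability:", np.prod(r_sub))
```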

  19. Modeling Parameters of Reliability of Technological Processes of Hydrocarbon Pipeline Transportation

    Directory of Open Access Journals (Sweden)

    Shalay Viktor

    2016-01-01

    Full Text Available On the basis of methods of system analysis and parametric reliability theory, mathematical modeling of the operation of oil and gas equipment for reliability monitoring was conducted according to dispatching data. To check the goodness of fit of the empirical distributions, an algorithm and mathematical methods of analysis were worked out for on-line use under changing operating conditions. An analysis of the physical cause-and-effect mechanism between the key factors and the changing parameters of the technical systems of oil and gas facilities is made, and the basic types of distribution of the technical parameters are defined. The adequacy of the distribution type fitted to the analyzed parameters is evaluated using the Kolmogorov criterion, as the most universal, accurate and adequate test for verifying the distribution of continuous processes in complex technical systems. Calculation methods are provided for supervision by independent bodies performing risk assessment of safety facilities.
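
    A minimal sketch of the Kolmogorov goodness-of-fit check recommended above, using simulated dispatching data in place of real monitoring records:

```python
# Kolmogorov-Smirnov check of a fitted distribution against the sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=50.0, scale=5.0, size=300)    # e.g. throughput readings

loc, scale = stats.norm.fit(sample)                    # fitted parameters
d_stat, p_value = stats.kstest(sample, "norm", args=(loc, scale))
print(f"D = {d_stat:.3f}, p = {p_value:.3f}")
# Note: fitting parameters from the same sample makes the plain K-S
# p-value optimistic; a Lilliefors-type correction would be stricter.
```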

  20. Using multi-model averaging to improve the reliability of catchment scale nitrogen predictions

    Science.gov (United States)

    Exbrayat, J.-F.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2013-01-01

    Hydro-biogeochemical models are used to foresee the impact of mitigation measures on water quality. Usually, scenario-based studies rely on single model applications. This is done in spite of the widely acknowledged advantage of ensemble approaches to cope with structural model uncertainty issues. As an attempt to demonstrate the reliability of such multi-model efforts in the hydro-biogeochemical context, this methodological contribution proposes an adaptation of the reliability ensemble averaging (REA) philosophy to nitrogen losses predictions. A total of 4 models are used to predict the total nitrogen (TN) losses from the well-monitored Ellen Brook catchment in Western Australia. Simulations include re-predictions of current conditions and a set of straightforward management changes targeting fertilisation scenarios. Results show that, in spite of good calibration metrics, one of the models provides a very different response to management changes. This behaviour leads the simple average of the ensemble members to also predict reductions in TN export that are not in agreement with the other models. However, considering the convergence of model predictions in the more sophisticated REA approach assigns more weight to previously less well-calibrated models that are more in agreement with each other. This method also avoids having to disqualify any of the ensemble members.
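
    The REA idea of weighting ensemble members by both performance and convergence can be sketched as follows; the weighting scheme is a simplified reading of the approach, and all numbers are invented:

```python
# Simplified reliability ensemble averaging (REA): weights combine model
# performance (bias against observations) and convergence (distance to
# the weighted mean), iterated to a fixed point.
import numpy as np

obs_tn = 12.0                                        # observed TN export [t/yr]
pred_current = np.array([11.5, 12.4, 13.0, 18.0])    # 4 models, current conditions
pred_scenario = np.array([10.1, 10.9, 11.6, 6.0])    # same models, scenario run

eps = 1e-6
weights = np.ones_like(pred_current) / len(pred_current)
for _ in range(100):
    mean_scen = np.sum(weights * pred_scenario)
    perf = 1.0 / (np.abs(pred_current - obs_tn) + eps)      # performance factor
    conv = 1.0 / (np.abs(pred_scenario - mean_scen) + eps)  # convergence factor
    new = perf * conv
    new /= new.sum()
    if np.allclose(new, weights, atol=1e-10):
        break
    weights = new

print("weights:", np.round(weights, 3))
print("simple mean:", pred_scenario.mean(), " REA mean:", np.sum(weights * pred_scenario))
```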

  1. Semi-Markov Models for Degradation-Based Reliability

    Science.gov (United States)

    2010-01-01

    Applications include aircraft, marine systems, and machinery (Jardine and Anderson, 1985; Jardine et al., 1987, 1989; Zhan et al., 2003), and an excellent review of proportional hazards models (PHMs) is cited. Only citation fragments of the record survive, among them: Jardine, A.K.S. and Anderson, M. (1985) Use of concomitant variables for reliability estimation, Maintenance Management International, 5, 135-140; Jardine, A.K.S., Anderson, P.M. and Mann, D.S. (1987); and a paper on reliability distributions in a time-varying environment, IEEE Transactions on Reliability, 57, 539-550.

  2. Reliability and validity of measurements on digital study models and plaster models.

    Science.gov (United States)

    Reuschl, Ralph Philip; Heuer, Wieland; Stiesch, Meike; Wenzel, Daniela; Dittmer, Marc Philipp

    2016-02-01

    To compare manual plaster cast and digitized model analysis for accuracy and efficiency. Nineteen plaster models of orthodontic patients in permanent dentition were analyzed by two calibrated examiners. Analyses were performed with a diagnostic calliper and by computer-assisted analysis after digitization of the plaster models. The reliability and efficiency of the different examiners and methods were compared statistically using a mixed model. Statistically significant differences were found for comparisons of all 28 teeth (P < 0.05). Digitized model analysis appears to be an adequate, reliable, and time-saving alternative to analogue model analysis using a calliper.

  3. A Workforce Design Model: Providing Energy to Organizations in Transition

    Science.gov (United States)

    Halm, Barry J.

    2011-01-01

    The purpose of this qualitative study was to examine the change in performance realized by a professional services organization, which resulted in the Life Giving Workforce Design (LGWD) model through a grounded theory research design. This study produced a workforce design model characterized as an organizational blueprint that provides virtuous…

  4. The reliability model of the fault-tolerant computing system with triple-modular redundancy based on the independent nodes

    Science.gov (United States)

    Rahman, P. A.; Bobkova, E. Yu

    2017-01-01

    This paper deals with a reliability model of a restorable non-stop computing system with triple-modular redundancy based on independent computing nodes, taking into consideration the finite time for node activation and the different node failure rates in the active and passive states. The generalized reliability model obtained by the authors, together with calculation formulas for the reliability indices of a system based on identical and independent computing nodes with a given threshold for the number of active nodes at which the system is considered operable, is also discussed. Finally, the application of the generalized model to the particular case of a non-stop restorable computing system with triple-modular redundancy based on independent nodes, together with calculation examples for the reliability indices, is provided.
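
    Ignoring the activation-time and passive-state effects that the authors model, the core threshold calculation reduces to the classic k-out-of-3 formula; the failure rate below is an assumption:

```python
# k-out-of-3 reliability for triple-modular redundancy with an
# operability threshold k; k=2 gives the familiar 3r^2 - 2r^3.
from math import comb, exp

def tmr_reliability(t, lam=1e-4, k=2):
    """P(at least k of 3 identical independent nodes are up at time t)."""
    r = exp(-lam * t)
    return sum(comb(3, j) * r**j * (1 - r) ** (3 - j) for j in range(k, 4))

for t in (1_000, 10_000):
    print(t, tmr_reliability(t))
```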

  5. Modular System Modeling for Quantitative Reliability Evaluation of Technical Systems

    Directory of Open Access Journals (Sweden)

    Stephan Neumann

    2016-01-01

    Full Text Available In modern times, it is necessary to offer reliable products to match the statutory directives concerning product liability and the high expectations of customers for durable devices. Furthermore, to maintain a high competitiveness, engineers need to know as accurately as possible how long their product will last and how to influence the life expectancy without expensive and time-consuming testing. As the components of a system are responsible for the system reliability, this paper introduces and evaluates calculation methods for life expectancy of common machine elements in technical systems. Subsequently, a method for the quantitative evaluation of the reliability of technical systems is proposed and applied to a heavy-duty power shift transmission.

  6. Digital Avionics Information System (DAIS): Reliability and Maintainability Model Users Guide. Final Report, May 1975-July 1977.

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    This report provides a complete guide to the stand-alone mode operation of the reliability and maintainability (R&M) model, which was developed to facilitate the performance of design versus cost trade-offs within the digital avionics information system (DAIS) acquisition process. The features and structure of the model, its input data…

  7. An Assessment of the VHTR Safety Distance Using the Reliability Physics Model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joeun; Kim, Jintae; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2015-10-15

    In Korea, planning for the production of hydrogen using high temperature from nuclear power is in progress. Producing hydrogen from nuclear plants requires a supply temperature above 800 °C; the Very High Temperature Reactor (VHTR), which can provide about 950 °C, is therefore suitable. Under high temperature and corrosion, where hydrogen might be released easily, a hydrogen production facility coupled to a VHTR carries a danger of explosion, and an explosion would damage not only the facility itself but also the VHTR, resulting in an unsafe situation with serious consequences. From a thermal-hydraulics standpoint, however, a long separation distance lowers efficiency. Thus, in this study, a methodology for assessing the safety distance between the hydrogen production facilities and the VHTR is developed with a reliability physics model. Based on the standard safety criterion of 1 × 10⁻⁶, the safety distance between the hydrogen production facilities and the VHTR is calculated with the reliability physics model to be 60-100 m. In the future, detailed assessment of the characteristics of the VHTR, its capacity to resist the pressure of an external hydrogen explosion, and the overpressure from large detonation volumes is expected to yield a more precise safety distance using this reliability physics model.
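
    A deliberately schematic version of the reliability-physics screening is sketched below; the explosion frequency, attenuation law and fragility curve are hypothetical placeholders, not the paper's physics:

```python
# Annual risk at distance d = P(explosion) * P(capacity < overpressure(d)),
# compared against the 1e-6 criterion.  All numbers are placeholders.
from scipy import stats

P_EXPLOSION = 1e-3                        # assumed annual explosion frequency
CAP_MEDIAN, CAP_BETA = 30.0, 0.4          # assumed lognormal fragility [kPa]

def overpressure(d):
    """Hypothetical peak overpressure [kPa] at distance d [m]."""
    return 2.0e5 / d ** 2

def annual_risk(d):
    p_fail = stats.lognorm.cdf(overpressure(d), CAP_BETA, scale=CAP_MEDIAN)
    return P_EXPLOSION * p_fail

for d in range(40, 201, 20):
    r = annual_risk(d)
    print(f"d = {d:3d} m   risk = {r:.2e}   {'OK' if r < 1e-6 else 'too close'}")
```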

  8. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    Science.gov (United States)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, the failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analysis is more beneficial than the classical approach in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in one true parameter relative to another changes the value of the standard deviation in the opposite direction. For perfect information on the prior distribution, the Bayesian estimation methods are better than those of maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
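
    The classical (maximum-likelihood) counterpart of the paper's Bayesian analysis can be sketched for two independent Weibull failure causes, with each cause treated as right-censored by the other; the data are simulated so the true parameters are known:

```python
# Competing-risks MLE for two independent Weibull causes: observed time
# is the minimum of the latent lifetimes, and for each cause the other
# cause's failures count as right-censored observations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
true = {"A": (1.5, 100.0), "B": (3.0, 120.0)}            # (shape, scale)
tA = true["A"][1] * rng.weibull(true["A"][0], 1000)
tB = true["B"][1] * rng.weibull(true["B"][0], 1000)
t_obs = np.minimum(tA, tB)
cause = np.where(tA < tB, "A", "B")

def nll(params, fail_t, cens_t):
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    zf, zc = fail_t / lam, cens_t / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(zf) - zf**k   # failures
    log_s = -(zc**k)                                         # censored
    return -(log_f.sum() + log_s.sum())

for c in ("A", "B"):
    fail, cens = t_obs[cause == c], t_obs[cause != c]
    res = minimize(nll, x0=[1.0, t_obs.mean()], args=(fail, cens),
                   method="Nelder-Mead")
    print(c, "true", true[c], "est", np.round(res.x, 2))
```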

  9. Probabilistic Structural Analysis and Reliability Using NESSUS With Implemented Material Strength Degradation Model

    Science.gov (United States)

    Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high cycle fatigue yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness. The reliability did not change significantly for a change in distribution type except for a change in

  10. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory.

    Science.gov (United States)

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-18

    Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, in situations where the evidence highly conflicts, it may produce a counterintuitive result. To address the issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability, is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
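
    A minimal implementation of the evidence-combination core, Shafer discounting by sensor reliability followed by Dempster's rule, is sketched below with invented sensor reports; the paper's distance- and entropy-based dynamic reliabilities are not reproduced here:

```python
# Dempster's rule for two discounted sensor reports over a small frame
# of fault hypotheses.  Discounting moves mass (1 - alpha) to the frame.
from itertools import product

FRAME = frozenset({"F1", "F2", "F3"})

def discount(m, alpha):
    """Shafer discounting: scale masses by alpha, rest goes to the frame."""
    out = {A: alpha * v for A, v in m.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
    return out

def combine(m1, m2):
    raw, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            raw[inter] = raw.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in raw.items()}, conflict

m1 = {frozenset({"F1"}): 0.9, frozenset({"F2"}): 0.1}        # sensor 1
m2 = {frozenset({"F2"}): 0.9, frozenset({"F1", "F3"}): 0.1}  # conflicting sensor 2
fused, k = combine(discount(m1, 0.95), discount(m2, 0.6))    # sensor 2 less reliable
print("conflict:", round(k, 3))
for A, v in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(A), round(v, 3))
```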

  11. Fault maintenance trees: reliability centered maintenance via statistical model checking

    NARCIS (Netherlands)

    Ruijters, Enno; Guck, Dennis; Drolenga, Peter; Stoelinga, Mariëlle

    2016-01-01

    The current trend in infrastructural asset management is towards risk-based (a.k.a. reliability centered) maintenance, promising better performance at lower cost. By maintaining crucial components more intensively than less important ones, dependability increases while costs decrease. This requires
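
    A rough Monte Carlo illustration of the fault-maintenance-tree idea, component failures interleaved with periodic renewal under an AND/OR top event, follows; the tree structure, rates and inspection interval are invented, and the paper itself uses statistical model checking rather than plain simulation:

```python
# Monte Carlo estimate of the top-event probability for a small
# AND/OR tree whose components are renewed at periodic maintenance.
import numpy as np

rng = np.random.default_rng(3)
HORIZON, INSPECT = 10.0, 2.0            # years; renew at every inspection

def fails_before_horizon(mtbf):
    """Does the component fail within any maintenance interval?"""
    t = 0.0
    while t < HORIZON:
        life = rng.exponential(mtbf)
        window = min(INSPECT, HORIZON - t)
        if life < window:
            return True                 # failure before the next renewal
        t += window
    return False

def top_event():
    a = fails_before_horizon(8.0)
    b = fails_before_horizon(8.0)
    c = fails_before_horizon(50.0)
    return (a and b) or c               # TOP = (A AND B) OR C

n = 100_000
p_top = sum(top_event() for _ in range(n)) / n
print(f"P(top event within {HORIZON} yr) ~ {p_top:.4f}")
```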

  13. Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2013-07-01

    This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.

  14. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of 10 Gbps high speed networks and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
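
    A minimal fluid model in the spirit described above (coupled ODEs for the average TCP window and a RED-controlled queue, in the style of Misra, Gong and Towsley) can be integrated with simple Euler steps; all link and RED parameters are illustrative:

```python
# Fluid model of N TCP flows sharing a RED bottleneck: W(t) is the
# average congestion window, q(t) the queue, integrated by Euler steps.
N, C, RTT0 = 50, 12500.0, 0.1          # flows, capacity [pkt/s], base RTT [s]
MIN_TH, MAX_TH, P_MAX = 100, 500, 0.1  # RED thresholds [pkt] and max mark prob

def red_prob(q):
    if q <= MIN_TH:
        return 0.0
    if q >= MAX_TH:
        return P_MAX
    return P_MAX * (q - MIN_TH) / (MAX_TH - MIN_TH)

dt, T = 1e-3, 30.0
W, q = 1.0, 0.0
for step in range(int(T / dt)):
    rtt = RTT0 + q / C                          # queueing delay adds to RTT
    p = red_prob(q)
    dW = 1.0 / rtt - (W / 2.0) * (W / rtt) * p  # AIMD: additive up, halve on mark
    dq = N * W / rtt - C
    W = max(W + dW * dt, 1.0)
    q = min(max(q + dq * dt, 0.0), 2 * MAX_TH)
    if step % int(5 / dt) == 0:
        print(f"t={step*dt:5.1f}s  W={W:6.2f}  q={q:6.1f}  util={min(N*W/rtt/C, 1):.2f}")
```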

  15. Novel Software Reliability Estimation Model for Altering Paradigms of Software Engineering

    Directory of Open Access Journals (Sweden)

    Ritika Wason

    2012-05-01

    Full Text Available A number of different software engineering paradigms like Component-Based Software Engineering (CBSE), Autonomic Computing, Service-Oriented Computing (SOC), Fault-Tolerant Computing and many others are being researched currently. These paradigms denote a paradigm shift from the currently mainstream object-oriented paradigm and are altering the way we view, design, develop and exercise software. Though these paradigms indicate a major shift in the way we design and code software, we still rely on traditional reliability models for estimating the reliability of any of the above systems. This paper analyzes the underlying characteristics of these paradigms and proposes a novel finite-automata-based reliability model as a suitable model for estimating the reliability of modern, complex, distributed and critical software applications. We further outline the basic framework for an intelligent, automata-based reliability model that can be used for accurate estimation of the system reliability of software systems at any point in the software life cycle.

  16. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  17. A new approach to provide high-reliability data systems without using space-qualified electronic components

    Science.gov (United States)

    Haebel, Wolfgang

    2004-08-01

    This paper describes the present situation and the expected trends with regard to the availability of electronic components, their quality levels, technology trends and sensitivity to the space environment. Many recognized vendors have already discontinued their MIL production lines, and state-of-the-art components will in many cases not be offered at this quality level because of the shrinking market. It therefore becomes obvious that new methods of building reliable data systems for space applications without high-rel parts need to be considered. One of the most promising approaches is the identification, masking and suppression of faults by developing fault-tolerant computer systems, which is described in this paper.

  18. Testing the stability and reliability of starspot modelling.

    Science.gov (United States)

    Kovari, Zs.; Bartus, J.

    1997-07-01

    Since the mid 70's, different starspot modelling techniques have been used to describe the observed spot variability on active stars. Spot positions and temperatures are calculated by applying surface integration techniques or solving analytic equations on observed photometric data. Artificial spotted light curves were generated, using the analytic expressions of Budding (1977Ap&SS..48..207B), to test how different constraints, like the intrinsic scatter of the observed data or the angle of inclination, affect the spot solutions. Interactions between the different parameters, like inclination, latitude and spot size, were also investigated. The results of re-modelling the generated data were scrutinized statistically. It was found that (1) 0.002-0.005 mag of photometric accuracy is required to recover geometrical spot parameters within an acceptable error box; (2) even a 0.03-0.05 mag error in unspotted brightness substantially affects the recovery of the original spot distribution; (3) especially at low inclination, under- or overestimation of inclination by 10° leads to an important systematic error in spot latitude and size; (4) when the angle of inclination i<~20°, photometric spot modelling is unable to provide satisfactory information on spot location and size.

  19. Mathematical modeling and reliability analysis of a 3D Li-ion battery

    Directory of Open Access Journals (Sweden)

    RICHARD HONG PENG LIANG

    2014-02-01

    Full Text Available The three-dimensional (3D) Li-ion battery presents an effective solution to issues affecting its two-dimensional counterparts, as it is able to attain high energy capacities for the same areal footprint without sacrificing power density. A 3D battery has key structural features extending in and fully utilizing 3D space, allowing it to achieve greater reliability and longevity. This study applies an electrochemical-thermal coupled model to a checkerboard array of alternating positive and negative electrodes in a 3D architecture with either square or circular electrodes. The mathematical model comprises the transient conservation of charge, species, and energy together with electroneutrality, constitutive relations and relevant initial and boundary conditions. A reliability analysis carried out to simulate malfunctioning of either a positive or negative electrode reveals that although there are deviations in electrochemical and thermal behavior for electrodes adjacent to the malfunctioning electrode as compared to those in a fully functioning array, there is little effect on electrodes further away, demonstrating the redundancy that a 3D electrode array provides. The results demonstrate that 3D batteries can reliably and safely deliver power even if a component malfunctions, a strong advantage over conventional 2D batteries.

  20. Modelling Reliability-adaptive Multi-system Operation

    Institute of Scientific and Technical Information of China (English)

    Uwe K. Rakowsky

    2006-01-01

    This contribution discusses the concept of Reliability-Adaptive Systems (RAS) in multi-system operation. A fleet of independently operating systems and a single maintenance unit are considered. The objective in this paper is to increase overall performance, or workload respectively, by avoiding delays due to busy maintenance units. This is achieved by concerted and coordinated derating of individual system performance, which increases reliability. Quantification is carried out by way of a convolution-based approach. The approach is tailored to fleets of ships, aeroplanes, spacecraft, and vehicles (trains, trams, buses, cars, trucks, etc.). Finally, the effectiveness of derating is validated using different criteria. The RAS concept makes sense if the average system output loss due to the lowered performance level (yielding longer time to failure) is smaller than the average loss due to waiting for maintenance in the non-adaptive case.

  1. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
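
    The first approach mentioned, computing reliability metrics per failure cause directly from field maintenance data, can be sketched as follows with an invented record list:

```python
# Per-cause reliability metrics (MTBF, downtime) from field records.
from collections import defaultdict

OPERATING_HOURS = 8760.0 * 3          # fleet exposure over three years
# (cause, repair hours) for each recorded failure event -- made up
events = [("IGBT", 12.0), ("fan", 3.0), ("control board", 6.0),
          ("IGBT", 10.0), ("fan", 2.5), ("capacitor", 8.0), ("fan", 4.0)]

by_cause = defaultdict(list)
for cause, hrs in events:
    by_cause[cause].append(hrs)

print(f"{'cause':<14} {'n':>3} {'MTBF [h]':>10} {'downtime [h]':>13}")
for cause, repairs in sorted(by_cause.items(), key=lambda kv: -len(kv[1])):
    n = len(repairs)
    print(f"{cause:<14} {n:>3} {OPERATING_HOURS / n:>10.0f} {sum(repairs):>13.1f}")
```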

  2. Modeling the reliability and maintenance costs of wind turbines using Weibull analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vachon, W.A. [W.A. Vachon & Associates, Inc., Manchester, MA (United States)

    1996-12-31

    A general description is provided of the basic mathematics and use of Weibull statistical models for modeling component failures and maintenance costs as a function of time. The applicability of the model to wind turbine components and subsystems is discussed with illustrative examples of typical component reliabilities drawn from actual field experiences. Example results indicate the dominant role of key subsystems based on a combination of their failure frequency and repair/replacement costs. The value of the model is discussed as a means of defining (1) maintenance practices, (2) areas in which to focus product improvements, (3) spare parts inventory, and (4) long-term trends in maintenance costs as an important element in project cash flow projections used by developers, investors, and lenders. 6 refs., 8 figs., 3 tabs.
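
    A minimal sketch of the Weibull-based cost view, modelling each subsystem as a power-law (Crow-AMSAA-style) failure process and combining failure frequency with repair cost; all parameters and costs are illustrative, not field data:

```python
# Expected failures E[N(t)] = (t/eta)**beta per subsystem, times repair
# cost, highlights which subsystems dominate long-term maintenance cost.
subsystems = {
    #            beta  eta [h]   repair cost [$]
    "gearbox":   (1.4, 40_000.0, 60_000),
    "generator": (1.2, 60_000.0, 25_000),
    "pitch":     (1.0, 15_000.0,  4_000),
    "yaw":       (0.9, 70_000.0,  6_000),
}

t = 20 * 8760.0 * 0.35          # 20 years at an assumed 35% duty
total = 0.0
for name, (beta, eta, cost) in subsystems.items():
    n_exp = (t / eta) ** beta   # expected number of failures to time t
    total += n_exp * cost
    print(f"{name:<10} expected failures {n_exp:5.2f}  cost ${n_exp * cost:10,.0f}")
print(f"{'total':<10} {'':>25} ${total:10,.0f}")
```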

  3. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik;

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two… provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences…

  4. Trust Your Cloud Service Provider: User Based Crypto Model. Sitanaboina

    Directory of Open Access Journals (Sweden)

    Sri Lakshmi Parvathi

    2014-10-01

    Full Text Available In a Data Storage as a Service (STaaS) cloud computing environment, the equipment used for business operations can be leased from a single service provider along with the application, and the related business data can be stored on equipment provided by the same service provider. This type of arrangement can help a company save on hardware and software infrastructure costs, but storing the company's data on the service provider's equipment raises the possibility that important business information may be improperly disclosed to others [1]. Some researchers have suggested that user data stored on a service provider's equipment must be encrypted [2]. Encrypting data prior to storage is a common method of data protection, and service providers may be able to build firewalls to ensure that the decryption keys associated with encrypted user data are not disclosed to outsiders. However, if the decryption key and the encrypted data are held by the same service provider, it raises the possibility that high-level administrators within the service provider would have access to both the decryption key and the encrypted data, thus presenting a risk for the unauthorized disclosure of the user data. In this paper we provide a unique business model of cryptography in which crypto keys are distributed across the user and a trusted third party (TTP); with the adoption of such a model, the CSP insider attack, a form of misuse of valuable user data, can be defended against.
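
    One simple way to realize the key-distribution idea, not necessarily the authors' exact protocol, is to XOR-split the data key between the user and the TTP so that the storage provider alone can never reconstruct it:

```python
# Split-key sketch: the AES key is XOR-split into a user share and a
# TTP share; neither share alone reveals anything about the key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = AESGCM.generate_key(bit_length=256)
user_share = os.urandom(32)
ttp_share = xor_bytes(key, user_share)       # key = user_share XOR ttp_share

# Encrypt before upload; the ciphertext goes to the CSP, shares elsewhere.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"confidential business record", None)

# Decryption requires cooperation of user and TTP to rebuild the key.
rebuilt = xor_bytes(user_share, ttp_share)
print(AESGCM(rebuilt).decrypt(nonce, ciphertext, None))
```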

  5. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals… are considered to be Gaussian. Conventional FORM analysis yields the linearization point of the idealized limit-state surface. A model correction factor is then introduced to push the idealized limit-state surface onto the actual limit-state surface. A few iterations yield a good approximation of the reliability… Keywords: reliability method; model correction factor method; Nataf field integration; non-Gaussian random field; random field integration; structural reliability; pile foundation reliability.
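
    The FORM step at the heart of the method is the HL-RF iteration; a sketch on a simple nonlinear limit state already expressed in standard normal space (not the paper's pile-foundation problem) follows:

```python
# HL-RF iteration for FORM on g(u) = 3 - u2 + 0.3*u1**2.
# Converges to the design point, giving beta and Pf ~ Phi(-beta).
import numpy as np
from scipy.stats import norm

def g(u):
    return 3.0 - u[1] + 0.3 * u[0] ** 2

def grad_g(u):
    return np.array([0.6 * u[0], -1.0])

u = np.zeros(2)
for _ in range(50):
    gr = grad_g(u)
    u_new = (gr @ u - g(u)) / (gr @ gr) * gr   # HL-RF update
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"design point {u},  beta = {beta:.3f},  Pf ~ {norm.cdf(-beta):.2e}")
```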

  6. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  7. Reliability of Summed Item Scores Using Structural Equation Modeling: An Alternative to Coefficient Alpha

    Science.gov (United States)

    Green, Samuel B.; Yang, Yanyun

    2009-01-01

    A method is presented for estimating reliability using structural equation modeling (SEM) that allows for nonlinearity between factors and item scores. Assuming the focus is on consistency of summed item scores, this method for estimating reliability is preferred to those based on linear SEM models and to the most commonly reported estimate of…

  8. A Cumulative Damage Reliability Model on the Basis of Contact Fatigue of the Rolling Bearing

    Institute of Scientific and Technical Information of China (English)

    HUANG Li

    2006-01-01

    A cumulative damage reliability model of rolling bearing contact fatigue corresponds more closely to actual operating conditions. It is put forward on the basis that the contact fatigue life of the rolling bearing follows a Weibull distribution, and it rests on the Miner cumulative damage theory. Finally, a case study predicting the reliability of a bearing roller using these models is given.
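
    The Miner accumulation at the core of the model is a one-liner; attaching Weibull life scatter to the damage fraction, as sketched below, is a rough illustration with invented load blocks:

```python
# Miner's rule D = sum(n_i / N_i), with a rough Weibull reliability
# attached by treating D as the normalized life fraction.
import math

# (cycles applied n_i, cycles to failure N_i at that contact stress)
blocks = [(2.0e6, 1.2e7), (5.0e5, 3.0e6), (1.0e5, 8.0e5)]

damage = sum(n / N for n, N in blocks)
print(f"Miner damage D = {damage:.3f}  (failure predicted at D >= 1)")

beta = 10.0 / 9.0                       # bearing-like Weibull shape
print(f"reliability at current damage ~ {math.exp(-damage ** beta):.3f}")
```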

  9. Reliability Modeling and Optimization Strategy for Manufacturing System Based on RQR Chain

    Directory of Open Access Journals (Sweden)

    Yihai He

    2015-01-01

    Full Text Available Accurate and dynamic reliability modeling of a running manufacturing system is the prerequisite for implementing preventive maintenance. However, existing studies could not output the reliability value in real time because they abandon the quality inspection data originating in the operation process of the manufacturing system. Therefore, this paper presents an approach to model manufacturing system reliability dynamically based on operational process-quality data and product reliability output data. Firstly, after explaining the importance of quality variation in the manufacturing process as the linkage between manufacturing system reliability and inherent product reliability, the RQR chain representing the relationships between them is put forward, and the product qualified probability is proposed to further quantify the impact of quality variation in the manufacturing process on manufacturing system reliability. Secondly, the impact of qualified probability on inherent product reliability is expounded, and a modeling approach for manufacturing system reliability based on the qualified probability is presented. Thirdly, a preventive maintenance optimization strategy for the manufacturing system, driven by the loss from manufacturing quality variation, is proposed. Finally, the validity of the proposed approach is verified by a reliability analysis and optimization example for an engine cover manufacturing system.

  10. Service Model for Multi-Provider IP Service Management

    Institute of Scientific and Technical Information of China (English)

    YU Cheng-zhi; SONG Han-tao; LIU Li

    2005-01-01

    In order to solve the problems associated with Internet IP services management, a generic service model for multi-provider IP service management is proposed, based on a generalization of the bandwidth broker idea introduced in the differentiated services (DiffServ) environment. This model consists of a hierarchy of service brokers, which makes it suitable for providing end-to-end Internet services with QoS support. A simple and scalable mechanism is used to communicate with other cooperative domains, enabling customers to dynamically set up service connections over multiple DiffServ domains. The simulation results show that the proposed model operates in real time and can deal with many flow requests in a short period, making it fit for service management in a reasonably large network.

  11. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2013-03-01

    Full Text Available Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project (PMIP2) model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land-sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally- and annually-averaged forcing anomaly is very weak at the mid-Holocene, and so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model-data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data-model comparison and missing climate feedbacks in the models are other possible sources of error.

  12. Implementation of a combined algorithm designed to increase the reliability of information systems: simulation modeling

    Science.gov (United States)

    Popov, A.; Zolotarev, V.; Bychkov, S.

    2016-11-01

    This paper examines the results of experimental studies of a previously presented combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies are provided. As part of the study, the experimental data from simulation modeling were compared with data from the functioning of a real information system. The hypothesis of the homogeneity of the logical structure of information systems was formulated, enabling the algorithm presented to be reconfigured, more specifically, transformed into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning. The algorithm can be used as a means of providing information security.

  13. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    Given the failure history up to time t-1, the reliability of the system is estimated from equation (2-11), where the estimates are the maximum-likelihood estimates of the model parameters α and β [1:954]. The remainder of the record consists of fragments of the environment's C source code, including global declarations and a Betacf() routine for the continued-fraction evaluation of the incomplete beta function.

  14. A new approach to real-time reliability analysis of transmission system using fuzzy Markov model

    Energy Technology Data Exchange (ETDEWEB)

    Tanrioven, M.; Kocatepe, C. [University of Yildiz Technical, Istanbul (Turkey). Dept. of Electrical Engineering; Wu, Q.H.; Turner, D.R.; Wang, J. [Liverpool Univ. (United Kingdom). Dept. of Electrical Engineering and Economics

    2004-12-01

    To date the studies of power system reliability over a specified time period have used average values of the system transition rates in Markov techniques. [Singh C, Billinton R. System reliability modeling and evaluation. London: Hutchison Educational; 1977]. However, the level of power systems reliability varies from time to time due to weather conditions, power demand and random faults [Billinton R, Wojczynski E. Distributional variation of distribution system reliability indices. IEEE Trans Power Apparatus Systems 1985; PAS-104(11):3152-60]. It is essential to obtain an estimate of system reliability under all environmental and operating conditions. In this paper, fuzzy logic is used in the Markov model to describe both transition rates and temperature-based seasonal variations, which identifies multiple weather conditions such as normal, less stormy, very stormy, etc. A three-bus power system model is considered to determine the variation of system reliability in real-time, using this newly developed fuzzy Markov model (FMM). The results cover different aspects such as daily and monthly reliability changes during January and August. The reliability of the power transmission system is derived as a function of augmentation in peak load level. Finally the variation of the system reliability with weather conditions is determined. (author)
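
    A toy version of the weather-dependent idea: fuzzy memberships over weather classes blend the steady-state availabilities of a two-state Markov component whose failure rate scales with storm severity. Rates, scaling factors and membership shapes are invented:

```python
# Fuzzy-blended two-state (up/down) Markov availability under multiple
# weather conditions: A = mu / (mu + lam) per class, weighted by membership.
LAM_BASE, MU = 0.002, 0.5          # failures/h in normal weather, repairs/h
WEATHER_FACTOR = {"normal": 1.0, "less stormy": 5.0, "very stormy": 25.0}

def memberships(wind_speed):
    """Crude triangular-style memberships on wind speed [m/s]."""
    m_norm = max(0.0, min(1.0, (15.0 - wind_speed) / 10.0))
    m_very = max(0.0, min(1.0, (wind_speed - 15.0) / 10.0))
    m_less = max(0.0, 1.0 - m_norm - m_very)
    return {"normal": m_norm, "less stormy": m_less, "very stormy": m_very}

def availability(wind_speed):
    ms = memberships(wind_speed)
    total = sum(ms.values())
    a = 0.0
    for w, m in ms.items():
        lam = LAM_BASE * WEATHER_FACTOR[w]
        a += (m / total) * (MU / (MU + lam))   # steady-state availability
    return a

for v in (5.0, 14.0, 22.0):
    print(f"wind {v:4.1f} m/s -> availability {availability(v):.5f}")
```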

  15. Using PoF models to predict system reliability considering failure collaboration

    Directory of Open Access Journals (Sweden)

    Zhiguo Zeng

    2016-10-01

    Full Text Available Existing Physics-of-Failure-based (PoF-based) system reliability prediction methods are grounded on the independence assumption, which overlooks the dependency among the components. In this paper, a new type of dependency, referred to as failure collaboration, is introduced and considered in reliability predictions. A PoF-based model is developed to describe the failure behavior of systems subject to failure collaboration. Based on the developed model, the Bisection-based Reliability Analysis Method (BRAM) is exploited to calculate the system reliability. The developed methods are applied to predicting the reliability of a Hydraulic Servo Actuator (HSA). The results demonstrate that the developed methods outperform the traditional PoF-based reliability prediction methods when applied to systems subject to failure collaboration.

  16. Stochastic modeling for reliability shocks, burn-in and heterogeneous populations

    CERN Document Server

    Finkelstein, Maxim

    2013-01-01

    Focusing on shocks modeling, burn-in and heterogeneous populations, Stochastic Modeling for Reliability naturally combines these three topics in the unified stochastic framework and presents numerous practical examples that illustrate recent theoretical findings of the authors.  The populations of manufactured items in industry are usually heterogeneous. However, the conventional reliability analysis is performed under the implicit assumption of homogeneity, which can result in distortion of the corresponding reliability indices and various misconceptions. Stochastic Modeling for Reliability fills this gap and presents the basics and further developments of reliability theory for heterogeneous populations. Specifically, the authors consider burn-in as a method of elimination of ‘weak’ items from heterogeneous populations. The real life objects are operating in a changing environment. One of the ways to model an impact of this environment is via the external shocks occurring in accordance with some stocha...

  17. RELY: A reliability modeling system for analysis of sodium-sulfur battery configurations

    Energy Technology Data Exchange (ETDEWEB)

    Hostick, C.J.; Huber, H.D.; Doggett, W.H.; Dirks, J.A.; Dovey, J.F.; Grinde, R.B.; Littlefield, J.S.; Cuta, F.M.

    1987-06-01

    In support of the Office of Energy Storage and Distribution of the US Department of Energy (DOE), Pacific Northwest Laboratory has produced a microcomputer-based software package, called RELY, to assess the impact of sodium-sulfur cell reliability on constant current discharge battery performance. The Fortran-based software operates on IBM microcomputers and IBM-compatibles that have a minimum of 512K of internal memory. The software package has three models that provide the following: (1) a description of the failure distribution parameters used to model cell failure, (2) a Monte Carlo simulation of battery life, and (3) a detailed discharge model for a user-specified battery discharge cycle. 6 refs., 31 figs., 4 tabs.

  18. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    Science.gov (United States)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  19. Screen for child anxiety related emotional disorders: are subscale scores reliable? A bifactor model analysis.

    Science.gov (United States)

    DeSousa, Diogo Araújo; Zibetti, Murilo Ricardo; Trentini, Clarissa Marceli; Koller, Silvia Helena; Manfro, Gisele Gus; Salum, Giovanni Abrahão

    2014-12-01

    The aim of this study was to investigate the utility of creating and scoring subscales for the self-report version of the Screen for Child Anxiety Related Emotional Disorders (SCARED) by examining whether subscale scores provide reliable information after accounting for a general anxiety factor in a bifactor model analysis. A total of 2420 children aged 9-18 answered the SCARED in their schools. Results suggested adequate fit of the bifactor model. The SCARED score variance was hardly influenced by the specific domains after controlling for the common variance in the general factor. The explained common variance (ECV) for the general factor was large (63.96%). After accounting for the general total score (ωh = .83), subscale scores provided very little reliable information (ωh ranged from .005 to .04). Practitioners who use the SCARED should be careful when scoring and interpreting the instrument's subscales, since they carry more common variance than specific variance.
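
    The two quantities the study reports, hierarchical omega and explained common variance, can be computed directly from a bifactor loading matrix; the loadings below are made up for illustration:

```python
# Hierarchical omega (general-factor reliability of the total score),
# total omega, and ECV for a small orthogonal bifactor structure.
import numpy as np

# rows = items; col 0 = general factor, cols 1-2 = specific factors
loadings = np.array([
    [0.65, 0.35, 0.00],
    [0.60, 0.40, 0.00],
    [0.55, 0.30, 0.00],
    [0.70, 0.00, 0.25],
    [0.62, 0.00, 0.30],
    [0.58, 0.00, 0.35],
])
uniqueness = 1.0 - (loadings ** 2).sum(axis=1)        # item error variances

gen = loadings[:, 0]
var_total = (loadings.sum(axis=0) ** 2).sum() + uniqueness.sum()
omega_h = gen.sum() ** 2 / var_total                  # hierarchical omega
omega_total = (var_total - uniqueness.sum()) / var_total
ecv = (gen ** 2).sum() / (loadings ** 2).sum()        # share of common variance

print(f"omega_h = {omega_h:.2f}, omega_total = {omega_total:.2f}, ECV = {ecv:.2f}")
```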

  20. Design and analysis of the reliability of on-board computer system based on Markov-model

    Institute of Scientific and Technical Information of China (English)

    MA Xiu-juan; CAO Xi-bin; ZHAO Guo-liang

    2005-01-01

    An on-board computer system should have such advantages as light weight, small volume and low power consumption to meet the demands of micro-satellites. This paper, based on the specific characteristics of the Stereo Mapping Micro-Satellite (SMMS), describes an on-board computer system whose advantage is having centralized and distributed control in the same system, and analyzes its reliability based on a Markov model in order to provide a theoretical foundation for a reliable design. The on-board computer system has been put into use in the principle prototype model of the Stereo Mapping Micro-Satellite and has already been debugged. All indices meet the requirements of the design.

  1. Model of Providing Assistive Technologies in Special Education Schools.

    Science.gov (United States)

    Lersilp, Suchitporn; Putthinoi, Supawadee; Chakpitak, Nopasit

    2015-05-14

    Most students diagnosed with disabilities in Thai special education schools received assistive technologies, but this did not guarantee the greatest benefits. The purpose of this study was to survey the provision, use and needs of assistive technologies, as well as the perspectives of key informants regarding a model of providing them in special education schools. The participants were selected by the purposive sampling method, and they comprised 120 students with visual, physical, hearing or intellectual disabilities from four special education schools in Chiang Mai, Thailand; and 24 key informants such as parents or caregivers, teachers, school principals and school therapists. The instruments consisted of an assistive technology checklist and a semi-structured interview. Results showed that a category of assistive technologies was provided for students with disabilities, with the highest being "services", followed by "media" and then "facilities". Furthermore, mostly students with physical disabilities were provided with assistive technologies, but those with visual disabilities needed it more. Finally, the model of providing assistive technologies was composed of 5 components: Collaboration; Holistic perspective; Independent management of schools; Learning systems and a production manual for users; and Development of an assistive technology center, driven by 3 major sources such as Government and Private organizations, and Schools.

  2. MEMS reliability

    CERN Document Server

    Hartzell, Allyson L; Shea, Herbert R

    2010-01-01

    This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMS for reliability and provides detailed information on the different types of failure modes and how to avoid them.

  3. Analysis and Application of Mechanical System Reliability Model Based on Copula Function

    Directory of Open Access Journals (Sweden)

    An Hai

    2016-10-01

    Full Text Available There are complicated correlations in mechanical systems. Exploiting the advantages of copula functions for handling such correlations, this paper proposes a mechanical system reliability model based on copula functions, studies the serial and parallel mechanical system models in detail, and derives their reliability functions. Finally, an application study of the serial mechanical system reliability model is carried out to prove its validity by example. Using copula theory for mechanical system reliability modeling, studying the distributions of the random variables (the marginal distributions of the mechanical product's life) and the dependence structure of the variables separately, can reduce the difficulty of multivariate probabilistic modeling and analysis and make the modeling and analysis process clearer.
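
    For the series case, the joint survival of two dependent components can be obtained by applying a copula to the marginal reliabilities. A minimal sketch with assumed exponential lifetimes and a Gumbel copula follows; theta is the dependence parameter, and theta = 1 recovers the independent product R1*R2, while larger theta (positive dependence) raises the series reliability above it.

        import numpy as np

        def gumbel_copula(u, v, theta):
            # Gumbel copula C(u, v); theta >= 1, theta = 1 is independence.
            return np.exp(-(((-np.log(u))**theta
                             + (-np.log(v))**theta)**(1.0/theta)))

        # Assumed exponential component reliabilities R_i(t) = exp(-lambda_i t).
        lam1, lam2, t = 2e-4, 3e-4, 1000.0
        R1, R2 = np.exp(-lam1*t), np.exp(-lam2*t)

        # Series system: both components must survive.
        for theta in (1.0, 1.5, 3.0):
            print(f"theta = {theta}: R_series = {gumbel_copula(R1, R2, theta):.4f}")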

  4. Modelling catchment areas for secondary care providers: a case study.

    Science.gov (United States)

    Jones, Simon; Wardlaw, Jessica; Crouch, Susan; Carolan, Michelle

    2011-09-01

    Hospitals need to understand patient flows in an increasingly competitive health economy. New initiatives like Patient Choice and the Darzi Review further increase this demand. Essential to understanding patient flows are demographic and geographic profiles of health care service providers, known as 'catchment areas' and 'catchment populations'. This information helps Primary Care Trusts (PCTs) to review how their populations are accessing services, measure inequalities and commission services; likewise it assists Secondary Care Providers (SCPs) to measure and assess potential gains in market share, redesign services, evaluate admission thresholds and plan financial budgets. Unlike PCTs, SCPs do not operate within fixed geographic boundaries. Traditionally, SCPs have used administrative boundaries or arbitrary drive times to model catchment areas. Neither approach satisfactorily represents current patient flows. Furthermore, these techniques are time-consuming and can be challenging for healthcare managers to exploit. This paper presents three different approaches to define catchment areas, each more detailed than the previous method. The first approach 'First Past the Post' defines catchment areas by allocating a dominant SCP to each Census Output Area (OA). The SCP with the highest proportion of activity within each OA is considered the dominant SCP. The second approach 'Proportional Flow' allocates activity proportionally to each OA. This approach allows for cross-boundary flows to be captured in a catchment area. The third and final approach uses a gravity model to define a catchment area, which incorporates drive or travel time into the analysis. Comparing approaches helps healthcare providers to understand whether using more traditional and simplistic approaches to define catchment areas and populations achieves the same or similar results as complex mathematical modelling. This paper has demonstrated, using a case study of Manchester, that when estimating
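
    The first two approaches amount to simple bookkeeping over patient-flow counts; the gravity model would additionally weight each flow by a decreasing function of travel time. A minimal sketch with invented output areas and providers (OA1/OA2 and HospA/HospB are placeholders, not real identifiers):

        from collections import defaultdict

        # Hypothetical admission counts per (output area, provider).
        activity = {("OA1", "HospA"): 120, ("OA1", "HospB"): 80,
                    ("OA2", "HospA"): 10,  ("OA2", "HospB"): 90}

        by_oa = defaultdict(dict)
        for (oa, provider), n in activity.items():
            by_oa[oa][provider] = n

        for oa, flows in sorted(by_oa.items()):
            total = sum(flows.values())
            dominant = max(flows, key=flows.get)   # 'First Past the Post'
            shares = {p: round(n / total, 2) for p, n in flows.items()}
            print(oa, "FPTP ->", dominant, "| proportional flow:", shares)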

  5. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.

  6. Biopsy Specimens Obtained 7 Days After Starting Chemoradiotherapy (CRT) Provide Reliable Predictors of Response to CRT for Rectal Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Toshiyuki [Department of Surgery, Tokai University School of Medicine, Kanagawa (Japan); Sadahiro, Sotaro, E-mail: sadahiro@is.icc.u-tokai.ac.jp [Department of Surgery, Tokai University School of Medicine, Kanagawa (Japan); Tanaka, Akira; Okada, Kazutake; Kamata, Hiroko; Kamijo, Akemi [Department of Surgery, Tokai University School of Medicine, Kanagawa (Japan); Murayama, Chieko [Department of Clinical Pharmacology, Tokai University School of Medicine, Kanagawa (Japan); Akiba, Takeshi; Kawada, Shuichi [Department of Radiology, Tokai University School of Medicine, Kanagawa (Japan)

    2013-04-01

    Purpose: Preoperative chemoradiation therapy (CRT) significantly decreases local recurrence in locally advanced rectal cancer. Various biomarkers in biopsy specimens obtained before CRT have been proposed as predictors of response. However, reliable biomarkers remain to be established. Methods and Materials: The study group comprised 101 consecutive patients with locally advanced rectal cancer who received preoperative CRT with oral uracil/tegafur (UFT) or S-1. We evaluated histologic findings on hematoxylin and eosin (H and E) staining and immunohistochemical expressions of Ki67, p53, p21, and apoptosis in biopsy specimens obtained before CRT and 7 days after starting CRT. These findings were contrasted with the histologic response and the degree of tumor shrinkage. Results: In biopsy specimens obtained before CRT, histologic marked regression according to the Japanese Classification of Colorectal Carcinoma (JCCC) criteria and the degree of tumor shrinkage on barium enema examination (BE) were significantly greater in patients with p21-positive tumors than in those with p21-negative tumors (P=.04 and P<.01, respectively). In biopsy specimens obtained 7 days after starting CRT, pathologic complete response, histologic marked regression according to both the tumor regression criteria and JCCC criteria, and T downstaging were significantly greater in patients with apoptosis-positive and p21-positive tumors than in those with apoptosis-negative (P<.01, P=.02, P=.01, and P<.01, respectively) or p21-negative tumors (P=.03, P<.01, P<.01, and P=.02, respectively). The degree of tumor shrinkage on both BE as well as MRI was significantly greater in patients with apoptosis-positive and with p21-positive tumors than in those with apoptosis-negative or p21-negative tumors, respectively. Histologic changes in H and E-stained biopsy specimens 7 days after starting CRT significantly correlated with pathologic complete response and marked regression on both JCCC and tumor

  7. Young Children's Selective Learning of Rule Games from Reliable and Unreliable Models

    Science.gov (United States)

    Rakoczy, Hannes; Warneken, Felix; Tomasello, Michael

    2009-01-01

    We investigated preschoolers' selective learning from models that had previously appeared to be reliable or unreliable. Replicating previous research, children from 4 years selectively learned novel words from reliable over unreliable speakers. Extending previous research, children also selectively learned other kinds of acts--novel games--from…

  8. Reliability of travel times to groundwater abstraction wells: Application of the Netherlands Groundwater Model - LGM

    NARCIS (Netherlands)

    Kovar K; Leijnse A; Uffink G; Pastoors MJH; Mulschlegel JHC; Zaadnoordijk WJ; LDL; IMD; TNO/NITG; Haskoning

    2005-01-01

    A modelling approach was developed, incorporated in the finite-element method based program LGMLUC, making it possible to determine the reliability of travel times of groundwater flowing to groundwater abstraction sites. The reliability is seen here as a band (zone) around the expected travel-time i

  9. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, relevant design variables are identified, and an optimal system reliability formulation is presented. An illustrative example is given.

  10. Probabilistic Approach to System Reliability of Mechanism with Correlated Failure Models

    Directory of Open Access Journals (Sweden)

    Xianzhen Huang

    2012-01-01

    Full Text Available In this paper, based on the kinematic accuracy theory and matrix-based system reliability analysis method, a practical method for system reliability analysis of the kinematic performance of planar linkages with correlated failure modes is proposed. The Taylor series expansion is utilized to derive a general expression of the kinematic performance errors caused by random variables. A proper limit state function (performance function for reliability analysis of the kinematic performance of planar linkages is established. Through the reliability theory and the linear programming method the upper and lower bounds of the system reliability of planar linkages are provided. In the course of system reliability analysis, the correlation of different failure modes is considered. Finally, the practicality, efficiency, and accuracy of the proposed method are shown by a numerical example.

  11. Wind Farm Reliability Modelling Using Bayesian Networks and Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Robert Adam Sobolewski

    2015-09-01

    Full Text Available Technical reliability plays an important role among factors affecting the power output of a wind farm. The reliability is determined by an internal collection grid topology and reliability of its electrical components, e.g. generators, transformers, cables, switch breakers, protective relays, and busbars. A wind farm reliability’s quantitative measure can be the probability distribution of combinations of operating and failed states of the farm’s wind turbines. The operating state of a wind turbine is its ability to generate power and to transfer it to an external power grid, which means the availability of the wind turbine and other equipment necessary for the power transfer to the external grid. This measure can be used for quantitative analysis of the impact of various wind farm topologies and the reliability of individual farm components on the farm reliability, and for determining the expected farm output power with consideration of the reliability. This knowledge may be useful in an analysis of power generation reliability in power systems. The paper presents probabilistic models that quantify the wind farm reliability taking into account the above-mentioned technical factors. To formulate the reliability models Bayesian networks and semi-Markov processes were used. Using Bayesian networks the wind farm structural reliability was mapped, as well as quantitative characteristics describing equipment reliability. To determine the characteristics semi-Markov processes were used. The paper presents an example calculation of: (i) probability distribution of the combination of both operating and failed states of four wind turbines included in the wind farm, and (ii) expected wind farm output power with consideration of its reliability.
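
    When a steady-state availability per turbine is known and turbine states are treated as independent, the state-combination distribution and the expected farm output follow by enumeration; this is a simplification of the paper's Bayesian-network model, which additionally captures collection-grid topology. All numbers below are assumed.

        from itertools import product

        p, rated, n_turbines = 0.95, 2.0, 4      # availability, MW, count (assumed)

        dist = {}
        for states in product([0, 1], repeat=n_turbines):   # 1 = operating
            prob = 1.0
            for s in states:
                prob *= p if s else (1 - p)
            k = sum(states)
            dist[k] = dist.get(k, 0.0) + prob

        for k in sorted(dist):
            print(f"{k} turbines up: P = {dist[k]:.5f}")
        expected = sum(k * rated * pr for k, pr in dist.items())
        print(f"expected available capacity = {expected:.2f} MW")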

  12. AN IMPROVED FUZZY MODEL TO PREDICT SOFTWARE RELIABILITY

    Directory of Open Access Journals (Sweden)

    Deepika Chawla

    2012-09-01

    Full Text Available Software faults are one of the major criteria for estimating software quality or software reliability. A number of metrics have been defined that use software faults to estimate software quality. But when we have a large software system with thousands of class modules, it is not easy to apply software metrics to each module of the system. The present work addresses this problem. In this work, software quality is estimated by applying a rejection method to software faults. The rejection method is applied on the basis of fuzzy logic in a software system. To perform the analysis effectively, a weightage approach is used on the software faults: different weightages are assigned to categorize the faults with respect to fault criticality and frequency. Once the faults are categorized, the next step is the application of the proposed method to the software faults to identify the accepted and rejected modules of the software system. The obtained results give a better visualization of software quality in the case of software fault analysis.

  13. Effective confidence interval estimation of fault-detection process of software reliability growth models

    Science.gov (United States)

    Fang, Chih-Chiang; Yeh, Chun-Wu

    2016-09-01

    The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection. This provides helpful information to software developers and testers when undertaking software development and software quality control. However, the variance estimation of software fault detection is not explained transparently in previous studies, and this affects the derivation of the confidence interval for the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data-sets to show its flexibility.
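
    For intuition, a confidence band can be written down immediately for the classical Goel-Okumoto NHPP, in which the fault count N(t) is Poisson with mean m(t) = a(1 - e^(-bt)) and hence variance m(t); the paper's stochastic-differential-equation treatment refines exactly this variance step. The parameters below are assumed, not fitted to any data-set:

        import numpy as np

        a, b = 120.0, 0.05            # total faults and detection rate (assumed)
        for t in (10, 30, 60, 120):   # testing-time units
            m = a * (1.0 - np.exp(-b * t))       # mean value function
            half = 1.96 * np.sqrt(m)             # normal approx. of Poisson spread
            print(f"t = {t:3d}: m = {m:6.1f}, "
                  f"95% band = [{max(m - half, 0):.1f}, {m + half:.1f}]")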

  14. Powering stochastic reliability models by discrete event simulation

    DEFF Research Database (Denmark)

    Kozine, Igor; Wang, Xiaoyun

    2012-01-01

    it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software enable to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...
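
    A discrete-event simulation avoids enumerating the Markov state space by sampling event times directly. Below is a minimal sketch for a single repairable component with exponential failure and repair times, checked against the analytic availability mu/(lam + mu); the rates and horizon are assumed values.

        import random

        random.seed(1)
        lam, mu, horizon = 1e-3, 1e-2, 1e7   # failure rate, repair rate, hours
        t, up_time, up = 0.0, 0.0, True
        while t < horizon:
            dwell = min(random.expovariate(lam if up else mu), horizon - t)
            if up:
                up_time += dwell
            t += dwell
            up = not up                       # alternate failure/repair events

        print(f"simulated availability = {up_time / horizon:.4f}")
        print(f"analytic  availability = {mu / (lam + mu):.4f}")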

  15. Fatigue Reliability and Effective Turbulence Models in Wind Farms

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Frandsen, S.; Tarp-Johansen, N.J.

    2007-01-01

    behind wind turbines can imply a significant reduction in the fatigue lifetime of wind turbines placed in wakes. In this paper the design code model in the wind turbine code IEC 61400-1 (2005) is evaluated from a probabilistic point of view, including the importance of modeling the SN-curve by linear...

  16. Provider practice models in ambulatory oncology practice: analysis of productivity, revenue, and provider and patient satisfaction.

    Science.gov (United States)

    Buswell, Lori A; Ponte, Patricia Reid; Shulman, Lawrence N

    2009-07-01

    Physicians, nurse practitioners, and physician assistants often work in teams to deliver cancer care in ambulatory oncology practices. This is likely to become more prevalent as the demand for oncology services rises, and the number of providers increases only slightly.

  17. NERF - A Computer Program for the Numerical Evaluation of Reliability Functions - Reliability Modelling, Numerical Methods and Program Documentation,

    Science.gov (United States)

    1983-09-01

    NERF is designed to evaluate the reliability functions that result from the application of reliability analysis to the fatigue of aircraft structures.

  18. Reliability of Current Biokinetic and Dosimetric Models for Radionuclides: A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Leggett, Richard Wayne [ORNL; Eckerman, Keith F [ORNL; Meck, Robert A. [U.S. Nuclear Regulatory Commission

    2008-10-01

    This report describes the results of a pilot study of the reliability of the biokinetic and dosimetric models currently used by the U.S. Nuclear Regulatory Commission (NRC) as predictors of dose per unit internal or external exposure to radionuclides. The study examines the feasibility of critically evaluating the accuracy of these models for a comprehensive set of radionuclides of concern to the NRC. Each critical evaluation would include: identification of discrepancies between the models and current databases; characterization of uncertainties in model predictions of dose per unit intake or unit external exposure; characterization of variability in dose per unit intake or unit external exposure; and evaluation of prospects for development of more accurate models. Uncertainty refers here to the level of knowledge of a central value for a population, and variability refers to quantitative differences between different members of a population. This pilot study provides a critical assessment of models for selected radionuclides representing different levels of knowledge of dose per unit exposure. The main conclusions of this study are as follows: (1) To optimize the use of available NRC resources, the full study should focus on radionuclides most frequently encountered in the workplace or environment. A list of 50 radionuclides is proposed. (2) The reliability of a dose coefficient for inhalation or ingestion of a radionuclide (i.e., an estimate of dose per unit intake) may depend strongly on the specific application. Multiple characterizations of the uncertainty in a dose coefficient for inhalation or ingestion of a radionuclide may be needed for different forms of the radionuclide and different levels of information of that form available to the dose analyst. (3) A meaningful characterization of variability in dose per unit intake of a radionuclide requires detailed information on the biokinetics of the radionuclide and hence is not feasible for many infrequently

  19. Ex vivo normothermic machine perfusion is safe, simple, and reliable: results from a large animal model.

    Science.gov (United States)

    Nassar, Ahmed; Liu, Qiang; Farias, Kevin; D'Amico, Giuseppe; Tom, Cynthia; Grady, Patrick; Bennett, Ana; Diago Uso, Teresa; Eghtesad, Bijan; Kelly, Dympna; Fung, John; Abu-Elmagd, Kareem; Miller, Charles; Quintini, Cristiano

    2015-02-01

    Normothermic machine perfusion (NMP) is an emerging preservation modality that holds the potential to prevent the injury associated with low temperature and to promote organ repair that follows ischemic cell damage. While several animal studies have shown its superiority over cold storage (CS), few studies in the literature have focused on the safety, feasibility, and reliability of this technology, which represent key factors in its implementation into clinical practice. The aim of the present study is to report safety and performance data on NMP of DCD porcine livers. After 60 minutes of warm ischemia time, 20 pig livers were preserved using either NMP (n = 15; physiologic perfusion temperature) or CS (n = 5) for a preservation time of 10 hours. Livers were then tested on a transplant simulation model for 24 hours. Machine safety was assessed by measuring system failure events, the ability to monitor perfusion parameters, sterility, and vessel integrity. The ability of the machine to preserve injured organs was assessed by liver function tests, hemodynamic parameters, and histology. No system failures were recorded. Target hemodynamic parameters were easily achieved and vascular complications were not encountered. Liver function parameters as well as histology showed significant differences between the 2 groups, with NMP livers showing preserved liver function and histological architecture, while CS livers presented postreperfusion parameters consistent with unrecoverable cell injury. Our study shows that NMP is safe and reliable, and provides superior graft preservation compared to CS in our DCD porcine model.

  20. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment shared with humans. Reliability of localization is highly dependent on the developer's experience because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented with a focus on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
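
    A common starting point for question (1), in the spirit of the classic range-sensor beam model, mixes a Gaussian centred on the map-predicted range with a uniform term that absorbs unexplained readings such as passers-by. The weights and spread below are tuning assumptions, exactly the kind of design choice the paper's guidelines address.

        import math

        Z_MAX = 10.0                           # sensor maximum range, metres
        W_HIT, W_RAND, SIGMA = 0.9, 0.1, 0.15  # mixture weights and spread (assumed)

        def beam_likelihood(z_measured, z_expected):
            # Gaussian 'hit' component plus uniform 'random reading' component.
            gauss = (math.exp(-0.5 * ((z_measured - z_expected) / SIGMA) ** 2)
                     / (SIGMA * math.sqrt(2.0 * math.pi)))
            return W_HIT * gauss + W_RAND / Z_MAX

        # A pose whose predicted range matches the measurement scores higher:
        print(beam_likelihood(4.05, 4.0))      # good match
        print(beam_likelihood(7.00, 4.0))      # outlier, uniform term dominates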

  1. Digital Avionics Information System (DAIS): Reliability and Maintainability Model. Final Report.

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    The reliability and maintainability (R&M) model described in this report represents an important portion of a larger effort called the Digital Avionics Information System (DAIS) Life Cycle Cost (LCC) Study. The R&M model is the first of three models that comprise a modeling system for use in LCC analysis of avionics systems. The total…

  2. Reliability analysis of diesel engine crankshaft based on 2D stress strength interference model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A 2D stress-strength interference model (2D-SSIM) is put forward, based on the observation that the fatigue reliability of engineering structural components is closely related to the load asymmetry ratio and its variability. The principle, geometric schematic and limit state equation of this model are presented. A reliability evaluation of a diesel engine crankshaft was made based on this theory, employing multi-axial loading fatigue criteria. Because additional important factors, i.e. the stress asymmetry ratio and its variability, are considered, the model can theoretically evaluate structural component reliability more accurately than the traditional interference model. Correspondingly, a Monte Carlo simulation solution is also given. The computation suggests that this model yields a satisfactory reliability evaluation.
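
    The underlying interference calculation, R = P(strength > stress), is easy to reproduce; the paper's 2D extension additionally randomizes the stress asymmetry ratio, which this sketch omits. With assumed independent normal marginals, the Monte Carlo estimate can be checked against the closed form R = Phi((mu_S - mu_s) / sqrt(sigma_S^2 + sigma_s^2)).

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n = 1_000_000
        strength = rng.normal(600.0, 40.0, n)     # MPa (assumed moments)
        stress = rng.normal(450.0, 50.0, n)
        R_mc = np.mean(strength > stress)

        # Closed form for independent normal stress and strength:
        R_exact = norm.cdf((600.0 - 450.0) / np.hypot(40.0, 50.0))
        print(f"Monte Carlo R = {R_mc:.5f}, closed form R = {R_exact:.5f}")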

  3. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  4. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    National Research Council Canada - National Science Library

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g...

  5. Can quantum probability provide a new direction for cognitive modeling?

    Science.gov (United States)

    Pothos, Emmanuel M; Busemeyer, Jerome R

    2013-06-01

    Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.

  6. Line Transect and Triangulation Surveys Provide Reliable Estimates of the Density of Kloss' Gibbons (Hylobates klossii) on Siberut Island, Indonesia.

    Science.gov (United States)

    Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias

    2013-02-01

    Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line transect distance sampling and triangulation. Line transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km². Triangulation conducted from 12 listening posts during the same period revealed a similar density estimate of 5.0 clusters/km². Coefficients of variation of cluster density estimates were slightly higher from triangulation (0.24) than from line transects (0.17), resulting in lower precision for detecting changes in cluster densities with triangulation; nevertheless, the triangulation method may also be appropriate.
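
    A conventional line-transect estimate divides the encounter count by twice the transect length times an effective strip width (ESW) derived from a fitted detection function. The sketch below assumes a half-normal detection function, for which ESW = sqrt(pi/2) * sigma; the scale parameter is invented, so the resulting density is illustrative rather than the study's estimate.

        import math

        n_clusters = 101        # detected clusters (from the abstract)
        L_km = 155.5            # total transect length (from the abstract)
        sigma_km = 0.05         # half-normal scale, assumed for illustration

        esw = math.sqrt(math.pi / 2.0) * sigma_km    # effective strip width
        density = n_clusters / (2.0 * L_km * esw)    # D = n / (2 L esw)
        print(f"estimated density = {density:.2f} clusters/km^2")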

  7. Thiamine primed defense provides reliable alternative to systemic fungicide carbendazim against sheath blight disease in rice (Oryza sativa L.).

    Science.gov (United States)

    Bahuguna, Rajeev Nayan; Joshi, Rohit; Shukla, Alok; Pandey, Mayank; Kumar, J

    2012-08-01

    A novel pathogen defense strategy by thiamine priming was evaluated for its efficacy against sheath blight pathogen, Rhizoctonia solani AG-1A, of rice and compared with that of systemic fungicide, carbendazim (BCM). Seeds of semidwarf, high yielding, basmati rice variety Vasumati were treated with thiamine (50 mM) and BCM (4 mM). The pot cultured plants were challenge inoculated with R. solani after 40 days of sowing and effect of thiamine and BCM on rice growth and yield traits was examined. Higher hydrogen peroxide content, total phenolics accumulation, phenylalanine ammonia lyase (PAL) activity and superoxide dismutase (SOD) activity under thiamine treatment displayed elevated level of systemic resistance, which was further augmented under challenging pathogen infection. High transcript level of phenylalanine ammonia lyase (PAL) and manganese superoxide dismutase (MnSOD) validated mode of thiamine primed defense. Though minimum disease severity was observed under BCM treatment, thiamine produced comparable results, with 18.12 per cent lower efficacy. Along with fortifying defense components and minor influence on photosynthetic pigments and nitrate reductase (NR) activity, thiamine treatment significantly reduced pathogen-induced loss in photosynthesis, stomatal conductance, chlorophyll fluorescence, NR activity and NR transcript level. Physiological traits affected under pathogen infection were found signatory for characterizing plant's response under disease and were detectable at early stage of infection. These findings provide a novel paradigm for developing alternative, environmentally safe strategies to control plant diseases.

  8. Reliability-cost models for the power switching devices of wind power converters

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    In order to satisfy the growing reliability requirements for wind power converters with a more cost-effective solution, the target of this paper is to establish a new reliability-cost model which can connect reliability performance with the corresponding semiconductor cost for power switching devices. First the conduction loss, switching loss and thermal impedance models of power switching devices (IGBT modules) are each related to the semiconductor chip number information. Afterwards simplified analytical solutions, which can directly extract the junction

  9. Microstructural Modeling of Brittle Materials for Enhanced Performance and Reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Teague, Melissa Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rodgers, Theron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grutzik, Scott Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    Brittle failure is often influenced by difficult-to-measure and variable microstructure-scale stresses. Recent advances in photoluminescence spectroscopy (PLS), including improved confocal laser measurement and rapid spectroscopic data collection, have established the potential to map stresses with microscale spatial resolution (<2 microns). Advanced PLS was successfully used to investigate both residual and externally applied stresses in polycrystalline alumina at the microstructure scale. The measured average stresses matched those estimated from beam theory to within one standard deviation, validating the technique. Modeling the residual stresses within the microstructure produced general agreement in comparison with the experimentally measured results. Microstructure scale modeling is primed to take advantage of advanced PLS to enable its refinement and validation, eventually enabling microstructure modeling to become a predictive tool for brittle materials.

  10. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  11. Governance, Government, and the Search for New Provider Models.

    Science.gov (United States)

    Saltman, Richard B; Duran, Antonio

    2015-11-03

    A central problem in designing effective models of provider governance in health systems has been to ensure an appropriate balance between the concerns of public sector and/or government decision-makers, on the one hand, and of non-governmental health services actors in civil society and private life, on the other. In tax-funded European health systems up to the 1980s, the state and other public sector decision-makers played a dominant role over health service provision, typically operating hospitals through national or regional governments on a command-and-control basis. In a number of countries, however, this state role has started to change, with governments first stepping out of direct service provision and now de facto pushed to focus more on steering provider organizations rather than on direct public management. In this new approach to provider governance, the state has pulled back into a regulatory role that introduces market-like incentives and management structures, which then apply to both public and private sector providers alike. This article examines some of the main operational complexities in implementing this new governance reality/strategy, specifically from a service provision (as opposed to mostly a financing or even regulatory) perspective. After briefly reviewing some of the key theoretical dilemmas, the paper presents two case studies where this new approach was put into practice: primary care in Sweden and hospitals in Spain. The article concludes that good governance today needs to reflect practical operational realities if it is to have the desired effect on health sector reform outcome.

  12. Governance, Government, and the Search for New Provider Models

    Directory of Open Access Journals (Sweden)

    Richard B. Saltman

    2016-01-01

    Full Text Available A central problem in designing effective models of provider governance in health systems has been to ensure an appropriate balance between the concerns of public sector and/or government decision-makers, on the one hand, and of non-governmental health services actors in civil society and private life, on the other. In tax-funded European health systems up to the 1980s, the state and other public sector decision-makers played a dominant role over health service provision, typically operating hospitals through national or regional governments on a command-and-control basis. In a number of countries, however, this state role has started to change, with governments first stepping out of direct service provision and now de facto pushed to focus more on steering provider organizations rather than on direct public management. In this new approach to provider governance, the state has pulled back into a regulatory role that introduces market-like incentives and management structures, which then apply to both public and private sector providers alike. This article examines some of the main operational complexities in implementing this new governance reality/strategy, specifically from a service provision (as opposed to mostly a financing or even regulatory) perspective. After briefly reviewing some of the key theoretical dilemmas, the paper presents two case studies where this new approach was put into practice: primary care in Sweden and hospitals in Spain. The article concludes that good governance today needs to reflect practical operational realities if it is to have the desired effect on health sector reform outcome.

  13. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    Science.gov (United States)

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-10-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  14. On new cautious structural reliability models in the framework of imprecise probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev; Kozine, Igor

    2010-01-01

    New imprecise structural reliability models are described in this paper. They are developed based on imprecise Bayesian inference and are imprecise Dirichlet, imprecise negative binomial, gamma-exponential and normal models. The models are applied to computing cautious structural reliability measures when the number of events of interest or observations is very small. The main feature of the models is that prior ignorance is not modelled by a fixed single prior distribution, but by a class of priors which is defined by upper and lower probabilities that can converge as statistical data
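
    The imprecise Dirichlet model is the simplest of these: with k observed failures in n trials and a prior-strength parameter s, the posterior failure probability lies between k/(n + s) and (k + s)/(n + s), so scarce data yield wide, cautious intervals that narrow as observations accumulate. A minimal sketch (s = 2 is a common choice):

        def idm_bounds(k, n, s=2.0):
            # Lower/upper posterior failure probabilities under the IDM.
            return k / (n + s), (k + s) / (n + s)

        for k, n in [(0, 5), (1, 10), (3, 100)]:
            lo, hi = idm_bounds(k, n)
            print(f"k = {k}, n = {n}: P(failure) in [{lo:.3f}, {hi:.3f}]")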

  15. A Structural Reliability Business Process Modelling with System Dynamics Simulation

    OpenAIRE

    Lam, C. Y.; S.L. Chan; Ip, W.H.

    2010-01-01

    Business activity flow analysis enables organizations to manage structured business processes, and can thus help them to improve performance. The six types of business activities identified here (i.e., SOA, SEA, MEA, SPA, MSA and FIA) are correlated and interact with one another, and the decisions from any business activity form feedback loops with previous and succeeding activities, thus allowing the business process to be modelled and simulated. For instance, for any company that is eager t...

  16. An Imprecise Probability Model for Structural Reliability Based on Evidence and Gray Theory

    Directory of Open Access Journals (Sweden)

    Bin Suo

    2013-01-01

    Full Text Available To avoid the shortcomings and limitations of probabilistic and non-probabilistic reliability models for structural reliability analysis in the case of limited samples for basic variables, a new imprecise probability model is proposed. A confidence interval with a given confidence level is calculated on the basis of small samples by gray theory, which does not depend on the distribution pattern of the variable. Then basic probability assignments and focal elements are constructed, and approximation methods for structural reliability based on belief and plausibility functions are proposed for the situations in which the structural limit state function is monotonic and non-monotonic, respectively. The numerical examples show that the new reliability model utilizes all the information included in small samples and considers both aleatory and epistemic uncertainties in them; thus it can rationally measure the safety of the structure, and the measurement becomes increasingly accurate as the sample size grows.

  17. Stochastic data-flow graph models for the reliability analysis of communication networks and computer systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, D.J.

    1988-01-01

    The literature is abundant with combinatorial reliability analysis of communication networks and fault-tolerant computer systems. However, it is very difficult to formulate reliability indexes using combinatorial methods. These limitations have led to the development of time-dependent reliability analysis using stochastic processes. In this research, time-dependent reliability-analysis techniques using Dataflow Graphs (DFG) are developed. The chief advantages of DFG models over other models are their compactness, structural correspondence with the systems, and general amenability to direct interpretation. This makes the verification of the correspondence of the data-flow graph representation to the actual system possible. Several DFG models are developed and used to analyze the reliability of communication networks and computer systems. Specifically, Stochastic Dataflow Graphs (SDFG), in both discrete-time and continuous-time versions, are developed and used to compute time-dependent reliability of communication networks and computer systems. The repair and coverage phenomenon of communication networks is also analyzed using SDFG models.

  18. Providing surgical care in Somalia: A model of task shifting

    Directory of Open Access Journals (Sweden)

    Ford Nathan P

    2011-07-01

    Full Text Available Background: Somalia is one of the most politically unstable countries in the world. Ongoing insecurity has forced an inconsistent medical response by the international community, with little data collection. This paper describes the "remote" model of surgical care by Medecins Sans Frontieres, in Guri-El, Somalia. The challenges of providing the necessary prerequisites for safe surgery are discussed as well as the successes and limitations of task shifting in this resource-limited context. Methods: In January 2006, MSF opened a project in Guri-El located between Mogadishu and Galcayo. The objectives were to reduce mortality due to complications of pregnancy and childbirth and from violent and non-violent trauma. At the start of the program, expatriate surgeons and anesthesiologists established safe surgical practices and performed surgical procedures. After January 2008, expatriates were evacuated due to insecurity and surgical care has since been provided by local Somalian doctors and nurses with periodic supervisory visits from expatriate staff. Results: Between October 2006 and December 2009, 2086 operations were performed on 1602 patients. The majority (1049, 65%) were male and the median age was 22 (interquartile range, 17-30). 1460 (70%) of interventions were emergent. Trauma accounted for 76% (1585) of all surgical pathology; gunshot wounds accounted for 89% (584) of violent injuries. Operative mortality (0.5% of all surgical interventions) was not higher when Somalian staff provided care compared to when expatriate surgeons and anesthesiologists did. Conclusions: The delivery of surgical care in any conflict setting is difficult, but in situations where international support is limited, the challenges are more extreme. In this model, task shifting, or the provision of services by less trained cadres, was utilized and peri-operative mortality remained low, demonstrating that safe surgical practices can be accomplished even without the presence of fully

  19. Reliable modeling of the electronic spectra of realistic uranium complexes

    Science.gov (United States)

    Tecmer, Paweł; Govind, Niranjan; Kowalski, Karol; de Jong, Wibe A.; Visscher, Lucas

    2013-07-01

    We present an EOMCCSD (equation of motion coupled cluster with singles and doubles) study of excited states of the small [UO2]2+ and [UO2]+ model systems as well as the larger UVIO2(saldien) complex. In addition, the triples contribution within the EOMCCSDT and CR-EOMCCSD(T) (completely renormalized EOMCCSD with non-iterative triples) approaches for the [UO2]2+ and [UO2]+ systems as well as the active-space variant of the CR-EOMCCSD(T) method—CR-EOMCCSd(t)—for the UVIO2(saldien) molecule are investigated. The coupled cluster data were employed as benchmark to choose the "best" appropriate exchange-correlation functional for subsequent time-dependent density functional (TD-DFT) studies on the transition energies for closed-shell species. Furthermore, the influence of the saldien ligands on the electronic structure and excitation energies of the [UO2]+ molecule is discussed. The electronic excitations as well as their oscillator dipole strengths modeled with TD-DFT approach using the CAM-B3LYP exchange-correlation functional for the [UVO2(saldien)]- with explicit inclusion of two dimethyl sulfoxide molecules are in good agreement with the experimental data of Takao et al. [Inorg. Chem. 49, 2349 (2010), 10.1021/ic902225f].

  20. Reliability of linear measurements on a virtual bilateral cleft lip and palate model

    NARCIS (Netherlands)

    Oosterkamp, B.C.M.; van der Meer, W.J.; Rutenfrans, M.; Dijkstra, P.U.

    2006-01-01

    Objective: To assess the reliability and validity of measurements performed on three-dimensional virtual models of neonatal bilateral cleft lip and palate patients, compared with measurements performed on plaster cast models. Materials and Methods: Ten high-quality plaster cast models of bilateral cleft lip and palate patients

  1. Reliability and Stability of VLBI-Derived Sub-Daily EOP Models

    Science.gov (United States)

    Artz, Thomas; Boeckmann, Sarah; Jensen, Laura; Nothnagel, Axel; Steigenberger, Peter

    2010-01-01

    Recent investigations have shown significant shortcomings in the model which is proposed by the IERS to account for the variations in the Earth's rotation with periods around one day and less. To overcome this, an empirical model can be estimated more or less directly from the observations of space geodetic techniques. The aim of this paper is to evaluate the quality and reliability of such a model based on VLBI observations. Therefore, the impact of the estimation method and the analysis options as well as the temporal stability are investigated. It turned out that, in order to provide a realistic accuracy measure of the model coefficients, the formal errors should be inflated by a factor of three. This coincides with the noise floor and the repeatability of the model coefficients and it captures almost all of the differences that are caused by different estimation techniques. The impact of analysis options is small but significant when changing troposphere parameterization or including harmonic station position variations.

  2. Liquefaction of Tangier soils by using physically based reliability analysis modelling

    Directory of Open Access Journals (Sweden)

    Dubujet P.

    2012-07-01

    Full Text Available Approaches that are widely used to characterize the propensity of soils to liquefaction are mainly empirical. The potential for liquefaction is assessed by using correlation formulas that are based on field tests such as the standard and cone penetration tests. These correlations, however, depend on the site where they were derived. In order to adapt them to other sites where seismic case histories are not available, further investigation is required. In this work, a rigorous one-dimensional model of the soil dynamics that yield the liquefaction phenomenon is considered. Field tests consisting of core sampling and cone penetration testing were performed. They provided the necessary data for numerical simulations performed with the DeepSoil software package. Using reliability analysis, the probability of liquefaction was estimated and the obtained results were used to adapt the Juang method to the particular case of the sandy soils located in Tangier.

  3. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.

  4. Modeling Manufacturing Impacts on Aging and Reliability of Polyurethane Foams

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roberts, Christine Cardinal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mondy, Lisa Ann [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Soehnel, Melissa Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lorenzo, Henry T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-25

    Polyurethane is a complex multiphase material that evolves from a viscous liquid to a system of percolating bubbles, which are created via a CO2 generating reaction. The continuous phase polymerizes to a solid during the foaming process generating heat. Foams introduced into a mold increase their volume up to tenfold, and the dynamics of the expansion process may lead to voids and will produce gradients in density and degree of polymerization. These inhomogeneities can lead to structural stability issues upon aging. For instance, structural components in weapon systems have been shown to change shape as they age depending on their molding history, which can threaten critical tolerances. The purpose of this project is to develop a Cradle-to-Grave multiphysics model, which allows us to predict the material properties of foam from its birth through aging in the stockpile, where its dimensional stability is important.

  5. Modeling Manufacturing Impacts on Aging and Reliability of Polyurethane Foams

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R.; Roberts, Christine Cardinal; Mondy, Lisa Ann; Soehnel, Melissa Marie; Johnson, Kyle; Lorenzo, Henry T.

    2016-10-01

    Polyurethane is a complex multiphase material that evolves from a viscous liquid to a system of percolating bubbles, which are created via a CO2 generating reaction. The continuous phase polymerizes to a solid during the foaming process generating heat. Foams introduced into a mold increase their volume up to tenfold, and the dynamics of the expansion process may lead to voids and will produce gradients in density and degree of polymerization. These inhomogeneities can lead to structural stability issues upon aging. For instance, structural components in weapon systems have been shown to change shape as they age depending on their molding history, which can threaten critical tolerances. The purpose of this project is to develop a Cradle-to-Grave multiphysics model, which allows us to predict the material properties of foam from its birth through aging in the stockpile, where its dimensional stability is important.

  6. Time-Dependent Reliability Modeling and Analysis Method for Mechanics Based on Convex Process

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2015-01-01

    Full Text Available The objective of the present study is to evaluate the time-dependent reliability for dynamic mechanics with insufficient time-varying uncertainty information. In this paper, the nonprobabilistic convex process model, which contains autocorrelation and cross-correlation, is firstly employed for the quantitative assessment of the time-variant uncertainty in structural performance characteristics. By combination of the set-theory method and the regularization treatment, the time-varying properties of structural limit state are determined and a standard convex process with autocorrelation for describing the limit state is formulated. By virtue of the classical first-passage method in random process theory, a new nonprobabilistic measure index of time-dependent reliability is proposed and its solution strategy is mathematically conducted. Furthermore, the Monte-Carlo simulation method is also discussed to illustrate the feasibility and accuracy of the developed approach. Three engineering cases clearly demonstrate that the proposed method may provide a reasonable and more efficient way to estimate structural safety than Monte-Carlo simulations throughout a product life-cycle.
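
    The probabilistic counterpart of this first-passage idea, which such methods are commonly checked against, is easy to sketch: for a stationary Gaussian load process S(t) against a deterministic resistance r, the time-dependent failure probability is P(max over t of S(t) > r), estimated below by simulating correlated trajectories through a Cholesky factor. The resistance, autocorrelation and time grid are all assumed values; the paper itself replaces the Gaussian process with a nonprobabilistic convex process.

        import numpy as np

        rng = np.random.default_rng(42)
        r = 3.0                          # resistance in standard units (assumed)
        T, m = 10.0, 101                 # horizon and time grid
        t = np.linspace(0.0, T, m)
        corr = np.exp(-np.abs(t[:, None] - t[None, :]) / 2.0)  # exponential autocorr.
        L = np.linalg.cholesky(corr + 1e-10 * np.eye(m))

        paths = rng.standard_normal((20_000, m)) @ L.T  # unit-variance load paths
        pf_time = np.mean(paths.max(axis=1) > r)        # first passage in [0, T]
        pf_point = np.mean(paths[:, 0] > r)             # instantaneous failure
        print(f"Pf over [0, {T:.0f}] = {pf_time:.4f}, pointwise Pf = {pf_point:.4f}")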

  7. A modelling approach to find stable and reliable soil organic carbon values for further regionalization.

    Science.gov (United States)

    Bönecke, Eric; Franko, Uwe

    2015-04-01

    Soil organic matter (SOM) and carbon (SOC) might be the most important components for describing the fertility of agriculturally used soils. SOC is sensitive to temporal and spatial changes due to varying weather conditions and uneven crop and soil management practices, and reliable delineation of its spatial variability remains difficult. Soil organic carbon, furthermore, is an essential initial parameter for dynamic modelling, e.g. for understanding carbon and nitrogen processes. However, attaining and using this information requires cost- and time-intensive field and laboratory work. The objective of this study is to assess an approach that reduces the effort of laboratory and field analyses by using a method to find stable initial soil organic carbon values for further soil process modelling and regionalization at field scale. Strategies, techniques and tools for producing reliable high-resolution soil organic carbon maps while reducing cost constraints are hence attracting increasing attention in scientific research. Although combining effective sampling schemes with geophysical sensing techniques to describe the within-field variability of soil organic carbon is nowadays widely practised, large uncertainties remain, even at field scale, in both science and agriculture. Therefore, an analytical and modelling approach might facilitate and improve this strategy at small and large field scales. This study shows a method for finding reliable steady-state values of soil organic carbon at particular points, using the proven soil process model CANDY (Franko et al. 1995). It focuses on an iterative algorithm that adjusts the key driving components: soil physical properties, meteorological data and management information, for which we quantified the inputs and losses of soil carbon (manure, crop residues, other organic inputs, decomposition, leaching). Furthermore, this approach can be combined with geophysical

  8. A Generalized Model for Electrical Power Distribution Feeders’ Contributions to System Reliability Indices

    Directory of Open Access Journals (Sweden)

    Ganiyu A. Ajenikoko

    2014-01-01

    Full Text Available Reliability indices are parametric quantities used to assess the performance levels of electrical power distribution systems. In this work, a generalized quadratic model is developed for electrical power distribution feeders' contributions to system reliability indices, using Ikeja, Port-Harcourt, Kaduna and Kano distribution system feeders as case studies. The mean System Average Interruption Duration Index (SAIDI), System Average Interruption Frequency Index (SAIFI) and Customer Average Interruption Duration Index (CAIDI) contributions to system reliability indices for the Ikeja, Port-Harcourt, Kaduna and Kano distribution systems were 0.0033, 0.0026, 0.0033 and 0.0018 respectively; prolonged periods of interruption were recorded on most of the feeders attached to the Port-Harcourt and Kano distribution systems, making them less reliable than the Ikeja and Kaduna systems. The generalized quadratic model forms a basis for good design, planning and maintenance of distribution systems at large.
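
    For readers unfamiliar with the three indices, the following sketch shows how SAIDI, SAIFI and CAIDI are computed from an outage log; the feeder data below are hypothetical, not the Nigerian case-study values:

        # toy outage log: (customers_interrupted, duration_hours) per interruption
        interruptions = [
            (1200, 2.5), (300, 0.8), (4500, 4.0), (800, 1.2),
        ]
        customers_served = 10000

        saifi = sum(n for n, _ in interruptions) / customers_served
        saidi = sum(n * d for n, d in interruptions) / customers_served
        caidi = saidi / saifi  # average restoration time per interrupted customer

        print(f"SAIFI={saifi:.3f} int./cust., SAIDI={saidi:.3f} h/cust., CAIDI={caidi:.2f} h")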

  9. Study on Modeling and Simulation of Reliability Diagnosis of Supply Chain Based on Common Cause Failure

    Directory of Open Access Journals (Sweden)

    Guohua Chen

    2013-01-01

    Full Text Available To diagnose the key factors that cause the failure of a supply chain, a diagnostic model of supply chain reliability with common cause failure was established, taking a 3-tier supply chain centred on a manufacturer as the object. Then, with unreliability and key importance as quantitative indices, a diagnostic algorithm for the key factors of supply chain reliability with common cause failure was studied using Monte Carlo simulation. The algorithm can be used to evaluate the reliability of a supply chain and determine the key factors that cause its failure, providing a new method for diagnosing the reliability of supply chains subject to common cause failure. Finally, an example is presented to prove the feasibility and validity of the model and method.
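
    The diagnosis idea can be sketched with a short Monte Carlo simulation of a 3-tier chain (two parallel suppliers, a manufacturer and a distributor) in which a common cause event fails both suppliers at once; all failure probabilities are made up for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200000
        p_sup, p_cc, p_man, p_dis = 0.05, 0.01, 0.02, 0.03

        cc = rng.random(n) < p_cc                      # common cause event
        s1 = (rng.random(n) < p_sup) | cc              # supplier 1 fails
        s2 = (rng.random(n) < p_sup) | cc              # supplier 2 fails
        man = rng.random(n) < p_man
        dis = rng.random(n) < p_dis

        chain_fail = (s1 & s2) | man | dis             # series of {supplier pair, man, dis}
        print("chain unreliability ~", chain_fail.mean())
        # key-importance proxy: how often each factor is present in failed runs
        for name, ev in [("common cause", cc), ("manufacturer", man), ("distributor", dis)]:
            print(name, "present in", (ev & chain_fail).sum() / chain_fail.sum(), "of failures")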

  10. Solitary mammals provide an animal model for autism spectrum disorders.

    Science.gov (United States)

    Reser, Jared Edward

    2014-02-01

    Species of solitary mammals are known to exhibit specialized, neurological adaptations that prepare them to focus working memory on food procurement and survival rather than on social interaction. Solitary and nonmonogamous mammals, which do not form strong social bonds, have been documented to exhibit behaviors and biomarkers that are similar to endophenotypes in autism. Both individuals on the autism spectrum and certain solitary mammals have been reported to be low on measures of affiliative need, bodily expressiveness, bonding and attachment, direct and shared gazing, emotional engagement, conspecific recognition, partner preference, separation distress, and social approach behavior. Solitary mammals also exhibit certain biomarkers that are characteristic of autism, including diminished oxytocin and vasopressin signaling, dysregulation of the endogenous opioid system, increased Hypothalamic-pituitary-adrenal axis (HPA) activity to social encounters, and reduced HPA activity to separation and isolation. The extent of these similarities suggests that solitary mammals may offer a useful model of autism spectrum disorders and an opportunity for investigating genetic and epigenetic etiological factors. If the brain in autism can be shown to exhibit distinct homologous or homoplastic similarities to the brains of solitary animals, it will reveal that they may be central to the phenotype and should be targeted for further investigation. Research of the neurological, cellular, and molecular basis of these specializations in other mammals may provide insight for behavioral analysis, communication intervention, and psychopharmacology for autism.

  11. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    Science.gov (United States)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. Facilities that are located in this article concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To attack this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network leading to high complexities. Since the collection centres play an important role in this network, the reliability concept of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constraint mixed integer linear programming (BOIFPCCMILP). Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.

  12. A New Software Reliability Framework: An Extended Cleanroom Model

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Cleanroom software engineering has been proven effective in improving software development quality while at the same time increasing reliability. To adapt it to large software system development, the paper presents an extended Cleanroom model (ECM), which integrates an object-oriented method based on stimulus history, reverse engineering ideas, automatic testing and reliability assessment into software development. The paper discusses the architecture and implementation technology of the ECM.

  13. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin;

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from the production trait evaluation of Nordic Red dairy cattle. Genotyped bulls with daughters are used as training animals, and genotyped bulls and producing cows as candidate animals. For simplicity, the size of the data is chosen so that the full inverses of the mixed model equation coefficient matrices can be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was...

  14. Statistical Degradation Models for Reliability Analysis in Non-Destructive Testing

    Science.gov (United States)

    Chetvertakova, E. S.; Chimitova, E. V.

    2017-04-01

    In this paper, we consider the application of statistical degradation models for reliability analysis in non-destructive testing. Such models make it possible to estimate the reliability function (the dependence of non-failure probability on time) for a fixed critical level using information from the degradation paths of tested items. The most widely used models are the gamma and Wiener degradation models, in which the gamma or normal distribution, respectively, is assumed for the degradation increments. Using computer simulation, we have analysed the accuracy of the reliability estimates obtained for the considered models. The number of increments can be enlarged by increasing the sample size (the number of tested items) or by increasing the frequency of measuring degradation. It has been shown that the sample size has a greater influence on the accuracy of the reliability estimates than the measuring frequency. Moreover, another important factor influencing the accuracy of reliability estimation is the duration of observing the degradation process.
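
    For the Wiener model mentioned above, the reliability function has a closed form, since the first-passage time of a drifted Brownian degradation path is inverse Gaussian. A minimal sketch with illustrative parameters:

        import numpy as np
        from scipy.stats import norm

        def wiener_reliability(t, mu, sigma, D):
            """R(t) for degradation X(t) = mu*t + sigma*B(t), failure when X >= D."""
            t = np.asarray(t, dtype=float)
            a = (D - mu * t) / (sigma * np.sqrt(t))
            b = -(D + mu * t) / (sigma * np.sqrt(t))
            return norm.cdf(a) - np.exp(2 * mu * D / sigma**2) * norm.cdf(b)

        print(wiener_reliability([50.0, 100.0, 200.0], mu=0.01, sigma=0.05, D=2.0))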

  15. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  16. TWO-PROCEDURE OF MODEL RELIABILITY-BASED OPTIMIZATION FOR WATER DISTRIBUTION SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Recently, considerable emphasis has been laid on reliability-based optimization models for water distribution systems. However, considerable computational effort is needed to determine the reliability-based optimal design of large networks, and even of mid-sized networks. In this paper, a new methodology is presented for the reliability analysis of water distribution systems. This methodology consists of two procedures. In the first, the optimal design is constrained only by the pressure heads at demand nodes and is carried out in GRG2. Because the reliability constraints are removed from the optimization problem, a large number of simulations need not be conducted, so the computing time is greatly decreased. The second procedure is a linear optimal search, in which the optimal results obtained by GRG2 are adjusted to satisfy the reliability constraints. The results are a set of commercial pipe diameters such that the constraints on pressure heads and reliability at the nodes are satisfied. The computational burden is thus significantly decreased, and the reliability-based optimization becomes more practical to use.

  17. Reliability Analysis of a Composite Blade Structure Using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimiroy; Friis-Hansen, Peter; Berggreen, Christian

    2010-01-01

    This paper presents a reliability analysis of a composite blade profile. The so-called Model Correction Factor technique is applied as an effective alternative to the response surface technique. The structural reliability is determined by use of a simplified idealised analytical model which in a probabilistic sense is model corrected so that it, close to the design point, represents the same structural behaviour as a realistic FE model. This approach leads to considerable improvement of computational efficiency over classical response surface methods, because the numerically "cheap" idealistic model is used as the response surface, while the time-consuming detailed model is called only a few times until the simplified model is calibrated to the detailed model.

  18. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    Science.gov (United States)

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  19. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...

  20. Reliability-economics analysis models for photovoltaic power systems. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Stember, L.H.; Huss, W.R.; Bridgman, M.S.

    1982-11-01

    This report describes the development of modeling techniques to characterize the reliability, availability, and maintenance costs of photovoltaic power systems. The developed models can be used by designers of PV systems in making design decisions and trade-offs to minimize life-cycle energy costs.

  1. Bitwise identical compiling setup: prospective for reproducibility and reliability of earth system modeling

    Directory of Open Access Journals (Sweden)

    R. Li

    2015-11-01

    Full Text Available Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and compiler flags provides essential technical support for Earth system modeling. With the fast development of computer software and hardware, compiling setups have to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible under a newer compiling setup, because trivial round-off errors introduced by the change of compiling setup can potentially trigger significant changes in simulation results. Regarding reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and that a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs or risks in the codes of models and compilers and finally improve the reliability of Earth system modeling.

  2. Reliable dual tensor model estimation in single and crossing fibers based on jeffreys prior

    NARCIS (Netherlands)

    J. Yang (Jianfei); D.H.J. Poot; M.W.A. Caan (Matthan); Su, T. (Tanja); C.B. Majoie (Charles); L.J. van Vliet (Lucas); F. Vos (Frans)

    2016-01-01

    Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD).

  3. A case study review of technical and technology issues for transition of a utility load management program to provide system reliability resources in restructured electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Weller, G.H.

    2001-07-15

    Utility load management programs--including direct load control and interruptible load programs--were employed by utilities in the past as system reliability resources. With electricity industry restructuring, the context for these programs has changed; the market that was once controlled by vertically integrated utilities has become competitive, raising the question: can existing load management programs be modified so that they can effectively participate in competitive energy markets? In the short run, modified and/or improved operation of load management programs may be the most effective form of demand-side response available to the electricity system today. However, in light of recent technological advances in metering, communication, and load control, utility load management programs must be carefully reviewed in order to determine appropriate investments to support this transition. This report investigates the feasibility of and options for modifying an existing utility load management system so that it might provide reliability services (i.e. ancillary services) in the competitive markets that have resulted from electricity industry restructuring. The report is a case study of Southern California Edison's (SCE) load management programs. SCE was chosen because it operates one of the largest load management programs in the country and it operates them within a competitive wholesale electricity market. The report describes a wide range of existing and soon-to-be-available communication, control, and metering technologies that could be used to facilitate the evolution of SCE's load management programs and systems to provision of reliability services. The fundamental finding of this report is that, with modifications, SCE's load management infrastructure could be transitioned to provide critical ancillary services in competitive electricity markets, employing currently or soon-to-be available load control technologies.

  4. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    Science.gov (United States)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.

  5. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of the structural response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling between horizontal sliding and rocking is discussed.

  6. Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials

    Science.gov (United States)

    Kushary, Debashis; Kulkarni, Pandurang M.

    1995-01-01

    We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effect, repeated measurements and observing many characteristics simultaneously. We develop maximum likelihood estimator (MLE) and uniform minimum variance unbiased estimator (UMVUE) of the reliability which in clinical trial studies could be considered as the chances of increased side effects due to a particular procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situations we develop simple to use lower confidence bounds for the reliability. Further, we consider the cases when both stress and strength constitute time dependent processes. We define the future reliability and obtain methods of constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and also to compare the MLE and the UMVUE.
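
    A minimal sketch of the stress-strength idea: estimate R = P(stress < strength) via the plug-in MLE under independent normals and via the nonparametric (Mann-Whitney) estimate. The data are simulated, not from any clinical trial:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        stress = rng.normal(10.0, 2.0, size=40)
        strength = rng.normal(13.0, 2.5, size=40)

        # plug-in MLE under independent normals: R = Phi((mu_Y - mu_X)/sqrt(sx^2 + sy^2))
        r_mle = norm.cdf((strength.mean() - stress.mean())
                         / np.hypot(stress.std(), strength.std()))
        # nonparametric (Mann-Whitney) estimate: fraction of pairs with stress < strength
        r_np = (stress[:, None] < strength[None, :]).mean()
        print("R_MLE ~", r_mle, " R_nonparam ~", r_np)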

  7. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost during operation. On the assumption that vibration directly relates to operation reliability and to the degree of wear, operation reliability can be expressed as the normalization of the vibration level. The behaviour of the vibration with the operating point was studied, and it can be concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape: there is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and away from resonance the vibration also scales with the square of the rotation speed. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedule produced by the traditional one and keeps the pump operating at low vibration, so that operation reliability increases and maintenance cost decreases.

  8. Potential Negative Impact of DG on Reliability Index: A Study Based on Time-Domain Modeling

    Science.gov (United States)

    Ran, Xuanchang

    This thesis presents an original insight into the negative impact of distributed generation (DG) on reliability indices, based on dynamic time-domain modeling. Models for essential power system components, such as protective devices and synchronous generators, were developed and tested. A 4 kV distribution loop carrying a relatively high power demand was chosen for the analysis. The characteristic curves of all protective devices were extracted from the utility database and applied to the time-domain relay model. The performance of each device was investigated in detail. The negative effect on reliability is due to fuse opening caused by the installation of DG at the wrong location with inappropriate relay setup. Over 50% of the possible DG locations can produce an undesirable impact. The study concludes that there is significant potential for the installation of DG to negatively affect the reliability of power systems.

  9. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  10. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines...

  12. STOCHASTIC OBJECT-ORIENTED PETRI NETS (SOPNS) AND ITS APPLICATION IN MODELING OF MANUFACTURING SYSTEM RELIABILITY

    Institute of Scientific and Technical Information of China (English)

    Jiang Zhibin; He Junming

    2003-01-01

    Object-oriented Petri nets (OPNs) are extended into stochastic object-oriented Petri nets (SOPNs) by associating the OPN of an object with stochastic transitions and introducing stochastic places. The stochastic transitions of the SOPN of a production resource can be used to model its reliability, while the SOPN of a production resource can describe its performance with reliability considered. The SOPN model of a case production system is built to illustrate the relationship between the system's performance and the failures of individual production resources.

  13. Reliability based design optimization of concrete mix proportions using generalized ridge regression model

    Directory of Open Access Journals (Sweden)

    Rachna Aggarwal

    2014-12-01

    Full Text Available This paper presents a Reliability Based Design Optimization (RBDO) model to deal with the uncertainties involved in the concrete mix design process. The optimization problem is formulated in such a way that the probabilistic concrete mix input parameters showing random characteristics are determined by minimizing the cost of concrete subject to a concrete compressive strength constraint for a given target reliability. Linear and quadratic models based on Ordinary Least Square Regression (OLSR), Traditional Ridge Regression (TRR) and Generalized Ridge Regression (GRR) techniques have been explored to select the best model to explicitly represent the compressive strength of concrete. The RBDO model is solved by the Sequential Optimization and Reliability Assessment (SORA) method using the fully quadratic GRR model. Optimization results for a wide range of target compressive strengths and reliability levels of 0.90, 0.95 and 0.99 have been reported. Also, safety factor based Deterministic Design Optimization (DDO) designs for each case are obtained. It has been observed that deterministic optimal designs are cost effective, but the proposed RBDO model gives improved design performance.

  14. Development of Markov model of emergency diesel generator for dynamic reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Young Ho; Choi, Sun Yeong; Yang, Joon Eon [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-02-01

    The EDG (Emergency Diesel Generator) of a nuclear power plant is one of the most important pieces of equipment in mitigating accidents. The FT (Fault Tree) method is widely used to assess the reliability of safety systems such as an EDG in a nuclear power plant. This method, however, has limitations in modeling the dynamic features of safety systems exactly. We have therefore developed a Markov model to represent the stochastic process of dynamic systems whose states change as time moves on. The Markov model enables us to develop a dynamic reliability model of the EDG. This model can represent all possible states of the EDG, in contrast to the FRANTIC code developed by the U.S. NRC for the reliability analysis of standby systems. To assess the regulation policy for test intervals, we performed two simulations based on generic data and on plant-specific data of YGN 3, respectively, using the developed model. We also estimate the effects of various repair rates and of the fraction of starting failures caused by demand shock on the reliability of the EDG. Finally, the aging effect is analyzed. (author). 23 refs., 19 figs., 9 tabs.
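
    In the simplest case, such a Markov model reduces to a two-state continuous-time Markov chain (up/down) whose transient availability follows from the matrix exponential of the generator; the rates below are illustrative, not the YGN 3 data:

        import numpy as np
        from scipy.linalg import expm

        lam, mu = 1e-3, 5e-2          # failure and repair rates [1/h], illustrative
        Q = np.array([[-lam,  lam],   # generator matrix, states: 0 = up, 1 = down
                      [  mu,  -mu]])

        p0 = np.array([1.0, 0.0])     # EDG starts operable
        for t in (24.0, 168.0, 720.0):
            pt = p0 @ expm(Q * t)     # state probabilities at time t
            print(f"A({t:5.0f} h) = {pt[0]:.5f}")
        print("steady-state availability =", mu / (lam + mu))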

  15. Time Dependent Dielectric Breakdown in Copper Low-k Interconnects: Mechanisms and Reliability Models

    Directory of Open Access Journals (Sweden)

    Terence K.S. Wong

    2012-09-01

    Full Text Available The time dependent dielectric breakdown (TDDB) phenomenon in copper low-k damascene interconnects for ultra large-scale integration is reviewed. The loss of insulation between neighboring interconnects represents an emerging back end-of-the-line reliability issue that is not fully understood. After describing the main dielectric leakage mechanisms in low-k materials (Poole-Frenkel and Schottky emission), the major dielectric reliability models that have appeared in the literature are discussed, namely: the Lloyd model, the 1/E model, the thermochemical E model, the E1/2 models, the E2 model and the Haase model. These models can be broadly categorized into those that consider only intrinsic breakdown (Lloyd, 1/E, E and Haase) and those that take into account copper migration in low-k materials (E1/2, E2). For each model, the physical assumptions and the proposed breakdown mechanism are discussed, together with the quantitative relationship predicting the time to breakdown and supporting experimental data. Experimental attempts at validating dielectric reliability models using data obtained from low-field stressing are briefly discussed. The phenomenon of soft breakdown, which often precedes hard breakdown in porous ultra low-k materials, is highlighted for future research.
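
    The field-acceleration laws named above can be contrasted numerically; the sketch below evaluates illustrative E, 1/E and E1/2 time-to-failure expressions, with arbitrary prefactors and acceleration constants chosen only to compare shapes:

        import numpy as np

        E = np.linspace(1.0, 5.0, 5)                     # electric field, MV/cm
        ttf_E     = 1e9 * np.exp(-2.0 * E)               # thermochemical E model: ln TTF ~ -gamma*E
        ttf_invE  = 1e-3 * np.exp(40.0 / E)              # 1/E model: ln TTF ~ G/E
        ttf_sqrtE = 1e8 * np.exp(-3.0 * np.sqrt(E))      # E^1/2 (Cu-migration) model

        for e, a, b, c in zip(E, ttf_E, ttf_invE, ttf_sqrtE):
            print(f"E={e:.1f} MV/cm: E-model {a:.2e}, 1/E-model {b:.2e}, sqrt(E)-model {c:.2e} (a.u.)")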

  16. The transparency, reliability and utility of tropical rainforest land-use and land-cover change models.

    Science.gov (United States)

    Rosa, Isabel M D; Ahmed, Sadia E; Ewers, Robert M

    2014-06-01

    Land-use and land-cover (LULC) change is one of the largest drivers of biodiversity loss and carbon emissions globally. We use the tropical rainforests of the Amazon, the Congo basin and South-East Asia as a case study to investigate spatial predictive models of LULC change. Current predictions differ in their modelling approaches, are highly variable and often poorly validated. We carried out a quantitative review of 48 modelling methodologies, considering model spatio-temporal scales, inputs, calibration and validation methods. In addition, we requested model outputs from each of the models reviewed and carried out a quantitative assessment of model performance for tropical LULC predictions in the Brazilian Amazon. We highlight existing shortfalls in the discipline and uncover three key points that need addressing to improve the transparency, reliability and utility of tropical LULC change models: (1) a lack of openness with regard to describing and making available the model inputs and model code; (2) the difficulties of conducting appropriate model validations; and (3) the difficulty that users of tropical LULC models face in obtaining the model predictions to help inform their own analyses and policy decisions. We further draw comparisons between tropical LULC change models in the tropics and the modelling approaches and paradigms in other disciplines, and suggest that recent changes in the climate change and species distribution modelling communities may provide a pathway that tropical LULC change modellers may emulate to further improve the discipline. Climate change models have exerted considerable influence over public perceptions of climate change and now impact policy decisions at all political levels. We suggest that tropical LULC change models have an equally high potential to influence public opinion and impact the development of land-use policies based on plausible future scenarios, but, to do that reliably may require further improvements in the

  17. Attitudes toward the elderly among the health care providers: reliability and validity of Turkish version of the UCLA Geriatrics Attitudes (UCLA-GA) scale.

    Science.gov (United States)

    Sahin, Sevnaz; Mandiracioglu, Aliye; Tekin, Nil; Senuzun, Fisun; Akcicek, Fehmi

    2012-01-01

    The population above 65 years of age is increasing fast in many societies as life expectancy rises, leading to high demand for health care services. Health care for the elderly should be provided by teams trained in this field. The success of the health care service rendered is related to the knowledge, skills and attitudes toward elderly health of team members from different professional groups (doctors, nurses, social workers, psychologists, etc.). The aim of this study is to establish the Turkish validity and reliability of the 14-question UCLA-GA scale, whose validity and reliability have been proven and which is the most frequently used among scales assessing the attitudes of health care providers toward the elderly. A total of 256 people, 150 post-graduates and 106 pre-graduates, were involved in the study at the Ege University medical faculty between December 2010 and February 2011. The majority of the participants (63.67%) were women and in the 18-29 age group (58.3%). The proportion who had undergone geriatric education was 38.2%. The Kaiser-Meyer-Olkin (KMO) test of sampling adequacy indicated high correlation among the 14 items of the scale (KMO = 0.72). The Cronbach alpha value of the scale was 0.67, which is satisfactory. Examination with Tukey's test of additivity showed that the items of the scale have an additive quality (F=85.25, p<0.05). The Turkish version of the UCLA-GA scale can be used to assess the attitudes of health care providers toward the elderly in geriatrics.

  18. Algorithm for break even availability allocation in process system modification using deterministic valuation model incorporating reliability

    Energy Technology Data Exchange (ETDEWEB)

    Shouri, P.V.; Sreejith, P.S. [Division of Mechanical Engineering, School of Engineering, Cochin University of Science and Technology (CUSAT), Cochin 682 022, Kerala (India)

    2008-06-15

    In the present scenario of energy demand overtaking energy supply, top priority is given to energy conservation programs and policies. As a result, most existing systems are redesigned or modified with a view to improving energy efficiency. Often these modifications can have an impact on process system configuration, thereby affecting process system reliability. The paper presents a model for the valuation of process systems incorporating reliability, which can be used to determine the change in process system value resulting from system modification. The model also determines the break-even system availability and presents an algorithm for the allocation of component reliabilities of the modified system based on the break-even system availability. The developed equations are applied to a steam power plant to study the effect of various operating parameters on system value. (author)

  19. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  20. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    Science.gov (United States)

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In a practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
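
    As a simple baseline from the same NHPP family (not the proposed model itself), the classical Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)) can be fitted to cumulative failure counts; the data below are made up:

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.arange(1, 11, dtype=float)                 # test weeks
        cum_failures = np.array([12, 21, 28, 34, 38, 41, 44, 45, 47, 48], float)

        def m(t, a, b):
            # Goel-Okumoto mean value function: a = total faults, b = detection rate
            return a * (1.0 - np.exp(-b * t))

        (a_hat, b_hat), _ = curve_fit(m, t, cum_failures, p0=(50.0, 0.3))
        print(f"a = {a_hat:.1f} total faults, b = {b_hat:.3f} per week")
        print("predicted cumulative failures at t=15:", m(15.0, a_hat, b_hat))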

  1. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Alaa F. Sheta

    2016-04-01

    Full Text Available In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums on fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM's parameters, with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM) and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.
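
    A bare-bones GWO fitting the exponential SRGM by minimizing the squared error between predicted and observed cumulative failures might look as follows; the data, bounds and hyperparameters are hypothetical:

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(1, 11, dtype=float)
        y = np.array([12, 21, 28, 34, 38, 41, 44, 45, 47, 48], float)

        def sse(p):
            a, b = p                                   # exponential SRGM parameters
            return np.sum((y - a * (1.0 - np.exp(-b * t)))**2)

        lo, hi = np.array([1.0, 1e-3]), np.array([200.0, 2.0])
        wolves = lo + rng.random((20, 2)) * (hi - lo)  # initial pack

        n_iter = 200
        for it in range(n_iter):
            fitness = np.array([sse(w) for w in wolves])
            alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # three best wolves
            a_coef = 2.0 - 2.0 * it / n_iter           # linearly decreasing coefficient
            for i, w in enumerate(wolves):
                new = np.zeros(2)
                for leader in (alpha, beta, delta):    # move toward each leader
                    r1, r2 = rng.random(2), rng.random(2)
                    A, C = 2 * a_coef * r1 - a_coef, 2 * r2
                    new += leader - A * np.abs(C * leader - w)
                wolves[i] = np.clip(new / 3.0, lo, hi)

        best = wolves[np.argmin([sse(w) for w in wolves])]
        print("GWO estimate: a=%.1f, b=%.3f, SSE=%.2f" % (best[0], best[1], sse(best)))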

  2. Blooms' separation of the final exam of Engineering Mathematics II: Item reliability using Rasch measurement model

    Science.gov (United States)

    Fuaad, Norain Farhana Ahmad; Nopiah, Zulkifli Mohd; Tawil, Norgainy Mohd; Othman, Haliza; Asshaari, Izamarlina; Osman, Mohd Hanif; Ismail, Nur Arzilah

    2014-06-01

    In engineering studies and research, mathematics is one of the main elements used to express physical, chemical and engineering laws. Therefore, it is essential for engineering students to have a strong knowledge of the fundamentals of mathematics in order to apply this knowledge to real life issues. However, the previous results of the Mathematics Pre-Test show that engineering students lack fundamental knowledge in certain topics of mathematics. Due to this, apart from making improvements in the methods of teaching and learning, studies on the construction of questions (items) should also be emphasized. The purpose of this study is to assist lecturers in the process of item development, to monitor the separation of items based on Blooms' Taxonomy, and to measure the reliability of the items themselves using the Rasch Measurement Model as a tool. Using the Rasch Measurement Model, the final exam questions of Engineering Mathematics II (Linear Algebra) for semester 2, session 2012/2013, were analysed, and the results provide details on the extent to which the content of the items gives useful information about students' ability. This study reveals that the items used in the Engineering Mathematics II (Linear Algebra) final exam are well constructed, but the separation of the items raises concern and arguably needs further attention, as there is a big gap between items at several levels of Blooms' cognitive skill.

  3. Reliability and validation of a behavioral model of clinical behavioral formulation

    Directory of Open Access Journals (Sweden)

    Amanda M Muñoz-Martínez

    2011-05-01

    Full Text Available The aim of this study was to determine the reliability and the content and predictive validity of a clinical case formulation developed from a behavioral perspective. A mixed design integrating levels of descriptive analysis and an A-B case study with follow-up was used. The study established the reliability of the following descriptive and explanatory categories: (a) problem description, (b) predisposing factors, (c) precipitating factors, (d) acquisition and (e) inferred mechanism (maintenance). The analysis was performed on cases from 2005 to 2008 formulated with the model derived from the current study. With regard to validity, expert judges considered that the model had content validity. The predictive validity was established through application of the model to three case studies. The discussion notes the importance of extending the investigation of the model to other populations and of establishing the clinical and concurrent validity of the model.

  4. Reliability Growth Modeling and Optimal Release Policy Under Fuzzy Environment of an N-version Programming System Incorporating the Effect of Fault Removal Efficiency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Failure of a safety-critical system can lead to big losses. Very high software reliability is required for automating the working of systems such as aircraft controllers and nuclear reactor controller software systems. Fault-tolerant software is used to increase the overall reliability of software systems. Fault tolerance is achieved using fault-tolerant schemes such as fault recovery (recovery block scheme), fault masking (N-version programming (NVP)) or a combination of both (hybrid scheme). Such software incorporates the ability of the system to survive even a failure. Many researchers in the field of software engineering have done excellent work studying the reliability of fault-tolerant systems. Most of them consider stable system reliability; few attempts have been made in reliability modeling to study the reliability growth of an NVP system. Recently, a model was proposed to analyze the reliability growth of an NVP system incorporating the effect of fault removal efficiency. In that model, a proportion of the number of failures is taken as a measure of fault generation, while a more appropriate measure of fault generation would be the proportion of faults removed. In this paper, we first propose a testing efficiency model incorporating the effect of imperfect fault debugging and error generation. Using this model, a software reliability growth model (SRGM) is developed to model the reliability growth of an NVP system. The proposed model is useful for practical applications and can provide measures of debugging effectiveness and of the additional workload or skilled professionals required. It is very important for a developer to determine the optimal release time of software to improve its performance in terms of competition and cost. In this paper, we also formulate the optimal software release time problem for a 3VP system under a fuzzy environment and discuss a fuzzy optimization technique for solving the problem, with a numerical illustration.

  5. A Chaotic Model for Software Reliability

    Institute of Scientific and Technical Information of China (English)

    邹丰忠; 李传湘

    2001-01-01

    After analyzing the mechanisms of software failure, it is argued that some software failure behaviour is chaotic, so chaotic methods can be applied to the inference of software reliability. Before applying a chaotic method, however, system identification must be carried out; only after the system is confirmed to be chaotic can embedding-space techniques be used to reconstruct the phase space and the attractor from the software failure time series, and the chaotic properties revealed by the attractor then be used to estimate software reliability. An empirical analysis was carried out on three standard data sets; the results show that two of the data sets originate from chaotic mechanisms, their attractors have low, fractional limit dimensions, and the predicted reliability agrees well with the actual reliability. Notably, the chaotic approach proposed here breaks through the limitation of the purely stochastic analysis traditionally used in software reliability. Computers affect almost every aspect of human lives. As the dependency of human beings on computer systems grows, so does the need for technology addressing the reliability of computer systems. In contrast to computer hardware, software is far more complicated; the key to improving the overall reliability of a system is thus to improve the reliability of its software. Although scientists have, in the past few decades, proposed many reliability models for software, which greatly enhanced the reliability and productivity of software products, these models are far from satisfactory. Building models of high accuracy and improving the existing models is therefore of practical significance. Conventional software reliability theory assumes that software failure processes are completely random, whereas the authors of this paper, on the basis of careful investigation of the physical mechanics of software failures, suggest that some dynamics of software failures have chaotic features. Thus the reliability of these systems can be addressed with chaotic approaches. But before applying chaotic methodology to estimate the reliability of the software under consideration, the first thing to do is system identification, which uses certain standards to distinguish chaotic dynamics
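
    The embedding step described above can be sketched as a time-delay reconstruction; here a logistic-map series stands in for an observed failure-time series, and the delay and embedding dimension are illustrative:

        import numpy as np

        # chaotic stand-in for an observed failure-time series (logistic map)
        x = np.empty(500)
        x[0] = 0.3
        for i in range(499):
            x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

        def delay_embed(series, dim=3, tau=1):
            """Time-delay embedding: rows are points in the reconstructed phase space."""
            n = len(series) - (dim - 1) * tau
            return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

        points = delay_embed(x, dim=3, tau=1)   # reconstructed attractor points
        print(points.shape)                      # (498, 3)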

  6. A continuous-time Bayesian network reliability modeling and analysis framework

    NARCIS (Netherlands)

    Boudali, H.; Dugan, J.B.

    2006-01-01

    We present a continuous-time Bayesian network (CTBN) framework for dynamic systems reliability modeling and analysis. Dynamic systems exhibit complex behaviors and interactions between their components, where not only the combination of failure events matters, but so does the sequence ordering of the failures.

  7. Reviewing progress in PJM's capacity market structure via the new reliability pricing model

    Energy Technology Data Exchange (ETDEWEB)

    Sener, Adil Caner; Kimball, Stefan

    2007-12-15

    The Reliability Pricing Model introduces significant changes to the capacity market structure of PJM. The main feature of the RPM design is a downward-sloping demand curve, which replaces the highly volatile vertical demand curve. The authors review the latest RPM structure, results of the auctions, and the future course of the implementation process. (author)

  8. Bayesian zero-failure reliability modeling and assessment method for multiple numerical control (NC) machine tools

    Institute of Scientific and Technical Information of China (English)

    阚英男; 杨兆军; 李国发; 何佳龙; 王彦鹍; 李洪洲

    2016-01-01

    A new problem that classical statistical methods are incapable of solving is reliability modeling and assessment when multiple numerical control machine tools (NCMTs) reveal zero failures after a reliability test. Thus, a zero-failure data form and a corresponding Bayesian model are developed to solve the zero-failure problem of NCMTs, for which no suitable statistical model had previously been developed. An expert-judgment process that incorporates prior information is presented to overcome the difficulty of obtaining reliable prior distributions of the Weibull parameters. The equations for the posterior distribution of the parameter vector and the Markov chain Monte Carlo (MCMC) algorithm are derived to overcome the difficulty of calculating high-dimensional integrals and to obtain parameter estimates. The proposed method is applied to a real case; a corresponding program and technique are developed to implement the MCMC simulation in WinBUGS, and a mean time between failures (MTBF) of 1057.9 h is obtained. Given its ability to combine expert judgment, prior information, and data, the proposed reliability modeling and assessment method for NCMTs under zero failures is validated.
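
    The zero-failure likelihood is simple enough to sketch with a random-walk Metropolis sampler (rather than WinBUGS): if n machines each survive a test of length T under Weibull(beta, eta) lifetimes, the likelihood is exp(-n(T/eta)^beta). The log-normal priors and all numbers below are placeholders, not the paper's expert-elicited ones:

        import numpy as np

        rng = np.random.default_rng(4)
        n, T = 10, 500.0                      # machines tested, test hours, zero failures

        def log_post(log_eta, log_beta):
            eta, beta = np.exp(log_eta), np.exp(log_beta)
            log_lik = -n * (T / eta)**beta    # all units survive to T
            log_prior = (-0.5 * ((log_eta - np.log(2000.0)) / 1.0)**2
                         - 0.5 * ((log_beta - np.log(1.5)) / 0.5)**2)
            return log_lik + log_prior

        x = np.array([np.log(2000.0), np.log(1.5)])
        samples = []
        for _ in range(20000):
            prop = x + 0.2 * rng.standard_normal(2)
            if np.log(rng.random()) < log_post(*prop) - log_post(*x):
                x = prop
            samples.append(x.copy())
        etas = np.exp(np.array(samples[5000:])[:, 0])   # discard burn-in
        print("posterior mean of eta ~", etas.mean(), "h")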

  9. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Science.gov (United States)

    2011-05-18

    ... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... development of regulatory guidance for using risk information related to digital systems in the...

  10. THE EXPECTABLE MODEL OF PARAMETRIC RELIABILITY FOR POWERED ELECTROMAGNETIC UNITS OF RAILWAY ROLLING STOCK

    Directory of Open Access Journals (Sweden)

    M. O. Kostin

    2010-09-01

    Full Text Available A probabilistic model of the parametric reliability of power electromagnetic valve contactors of rolling stock is proposed, which helps to evaluate the probability of failure under the contactor switching condition (the tractive force must be greater than the resulting counteracting force throughout the whole process of operation).

  11. Mathematical Model of Equipment Unit Reliability for Determination of Optimum Overhaul Periods

    Directory of Open Access Journals (Sweden)

    M. A. Pasiouk

    2009-01-01

    Full Text Available The paper proposes a mathematical model of equipment unit reliability that takes due account of the operational mode effect and the main influencing factors. Its application contributes to the reduction of operating costs, optimization of overhaul periods, prolongation of service life and rational usage of fleet resources.

  13. Specific response to herbivore-induced de novo synthesized plant volatiles provides reliable information for host plant selection in a moth.

    Science.gov (United States)

    Zakir, Ali; Bengtsson, Marie; Sadek, Medhat M; Hansson, Bill S; Witzgall, Peter; Anderson, Peter

    2013-09-01

    Animals depend on reliable sensory information for accurate behavioural decisions. For herbivorous insects it is crucial to find host plants for feeding and reproduction, and these insects must be able to differentiate suitable from unsuitable plants. Volatiles are important cues for insect herbivores to assess host plant quality. It has previously been shown that female moths of the Egyptian cotton leafworm, Spodoptera littoralis (Lepidoptera: Noctuidae), avoid oviposition on damaged cotton Gossypium hirsutum, which may be mediated by herbivore-induced plant volatiles (HIPVs). Among the HIPVs, some volatiles are released following any type of damage while others are synthesized de novo and released by the plants only in response to herbivore damage. In behavioural experiments we here show that oviposition by S. littoralis on undamaged cotton plants was reduced by adding volatiles collected from plants with ongoing herbivory. Gas chromatography-electroantennographic detection (GC-EAD) recordings revealed that antennae of mated S. littoralis females responded to 18 compounds from a collection of headspace volatiles of damaged cotton plants. Among these compounds, a blend of the seven de novo synthesized volatile compounds was found to reduce oviposition by S. littoralis on undamaged plants under both laboratory and ambient (field) conditions in Egypt. Volatile compounds that are not produced de novo by the plants did not affect oviposition. Our results show that ovipositing females respond specifically to the de novo synthesized volatiles released from plants under herbivore attack. We suggest that these volatiles provide reliable cues for ovipositing females to detect plants that could provide reduced-quality food for their offspring and an increased risk of competition and predation.

  14. Determination of Wave Model Uncertainties used for Probabilistic Reliability Assessments of Wave Energy Devices

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2014-01-01

    Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on the determination of wave model uncertainties. Four different wave models are considered, and validation data are collected from published scientific research. The bias, the root-mean-square error and the scatter index are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, it is shown how the estimated uncertainties can...
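
    The three validation statistics are straightforward to compute; the sketch below applies them to made-up significant-wave-height pairs (model vs. observation):

        import numpy as np

        hs_obs   = np.array([1.2, 2.5, 3.1, 0.9, 1.8, 2.2])   # e.g. buoy measurements [m]
        hs_model = np.array([1.4, 2.3, 3.5, 1.0, 1.6, 2.5])   # wave model output [m]

        bias = np.mean(hs_model - hs_obs)
        rmse = np.sqrt(np.mean((hs_model - hs_obs)**2))
        scatter_index = rmse / np.mean(hs_obs)                # RMSE normalized by obs mean
        print(f"bias={bias:.3f} m, RMSE={rmse:.3f} m, SI={scatter_index:.3f}")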

  15. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  16. On New Cautious Structural Reliability Models in the Framework of imprecise Probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev V.; Kozine, Igor

    2010-01-01

    ...both aleatory (stochastic) and epistemic uncertainty and the flexibility with which information can be represented. The previous research of the authors related to generalizing structural reliability models to imprecise statistical measures is summarized in Utkin & Kozine (2002) and Utkin (2004)... the above-mentioned inputs do not exist and the analyst has only some judgments or measurements (observations) of values of stress and strength. How to utilize this available information for computing the structural reliability, and what to do if the number of judgments or measurements is very small...

  17. Mathematic Modeling of Complex Hydraulic Machinery Systems When Evaluating Reliability Using Graph Theory

    Science.gov (United States)

    Zemenkova, M. Yu; Shipovalov, A. N.; Zemenkov, Yu D.

    2016-04-01

    Hydraulic machines are the main technological equipment in pipeline transport of hydrocarbons. Oil transportation mainly relies on centrifugal pumps designed to work in the “pumping station-pipeline” system. A standard pumping station consists of several pumps and complex hydraulic piping. The authors have developed a set of models and algorithms for calculating the system reliability of pumps, based on reliability theory. As an example, one of the estimation methods is considered, with the application of graph theory.
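
    The abstract does not spell out the authors' algorithms, but the flavor of a series-parallel reliability calculation for a pumping station reduces to a few lines; the component reliabilities below are invented for illustration:

    ```python
    def series(*r):
        """Reliability of components that must all work."""
        out = 1.0
        for x in r:
            out *= x
        return out

    def parallel(*r):
        """Reliability of redundant components (at least one must work)."""
        out = 1.0
        for x in r:
            out *= (1.0 - x)
        return 1.0 - out

    # hypothetical station: 3 redundant pumps in series with suction/discharge piping
    r_pump, r_piping = 0.95, 0.99
    r_station = series(parallel(r_pump, r_pump, r_pump), r_piping, r_piping)
    print(f"station reliability = {r_station:.6f}")
    ```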

  18. Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology

    Energy Technology Data Exchange (ETDEWEB)

    Knezevic, J.; Odoom, E.R

    2001-07-01

    A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices utilising the fuzzy Lambda-Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of the classical (crisp) set theory, because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets.
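
    As a hedged aside, the classical two-input Lambda-Tau gate expressions (AND gate: lambda = lambda1*lambda2*(tau1+tau2), tau = tau1*tau2/(tau1+tau2); OR gate: lambda = lambda1+lambda2, tau = (lambda1*tau1+lambda2*tau2)/(lambda1+lambda2)) can be evaluated on triangular fuzzy numbers. Applying them vertex-wise, as below, is a crude stand-in for the full alpha-cut interval arithmetic used in such papers, and all numbers are invented:

    ```python
    import numpy as np

    # triangular fuzzy numbers as (low, mode, high); values are invented examples
    lam1, tau1 = np.array([2e-4, 3e-4, 4e-4]), np.array([4.0, 5.0, 6.0])
    lam2, tau2 = np.array([1e-4, 2e-4, 3e-4]), np.array([8.0, 10.0, 12.0])

    # Lambda-Tau expressions for two-input gates, applied vertex-wise
    lam_and = lam1 * lam2 * (tau1 + tau2)
    tau_and = tau1 * tau2 / (tau1 + tau2)
    lam_or = lam1 + lam2
    tau_or = (lam1 * tau1 + lam2 * tau2) / (lam1 + lam2)

    print("AND gate:", lam_and, tau_and)
    print("OR  gate:", lam_or, tau_or)
    ```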

  19. Experimental studies on power transformer model winding provided with MOVs

    Directory of Open Access Journals (Sweden)

    G.H. Kusumadevi

    2017-05-01

    Full Text Available Surge voltage distribution across a HV transformer winding due to very fast rise time (of the order of 1 μs) transient voltages is highly non-uniform along the length of the winding at the initial instants of the surge. In order to achieve a nearly uniform initial voltage distribution along the length of the HV winding, investigations have been carried out on a transformer model winding. By connecting similar metal oxide varistors (MOVs) across sections of the HV transformer model winding, it is possible to improve the initial surge voltage distribution along the length of the winding. Transformer windings with α values of 5.3, 9.5 and 19 have been analyzed. The experimental studies have been carried out using a high-speed oscilloscope of good accuracy. With MOVs connected, the initial voltage distribution across sections of the winding remains nearly uniform along its length. Results of fault diagnostics carried out with and without MOVs connected across sections of the winding are also reported.

  20. Gearbox Reliability Collaborative Phase 1 and 2: Testing and Modeling Results; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; Guo, Y.; LaCava, W.; Link, H.; McNiff, B.

    2012-05-01

    The Gearbox Reliability Collaborative (GRC) investigates root causes of premature wind turbine gearbox failures and validates design assumptions that affect gearbox reliability using a combined testing and modeling approach. Knowledge gained from the testing and modeling of the GRC gearboxes builds an understanding of how the selected loads and events translate into internal responses of three-point mounted gearboxes. This paper presents some testing and modeling results of the GRC research during Phases 1 and 2. Non-torque loads from the rotor, including shaft bending and thrust, traditionally assumed to be decoupled from the gearbox, affect gear and bearing loads and the resulting gearbox responses. Bearing clearance increases bearing loads and causes cyclic loading, which could contribute to reduced bearing life. Including the flexibility of key drivetrain subcomponents is important in order to reproduce with modeling approaches the measured gearbox response during the tests.

  1. Do Cochrane reviews provide a good model for social science?

    DEFF Research Database (Denmark)

    Konnerup, Merete; Kongsted, Hans Christian

    2012-01-01

    Formalised research synthesis to underpin evidence-based policy and practice has become increasingly important in areas of public policy. In this paper we discuss whether the Cochrane standard for systematic reviews of healthcare interventions is appropriate for social research. We examine the formal criteria of the Cochrane Collaboration for including particular study designs and search the Cochrane Library to provide quantitative evidence on the de facto standard of actual Cochrane reviews. By identifying the sample of Cochrane reviews that consider observational designs, we are able to conclude that the majority of reviews appears limited to considering randomised controlled trials only. Because recent studies have delineated conditions for observational studies in social research to produce valid evidence, we argue that an inclusive approach is essential for truly evidence-based policy...

  2. Modeling and Simulation of Reliability & Maintainability Parameters for Reusable Launch Vehicles using Design of Experiments

    Science.gov (United States)

    Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.

    2004-01-01

    This paper describes the development of a methodology for estimating reliability and maintainability distribution parameters for a reusable launch vehicle. A disciplinary analysis code and experimental designs are used to construct approximation models for performance characteristics. These models are then used in a simulation study to estimate performance characteristic distributions efficiently. The effectiveness and limitations of the developed methodology for launch vehicle operations simulations are also discussed.

  3. An Analysis of Starting Points for Setting Up a Model of a More Reliable Ship Propulsion

    OpenAIRE

    Martinović, Dragan; Tudor, Mato; Bernečić, Dean

    2011-01-01

    This paper considers the important requirement for ship propulsion necessary for its immaculate operation, since any failure can endanger the ship and render it useless. Particular attention is given to the failure of auxiliary engines that can also seriously jeopardise the safety of the ship. Therefore the paper presents preliminary investigations for setting up models of reliable ship propulsion accounting for the failure of auxiliary engines. Models of most frequent implementations of e...

  4. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of components designed for detection, localization, tracking, and the collection and processing of information from monitoring, telemetry, and control systems. They are required to be highly reliable in order to correctly perform data aggregation, processing and analysis for subsequent decision-making support. In the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and mitigates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, together with mathematical models based on this principle and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models and algorithms can be used for solving optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
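
    The paper's large two-level models are not reproduced here, but the type-variety idea can be illustrated by a toy exhaustive search that picks a mix of component types for a redundant subsystem under a cost budget (all figures invented):

    ```python
    from itertools import product

    # hypothetical component types: name -> (reliability, cost)
    types = {"A": (0.90, 1.0), "B": (0.85, 0.6), "C": (0.95, 1.8)}
    budget = 4.0

    best = None
    # two redundant units per subsystem, drawn (with repetition) from the catalogue
    for combo in product(types, repeat=2):
        cost = sum(types[t][1] for t in combo)
        if cost > budget:
            continue
        # parallel redundancy: the subsystem fails only if every unit fails
        fail = 1.0
        for t in combo:
            fail *= 1.0 - types[t][0]
        rel = 1.0 - fail
        if best is None or rel > best[0]:
            best = (rel, cost, combo)

    print(best)  # (reliability, cost, chosen type mix)
    ```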

  5. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    Science.gov (United States)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to similar systems when estimating failure rates. Some qualification of the data source's applicability to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimates that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria for a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments such as vibration, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  6. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, lognormal, normal, etc.); (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units); and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used to describe a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses depends on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), i.e., extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects).
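
    As a hedged sketch of the model structure described (not the author's exact expression), a two-parameter Weibull survival function can be combined with a Prokopowicz-Vaskas-style voltage power law and an Arrhenius temperature term; every parameter value below is invented:

    ```python
    import numpy as np

    K_BOLTZ = 8.617e-5  # Boltzmann constant [eV/K]

    def acceleration_factor(v_use, t_use, v_test, t_test, n=3.0, ea=1.1):
        """Prokopowicz-Vaskas-style factor life(use)/life(test).
        Voltage exponent n and activation energy ea [eV] are illustrative."""
        volt = (v_test / v_use) ** n
        temp = np.exp(ea / K_BOLTZ * (1.0 / (t_use + 273.15) - 1.0 / (t_test + 273.15)))
        return volt * temp

    def weibull_reliability(t, beta, eta):
        """Two-parameter Weibull survival function."""
        return np.exp(-(t / eta) ** beta)

    af = acceleration_factor(v_use=5, t_use=45, v_test=50, t_test=125)
    eta_test = 1.0e3         # hypothetical scale parameter from a life test [h]
    eta_use = eta_test * af  # scale parameter extrapolated to use conditions
    print(weibull_reliability(t=1.0e5, beta=1.2, eta=eta_use))
    ```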

  7. Improvement of level-1 PSA computer code package - Modeling and analysis for dynamic reliability of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)

    1996-08-01

    The objective of this project is to develop a methodology of dynamic reliability analysis for NPPs. The first year's research was focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research was concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objectives of the third year's research are to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of dynamic reliability analysis for nuclear power plants. The analysis of component failure data and the related research supporting the simulator must come first, to provide proper input to the simulator. Thus this research is divided into three major parts. 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: an accelerated simulation analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)

  8. Drosophila provides rapid modeling of renal development, function, and disease.

    Science.gov (United States)

    Dow, Julian A T; Romero, Michael F

    2010-12-01

    The evolution of specialized excretory cells is a cornerstone of the metazoan radiation, and the basic tasks performed by Drosophila and human renal systems are similar. The development of the Drosophila renal (Malpighian) tubule is a classic example of branched tubular morphogenesis, allowing study of mesenchymal-to-epithelial transitions, stem cell-mediated regeneration, and the evolution of a glomerular kidney. Tubule function employs conserved transport proteins, such as the Na(+), K(+)-ATPase and V-ATPase, aquaporins, inward rectifier K(+) channels, and organic solute transporters, regulated by cAMP, cGMP, nitric oxide, and calcium. In addition to generation and selective reabsorption of primary urine, the tubule plays roles in metabolism and excretion of xenobiotics, and in innate immunity. The gene expression resource FlyAtlas.org shows that the tubule is an ideal tissue for the modeling of renal diseases, such as nephrolithiasis and Bartter syndrome, or for inborn errors of metabolism. Studies are assisted by uniquely powerful genetic and transgenic resources, the widespread availability of mutant stocks, and low-cost, rapid deployment of new transgenics to allow manipulation of renal function in an organotypic context.

  9. Modeling Travel Time Reliability of Road Network Considering Connected Vehicle Guidance Characteristics Indexes

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2017-01-01

    Full Text Available Travel time reliability (TTR) is one of the important indexes for effectively evaluating the performance of a road network, and TTR can be effectively improved using real-time traffic guidance information. Compared with traditional traffic guidance, connected vehicle (CV) guidance can provide travelers with more timely and accurate travel information, which can further improve the travel efficiency of the road network. Five CV characteristic indexes are selected as explanatory variables: the Congestion Level (CL), Penetration Rate (PR), Compliance Rate (CR), release Delay Time (DT), and Following Rate (FR). Based on the five explanatory variables, a TTR model is proposed using the multilogistic regression method, and the prediction accuracy and the impact of the characteristic indexes on TTR are analyzed using a CV guidance scenario. The simulation results indicate that 80% of the RMSE is concentrated within the interval of 0 to 0.0412. The correlation analysis of the characteristic indexes shows that the influence of CL, PR, CR, and DT on TTR is significant. PR and CR have a positive effect on TTR, with average improvement rates of about 77.03% and 73.20% as PR and CR increase, respectively, while CL and DT have a negative effect on TTR, with TTR decreasing by 31.21% as DT increases from 0 to 180 s.
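
    A minimal sketch of the multilogistic-regression idea on synthetic data; the features, labels and coefficients below are invented, with only the sign pattern loosely mimicking the reported findings:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # hypothetical samples: columns are [CL, PR, CR, DT]; FR omitted for brevity
    X = rng.uniform([0, 0, 0, 0], [1, 1, 1, 180], size=(500, 4))
    # synthetic TTR class (low/medium/high), loosely following the reported signs
    score = -1.5 * X[:, 0] + 2.0 * X[:, 1] + 1.8 * X[:, 2] - 0.01 * X[:, 3]
    y = np.digitize(score + rng.normal(0, 0.3, 500), np.quantile(score, [0.33, 0.66]))

    model = LogisticRegression(max_iter=1000)  # multinomial logistic regression
    model.fit(X, y)
    print(model.coef_)  # signs should mirror the positive/negative effects above
    ```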

  10. Assessing Reliability of Cellulose Hydrolysis Models to Support Biofuel Process Design – Identifiability and Uncertainty Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Meyer, Anne S.; Gernaey, Krist

    2010-01-01

    The reliability of cellulose hydrolysis models is studied using the NREL model. An identifiability analysis revealed that only 6 out of 26 parameters are identifiable from the available data (typical hydrolysis experiments). Attempting to identify a higher number of parameters (as done...) is not supported, since the data are not informative enough (sensitivities of 16 parameters were insignificant). This indicates that the NREL model has severe parameter uncertainty, likely to be the case for other hydrolysis models as well since similar kinetic expressions are used. To overcome this impasse, we have used the Monte Carlo procedure...

  11. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiments, and mixtures of Weibull...

  12. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Full Text Available The light emitting diode (LED) lamp has attracted increasing interest in the field of lighting systems due to its low energy consumption and long lifetime. For its different functions (i.e., illumination and color), it may have two or more performance characteristics. When the multiple performance characteristics are dependent, accurately analyzing the system reliability becomes a challenging problem. In this paper, we assume that the system has two performance characteristics, each governed by a random effects Gamma process, where the random effects capture unit-to-unit differences. The dependency of the performance characteristics is described by a Frank copula function. The reliability assessment model is built via the copula function. Since the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data is given to demonstrate the usefulness and validity of the proposed model and method.
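
    A sketch of the modeling ingredients: two dependent Gamma-process degradation paths coupled through a copula. For simplicity a Gaussian copula stands in for the paper's Frank copula, and all parameters are invented:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_steps, dt = 100, 1.0
    rho = 0.7                             # copula correlation (stand-in for Frank dependence)
    shape = np.array([0.05, 0.08]) * dt   # Gamma increment shapes per characteristic
    scale = np.array([1.0, 0.6])          # Gamma increment scales

    # correlated uniforms from a Gaussian copula
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_steps)
    u = stats.norm.cdf(z)

    # dependent Gamma-process increments for the two performance characteristics
    increments = stats.gamma.ppf(u, a=shape, scale=scale)
    paths = increments.cumsum(axis=0)

    # the lamp fails once either characteristic crosses its threshold
    thresholds = np.array([4.0, 4.0])
    fail_step = np.argmax(paths >= thresholds, axis=0)  # 0 means "never crossed"
    print("first-passage steps per characteristic:", fail_step)
    ```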

  13. Simultaneous parameter and tolerance optimization of structures via probability-interval mixed reliability model

    DEFF Research Database (Denmark)

    Luo, Yangjun; Wu, Xiaoxiang; Zhou, Mingdong

    2015-01-01

    Both structural sizes and dimensional tolerances strongly influence the manufacturing cost and the functional performance of a practical product. This paper presents an optimization method to simultaneously find the optimal combination of structural sizes and dimensional tolerances. Based on a probability-interval mixed reliability model, the imprecision of design parameters is modeled as interval uncertainties fluctuating within allowable tolerance bounds. The optimization model is defined as minimizing the total manufacturing cost under mixed reliability index constraints, which are further transformed into their equivalent formulations by using the performance measure approach. The optimization problem is then solved with sequential approximate programming. Meanwhile, a numerically stable algorithm based on the trust region method is proposed to efficiently update the target performance...

  14. Reliability and efficiency of generalized rumor spreading model on complex social networks

    CERN Document Server

    Naimi, Yaghoob

    2013-01-01

    We introduce a generalized rumor spreading model and investigate some of its properties on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader ($SS$) and the spreader-stifler ($SR$) interactions have the same rate $\alpha$, we define $\alpha^{(1)}$ and $\alpha^{(2)}$ for $SS$ and $SR$ interactions, respectively. The effect of varying $\alpha^{(1)}$ and $\alpha^{(2)}$ on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor.

  15. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    Institute of Scientific and Technical Information of China (English)

    Yaghoob Naimi; Mohammad Naimi

    2013-01-01

    We introduce a generalized rumor spreading model and investigate some of its properties on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for SS and SR interactions, respectively. The effect of variation of α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor.

  16. Trapezoidal Numerical Integration of Fire Radiative Power (FRP) Provides More Reliable Estimation of Fire Radiative Energy (FRE) and so Biomass Consumption Than Conventional Estimation Methods

    Science.gov (United States)

    Sathyachandran, S. K.; Roy, D. P.; Boschetti, L.

    2014-12-01

    The Fire Radiative Power (FRP) [MW] is a measure of the rate of biomass combustion and can be retrieved from ground-based and satellite observations using middle infrared measurements. The temporal integral of FRP is the Fire Radiative Energy (FRE) [MJ], which is related linearly to the total biomass consumption and thus to pyrogenic emissions. Satellite-derived biomass consumption and emissions estimates have conventionally been derived by computing the summed total FRP, or the average FRP (arithmetic average of FRP retrievals), over spatial geographic grids for fixed time periods. These two methods are prone to estimation bias, especially under irregular sampling conditions such as those provided by polar-orbiting satellites, because the FRP can vary rapidly in space and time as a function of fire behavior. Linear temporal integration of FRP, taking into account when the FRP values were observed and using the trapezoidal rule for numerical integration, has been suggested as an alternative FRE estimation method. In this study, FRP data measured rapidly with a dual-band radiometer over eight prescribed fires are used to compute eight FRE values using the sum, mean and trapezoidal estimation approaches under a variety of simulated irregular sampling conditions. The estimated values are compared to biomass consumption measurements for each of the eight fires to provide insight into which method provides more accurate and precise biomass consumption estimates. The three methods are also applied to continental MODIS FRP data to study their differences using polar-orbiting satellite data. The research findings indicate that trapezoidal FRP numerical integration provides the most reliable estimator.
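
    The comparison lends itself to a compact sketch; the FRP values and irregular sampling times below are invented (note MW × s = MJ):

    ```python
    import numpy as np

    # hypothetical FRP retrievals [MW] at irregular observation times [s]
    t = np.array([0.0, 600.0, 900.0, 2400.0, 3000.0])
    frp = np.array([12.0, 55.0, 80.0, 20.0, 5.0])

    # trapezoidal integration of FRP over time -> FRE [MJ]
    fre_trapz = np.sum(0.5 * (frp[1:] + frp[:-1]) * np.diff(t))
    # mean-FRP convention: average retrieval times the observation window
    fre_mean = frp.mean() * (t[-1] - t[0])
    # sum convention, here assuming a fixed nominal 600 s sampling interval
    fre_sum = frp.sum() * 600.0

    print(fre_trapz, fre_mean, fre_sum)
    ```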

  17. A competing risk model for the reliability of cylinder liners in marine Diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Bocchetti, D. [Grimaldi Group, Naples (Italy); Giorgio, M. [Department of Aerospace and Mechanical Engineering, Second University of Naples, Aversa (Italy); Guida, M. [Department of Information Engineering and Electrical Engineering, University of Salerno, Fisciano (Italy); Pulcini, G. [Istituto Motori, National Research Council-CNR, Naples (Italy)], E-mail: g.pulcini@im.cnr.it

    2009-08-15

    In this paper, a competing risk model is proposed to describe the reliability of the cylinder liners of a marine Diesel engine. Cylinder liners present two dominant failure modes: wear degradation and thermal cracking. The wear process is described through a stochastic process, whereas the failure time due to thermal cracking is described by the Weibull distribution. The proposed model allows goodness-of-fit testing and parameter estimation on the basis of both wear and failure data. Moreover, it enables reliability estimates of the state of the liners to be obtained, and the hierarchy of the failure mechanisms to be determined, for any given age and wear level of the liner. The model has been applied to a real data set: 33 cylinder liners of Sulzer RTA 58 engines, which equip twin ships of the Grimaldi Group. Estimates of the liner reliability and of other quantities of interest under the competing risk model are obtained, as well as the conditional failure probability and mean residual lifetime given the survival age and the accumulated wear. Furthermore, the model has been used to estimate the probability that a liner fails due to each of the failure modes when both modes act.
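
    Assuming independent competing risks, liner reliability is the product of the two modes' survival probabilities; a minimal sketch with an invented Gamma-process wear model and invented Weibull cracking parameters:

    ```python
    import numpy as np
    from scipy import stats

    def liner_reliability(t_hours, wear_limit_mm=4.0):
        """R(t) = P(wear below limit) * P(no thermal crack); all parameters invented."""
        # wear as a Gamma process: the shape parameter grows linearly with time
        wear_ok = stats.gamma.cdf(wear_limit_mm, a=1e-4 * t_hours, scale=2.0)
        # thermal-cracking time as Weibull(beta=1.8, eta=60000 h)
        no_crack = np.exp(-(t_hours / 60000.0) ** 1.8)
        return wear_ok * no_crack

    for t in (10000, 30000, 50000):
        print(t, round(liner_reliability(t), 4))
    ```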

  18. Bayesian Reliability Modeling and Assessment Solution for NC Machine Tools under Small-sample Data

    Institute of Scientific and Technical Information of China (English)

    YANG Zhaojun; KAN Yingnan; CHEN Fei; XU Binbin; CHEN Chuanhai; YANG Chuangui

    2015-01-01

    Although Markov chain Monte Carlo (MCMC) algorithms are accurate, many factors may cause instability when they are utilized in reliability analysis; such instability makes these algorithms unsuitable for widespread engineering applications. Thus, a reliability modeling and assessment solution aimed at small-sample data of numerical control (NC) machine tools is proposed on the basis of Bayes theory. An expert-judgment process that fuses multi-source prior information is developed to obtain the prior distributions of the Weibull parameters and to reduce the subjective bias of usual expert-judgment methods. The grid approximation method is applied to the two-parameter Weibull distribution to derive the formulas for the parameters' posterior distributions and to overcome the computational difficulty of high-dimensional integration. The method is then applied to real data from a type of NC machine tool to implement a reliability assessment and obtain the mean time between failures (MTBF). The relative error of the proposed method is 5.8020×10^-4 compared with the MTBF obtained by the MCMC algorithm. This result indicates that the proposed method is as accurate as MCMC. The newly developed solution for reliability modeling and assessment of NC machine tools under small-sample data is easy, practical, and highly suitable for widespread application in the engineering field; in addition, the solution does not reduce accuracy.
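
    The grid-approximation step can be sketched in a few lines: discretize the (shape, scale) plane, accumulate log-prior and log-likelihood, normalize, and read off posterior summaries such as MTBF = η·Γ(1 + 1/β). The failure data and priors below are invented stand-ins for the paper's expert-informed priors:

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import gamma as gamma_fn

    failures = np.array([120.0, 340.0, 560.0, 610.0, 900.0])  # invented TBF data [h]

    beta = np.linspace(0.5, 3.0, 200)      # Weibull shape grid
    eta = np.linspace(100.0, 2000.0, 200)  # Weibull scale grid
    B, E = np.meshgrid(beta, eta, indexing="ij")

    # log-likelihood of the Weibull sample, evaluated on the whole grid
    loglik = sum(stats.weibull_min.logpdf(t, c=B, scale=E) for t in failures)
    # vague lognormal priors stand in for expert-informed ones
    logpost = loglik + stats.lognorm.logpdf(B, 0.5, scale=1.5) \
                     + stats.lognorm.logpdf(E, 1.0, scale=500.0)

    post = np.exp(logpost - logpost.max())
    post /= post.sum()

    mtbf = (E * gamma_fn(1.0 + 1.0 / B) * post).sum()  # posterior-mean MTBF
    print(round(mtbf, 1), "hours")
    ```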

  19. Reliability estimation and remaining useful lifetime prediction for bearing based on proportional hazard model

    Institute of Scientific and Technical Information of China (English)

    王鹭; 张利; 王学芝

    2015-01-01

    Since bearings are the central components of rotating machinery, their performance reliability assessment and remaining useful lifetime prediction are of crucial importance in condition-based maintenance to reduce maintenance cost and improve reliability. A prognostic algorithm to assess the reliability and forecast the remaining useful lifetime (RUL) of bearings is proposed, consisting of three phases. Online vibration and temperature signals of bearings in the normal state were measured during the manufacturing process, and the most useful time-dependent features of the vibration signals were extracted based on correlation analysis (feature selection step). Time series analysis based on a neural network, used as an identification model, predicts the features of the bearing vibration signals at any horizon (feature prediction step). Furthermore, a degradation factor is defined according to the features. A proportional hazard model is generated to estimate the survival function and forecast the RUL of the bearing (RUL prediction step). The results show the plausibility and effectiveness of the proposed approach, which can facilitate bearing reliability estimation and RUL prediction.
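
    A compact sketch of the RUL-prediction ingredient: a proportional hazard model with an assumed Weibull baseline hazard, where the covariates play the role of the extracted degradation features (all parameters invented):

    ```python
    import numpy as np

    def phm_survival(t, x, beta, h0=1e-4, p=2.0):
        """Proportional hazard model with a Weibull baseline hazard:
        S(t | x) = exp(-H0(t) * exp(beta . x)); parameters are invented."""
        H0 = h0 * t ** p  # cumulative baseline hazard
        return np.exp(-H0 * np.exp(np.dot(beta, x)))

    beta = np.array([0.8, 0.5])  # hypothetical covariate weights
    x = np.array([1.2, 0.7])     # degradation features at the current time
    for t in (50.0, 100.0, 200.0):
        print(t, round(phm_survival(t, x, beta), 4))
    ```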

  20. Modeling the City Distribution System Reliability with Bayesian Networks to Identify Influence Factors

    Directory of Open Access Journals (Sweden)

    Hao Zhang

    2016-01-01

    Full Text Available In an increasingly uncertain economic environment, research on the reliability of urban distribution systems has great practical significance for the integration of logistics and supply chain resources. This paper summarizes the factors that affect city logistics distribution systems. Starting from the factors that influence the reliability of city distribution systems, a model of the influences on city distribution system reliability is built based on Bayesian networks. The complexity of the problem is reduced by using sub-Bayesian networks, and an example is analyzed. In the calculation process, we combined the traditional Bayesian algorithm and the Expectation Maximization (EM) algorithm, which gave the Bayesian model a more accurate foundation. The results show that the Bayesian network can accurately reflect the dynamic relationships among the factors affecting the reliability of urban distribution systems. Moreover, by changing the prior probability of the cause nodes, the degree of correlation between the variables that affect successful distribution can be calculated. The results have practical significance for improving the quality of distribution, the level of distribution, and the efficiency of enterprises.

  1. Developing Research Agendas on Whole School Improvement Models: The Model Providers' Perspective

    Science.gov (United States)

    Shambaugh, Larisa; Graczewski, Cheryl; Therriault, Susan Bowles; Darwin, Marlene J.

    2007-01-01

    The current education policy environment places a heavy emphasis on scientifically based research. This article examines how whole school improvement models approach the development of a research agenda, including what influences and challenges model providers face in implementing their agenda. Responses also detail the advantages and…

  2. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    Full Text Available The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV) systems contain solar cell panels, power electronic converters, high-power switching devices and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These designs are commonly adopted based on the scale of the PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes differ in terms of how the inverter is connected to the PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV-integrated power system in order to assess the reliability and energy contribution of the solar system to meeting overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and the capacity level of a PV system considering the three topologies.

  3. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    Energy Technology Data Exchange (ETDEWEB)

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr. (,; .); Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance, reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in "real systems", including: (1) developing techniques for determining the materials properties and constitutive behavior required for modeling; (2) developing new and improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify the advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  4. Reliability of lumped hydrological modeling in a semi-arid mountainous catchment facing water-use changes

    Science.gov (United States)

    Hublart, Paul; Ruelland, Denis; García de Cortázar-Atauri, Inaki; Gascoin, Simon; Lhermitte, Stef; Ibacache, Antonio

    2016-09-01

    This paper explores the reliability of a hydrological modeling framework in a mesoscale (1515 km²) catchment of the dry Andes (30° S), where irrigation water use and snow sublimation represent a significant part of the annual water balance. To this end, a 20-year simulation period encompassing a wide range of climate and water-use conditions was selected to evaluate three types of integrated models, referred to as A, B and C. These models share the same runoff generation and routing module but differ in their approach to snowmelt modeling and irrigation water use. Model A relies on a simple degree-day approach to estimate snowmelt rates and assumes that irrigation impacts can be neglected at the catchment scale. Model B likewise ignores irrigation impacts but uses an enhanced degree-day approach to account for the effects of net radiation and sublimation on melt rates. Model C relies on the same snowmelt routine as Model B but incorporates irrigation impacts on natural streamflow using a conceptual irrigation module. Overall, the reliability of probabilistic streamflow predictions was greatly improved with Model C, resulting in narrow uncertainty bands and reduced structural errors, notably during dry years. This model-based analysis also stressed the importance of considering sublimation in empirical snowmelt models used in the subtropics, and provided evidence that water abstractions from the unregulated river are affecting the hydrological response of the system. This work also highlighted areas requiring additional research, including the need for a better conceptualization of runoff generation processes in the dry Andes.
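
    For context, the degree-day rule that Model A relies on is a one-liner, and Model B's enhancement adds a radiation-dependent term; a minimal sketch with invented coefficients:

    ```python
    import numpy as np

    def melt_degree_day(temp_c, ddf=4.0, t_base=0.0):
        """Model A flavour: melt [mm/day] from a degree-day factor [mm/degC/day]."""
        return np.maximum(0.0, ddf * (temp_c - t_base))

    def melt_enhanced(temp_c, net_radiation, ddf=2.5, rad_factor=0.1, t_base=0.0):
        """Model B flavour: degree-day term plus a net-radiation term [W/m2]."""
        return np.maximum(0.0, ddf * (temp_c - t_base) + rad_factor * net_radiation)

    temps = np.array([-3.0, 1.0, 5.0])  # daily mean air temperature [degC]
    rad = np.array([50.0, 120.0, 200.0])  # daily mean net radiation [W/m2]
    print(melt_degree_day(temps))
    print(melt_enhanced(temps, rad))
    ```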

  5. Modeling and simulation for microelectronic packaging assembly manufacturing, reliability and testing

    CERN Document Server

    Liu, Sheng

    2011-01-01

    Although there is an increasing need for modeling and simulation in the IC package design phase, most assembly processes and various reliability tests are still based on the time-consuming "test and try out" method to obtain the best solution. Modeling and simulation can easily enable virtual Design of Experiments (DoE) to achieve the optimal solution. This has greatly reduced cost and production time, especially for new product development. Using modeling and simulation will become increasingly necessary for future advances in 3D package development. In this book, Liu and Liu allow people...

  6. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2003-04-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the 3-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and on prototypical model development. During the current year, the verification calculations submitted for the APR 1400 design certification were reviewed; the experimental data from the MIDAS DVI experiment facility at KAERI were analyzed and evaluated; candidate thermal hydraulic models for improvement were identified; prototypical versions of the improved thermal hydraulic models were developed; items for experiments in connection with the model development were identified; and a preliminary design of the experiment was carried out.

  7. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    Science.gov (United States)

    2014-04-01

    In the reliability-based design optimization (RBDO) process, surrogate models are frequently used to reduce the number of simulations because analysis of a... the dimension of the RBDO problem and thus mitigate the curse of dimensionality. Therefore, it is desirable to develop an efficient and effective variable screening method for reduction of the dimension of the RBDO problem. In this paper, requirements of the variable screening method for deterministic design...

  8. A model for reliability analysis and calculation applied in an example from chemical industry

    Directory of Open Access Journals (Sweden)

    Pejović Branko B.

    2010-01-01

    Full Text Available The subject of the paper is reliability design in polymerization processes that occur in reactors of the chemical industry. The designed model is used to determine the characteristics and indicators of reliability, which enabled the determination of the basic factors that result in poor process performance. This would reduce the anticipated losses through the ability to control them, as well as improve the quality of production, which is the major goal of the paper. The reliability analysis and calculation use a deductive method based on a fault tree analysis scheme of the system, built on inductive conclusions. It involves the use of standard logical symbols and the rules of Boolean algebra and mathematical logic. The paper finally gives the results of the work in the form of a quantitative and qualitative reliability analysis of the observed process, which served to obtain complete information on the probability of the top event in the process, as well as to support objective decision making and alternative solutions.
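
    The Boolean-algebra step reduces a fault tree to minimal cut sets, after which the top-event probability follows; a sketch using both the rare-event approximation and exact inclusion-exclusion for independent basic events, with invented probabilities and cut sets:

    ```python
    from itertools import combinations

    # invented basic-event probabilities for a polymerization reactor fault tree
    p = {"pump_fail": 0.01, "valve_stuck": 0.02, "sensor_fail": 0.05, "op_error": 0.03}

    # minimal cut sets as they would come out of Boolean reduction (illustrative)
    cut_sets = [{"pump_fail", "valve_stuck"}, {"sensor_fail", "op_error"}]

    def cut_prob(events):
        """Probability that all (independent) events in a set occur."""
        prob = 1.0
        for e in events:
            prob *= p[e]
        return prob

    # rare-event approximation: sum of cut-set probabilities
    p_top_approx = sum(cut_prob(cs) for cs in cut_sets)

    # exact top-event probability via inclusion-exclusion over the cut sets
    p_top_exact = 0.0
    for k in range(1, len(cut_sets) + 1):
        for group in combinations(cut_sets, k):
            p_top_exact += (-1) ** (k + 1) * cut_prob(set().union(*group))

    print(p_top_approx, p_top_exact)
    ```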

  9. COMPETENCY ASSESSMENT OF CLOTHING FASHION DESIGN: RASCH MEASUREMENT MODEL FOR CONSTRUCT VALIDITY AND RELIABILITY

    Directory of Open Access Journals (Sweden)

    Arasinah Kamis

    2013-12-01

    Full Text Available The Clothing Fashion Design (CFaD) assessment instrument was used to measure the level of competence among instructors in the Skills Training Institute (STI). This study was conducted to select items that are valid, fair, and of quality. The CFaD instrument consists of 97 Likert scale items with six constructs: designing, pattern drafting, computer, sewing, creative, and trade/entrepreneurship. The instrument was administered, for the first stage of testing, to 95 instructors in the STI who teach in the field of fashion and clothing. The Rasch measurement model was used to obtain the reliability, validity, person-item relevance and unidimensionality of the items; Winsteps software version 3.72.3 was used to analyze the data. The findings showed that the items in the six skill competency constructs have reliabilities from 0.63 to 0.96 for the Likert scale items. Meanwhile, the reliability of the respondents was estimated at between 0.93 and 0.98. The analysis also indicates that 11 of the 97 items were misfits, while 32 items needed to be repaired before deciding whether to drop some of them, due to lack of unidimensionality and differing levels of difficulty. Decisions to remove or repair items were made so that the instrument is fairer and more equitable to all respondents, and reliable.

  10. Variability in faecal egg counts – a statistical model to achieve reliable determination of anthelmintic resistance in livestock

    DEFF Research Database (Denmark)

    Nielsen, Martin Krarup; Vidyashankar, Anand N.; Hanlon, Bret;

    ...A statistical model was therefore developed for analysis of FECRT data from multiple farms. Horse age, gender, zip code and pre-treatment egg count were incorporated into the model. Horses and farms were kept as random effects. Resistance classifications were based on model-based 95% lower confidence limit (LCL) values of predicted mean efficacies, and cutoff values were justified statistically. The model was used to evaluate the efficacy of pyrantel embonate paste on 64 Danish horse farms. Of 1644 horses, 614 had egg counts > 200 eggs per gram (EPG) and were treated. The cutoff LCL values used for classifying... arithmetic calculations classified nine farms (14.1%) as resistant and 11 farms (17.2%) as suspect resistant. Using 10000 Monte Carlo simulated data sets, our methodology provides a reliable classification of farms into different resistance categories with a false discovery rate of 1.02%. The methodology...

  11. The Application of the Model Correction Factor Method to a Reliability Analysis of a Composite Blade Structure

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimiroy; Friis-Hansen, Peter; Berggreen, Christian

    2009-01-01

    This paper presents a reliability analysis of a composite blade profile. The so-called Model Correction Factor technique is applied as an effective alternative to the response surface technique. The structural reliability is determined by use of a simplified idealised analytical model which...

  12. Immunization of stromal cell targeting fibroblast activation protein providing immunotherapy to breast cancer mouse model.

    Science.gov (United States)

    Meng, Mingyao; Wang, Wenju; Yan, Jun; Tan, Jing; Liao, Liwei; Shi, Jianlin; Wei, Chuanyu; Xie, Yanhua; Jin, Xingfang; Yang, Li; Jin, Qing; Zhu, Huirong; Tan, Weiwei; Yang, Fang; Hou, Zongliu

    2016-08-01

    Unlike heterogeneous tumor cells, cancer-associated fibroblasts (CAF) are genetically more stable, which makes them a reliable target for tumor immunotherapy. Fibroblast activation protein (FAP), which is restrictively expressed in tumor cells and CAF in vivo and plays a prominent role in tumor initiation, progression, and metastasis, can function as a tumor rejection antigen. In the current study, we constructed artificial FAP(+) stromal cells which mimicked the FAP(+) CAF in vivo. We immunized a breast cancer mouse model with FAP(+) stromal cells to perform immunotherapy against FAP(+) cells in the tumor microenvironment. By forced expression of FAP, we obtained FAP(+) stromal cells whose phenotype was CD11b(+)/CD34(+)/Sca-1(+)/FSP-1(+)/MHC class I(+). Interestingly, the proliferation capacity of the fibroblasts was significantly enhanced by FAP. In the breast cancer-bearing mouse model, vaccination with FAP(+) stromal cells significantly inhibited the growth of allograft tumors and reduced lung metastasis. T cell depletion assays suggested that both CD4(+) and CD8(+) T cells were involved in the cytotoxic antitumor immune response. Furthermore, tumor tissue from FAP-immunized mice revealed that targeting FAP(+) CAF induced apoptosis and decreased collagen type I and CD31 expression in the tumor microenvironment. These results implicate that immunization with FAP(+) stromal cells led to disruption of the tumor microenvironment. Our study may provide a novel strategy for immunotherapy of a broad range of cancers.

  13. Influence of model specifications on the reliabilities of genomic prediction in a Swedish-Finnish red breed cattle population

    DEFF Research Database (Denmark)

    Rius-Vilarrasa, E; Strandberg, E; Fikse, W F

    2012-01-01

    Using a combined multi-breed reference population, this study explored the influence of model specification and the effect of including a polygenic effect on the reliability of genomic breeding values (DGV and GEBV). The combined reference population consisted of 2986 Swedish Red Breed (SRB)... effects. The influence of the inclusion of a polygenic effect on the reliability of DGV varied across traits and model specifications. The average correlation of DGV with the Mendelian sampling term, across traits, was highest (R = 0.25) for the GBLUP model and decreased with an increasing proportion of markers with large effects. Reliabilities increased when DGV and parent average information were combined in an index. The GBLUP model, with the largest gain across traits in the reliability of the index, achieved the highest DGV mean reliability. However, the polygenic models proved to be less biased...

  14. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems, as well as reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be...

  15. Evaluation of Fatigue Life Reliability of Steering Knuckle Using Pearson Parametric Distribution Model

    Directory of Open Access Journals (Sweden)

    E. A. Azrulhisham

    2010-01-01

    Full Text Available The steering module is a part of the automotive suspension system which provides a means for accurate vehicle placement and stability control. Components such as the steering knuckle are subjected to fatigue failures due to cyclic loads arising from various driving conditions. This paper describes a method used in the fatigue life reliability evaluation of the knuckle in a passenger car steering system. An accurate representation of Belgian pavé service loads, in the form of a response-time history signal, was obtained from an accredited test track using road load data acquisition. The acquired service load data were replicated on a durability test rig, and the S-N method was used to estimate the fatigue life. A Pearson system was developed to evaluate the predicted fatigue life reliability by considering the variations in material properties. Considering the random loads experienced by the steering knuckle, it is found that the shortest life appears in the vertical load direction, with the lowest fatigue life reliability between 14000 and 16000 cycles. Taking into account the inconsistency of the material properties, the proposed method is capable of providing the probability of failure of mass-produced parts.

  16. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    Science.gov (United States)

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; Thiele, Ines; Palsson, Bernhard O.; Saunders, Michael A.

    2017-01-01

    Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.

  17. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Automic Energy Research Institute, Taejon (Korea, Republic of)

    2004-02-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the 3-year project, and the main research focused on the development of a downcomer boiling model. During the current year, the bubble stream model of the downcomer was developed and installed in the auditing code. A model sensitivity analysis was performed for the APR1400 LBLOCA scenario using the modified code. A preliminary calculation was performed for the experimental test facility using the FLUENT and MARS codes. The facility for the air bubble experiment was installed. The thermal hydraulic phenomena for the VHTR and the supercritical reactor were identified for future application and model development.

  18. Reliability-Based Design of Wind Turbine Foundations – Computational Modelling

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammad Javad

    ...of fossil fuels causing pollution, environmental degradation, and climate change, and finally mixed messages regarding declining domestic and foreign oil reserves. Therefore, the wind power industry is becoming a key player as a green energy producer in many developed countries. However, consumers demand increased cost-effectiveness in wind turbines, and an optimized design must be implemented for the expensive structural components. The traditional wind turbine foundation typically consumes 25-30% of the total wind turbine budget, making it one of the most costly fabrication components. Therefore, reducing foundation cost by optimizing the foundation structural design is the best route to cost-effectiveness. An optimized wind turbine foundation design should provide a suitable target reliability level. Unfortunately, the reliability level is not identified in most current deterministic design...

  19. Modeling and Implementation of Reliable Ternary Arithmetic and Logic Unit Design Using Vhdl

    Directory of Open Access Journals (Sweden)

    Meruva Kumar Raja

    2014-06-01

    Full Text Available Multivalued logic is a reliable method for defining, analyzing, testing and implementing basic combinational circuitry with a VHDL simulator. It offers better utilization of transmission channels because of its higher speed and higher information content, and it gives more efficient performance. One of the main merits of MVL (ternary) logic is that it reduces the number of required computation steps and brings simplicity and energy efficiency to digital logic design. In this paper a reliable method is brought out for implementing basic combinational, sequential and TALU (Ternary Arithmetic and Logic Unit) circuitry with a minimum number of ternary switching circuits (multiplexers). The paper shows the potential of VHDL modelling and simulation applied to ternary switching circuits to verify their functionality and timing specifications. The intention is to show how the proposed simulator can be used to simulate MVL circuits and to evaluate system performance.

  20. Phd study of reliability and validity: One step closer to a standardized music therapy assessment model

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl

    The paper will present a PhD study concerning the reliability and validity of the music therapy assessment model “Assessment of Parenting Competences” (APC) in the area of families with emotionally neglected children. The study had a multiple-strategy design with a philosophical base of critical realism and pragmatism. The fixed design was a between- and within-groups design testing the APC's reliability and validity. The two groups were parents of neglected children and parents of non-neglected children. The flexible design had a multiple case study strategy specifically... of the theoretical understanding of the client group. Furthermore, a short description of the specific assessment protocol and analysis procedures of the APC will be part of the presentation. The PhD study sought to explore how to develop measures of parenting competences by looking at autonomy relationship...

  1. The reliability of sensitive information provided by injecting drug users in a clinical setting: clinician-administered versus audio computer-assisted self-interviewing (ACASI).

    Science.gov (United States)

    Islam, M Mofizul; Topp, Libby; Conigrave, Katherine M; van Beek, Ingrid; Maher, Lisa; White, Ann; Rodgers, Craig; Day, Carolyn A

    2012-01-01

    Research with injecting drug users (IDUs) suggests greater willingness to report sensitive and stigmatised behaviour via audio computer-assisted self-interviewing (ACASI) methods than during face-to-face interviews (FFIs); however, previous studies were limited in verifying this within the same individuals at the same time point. This study examines the relative willingness of IDUs to report sensitive information via ACASI and during a face-to-face clinical assessment administered in health services for IDUs. During recruitment for a randomised controlled trial undertaken at two IDU-targeted health services, assessments were undertaken as per clinical protocols, followed by referral of eligible clients to the trial, in which baseline self-report data were collected via ACASI. Five questions about sensitive injecting and sexual risk behaviours were administered to participants during both clinical interviews and baseline research data collection. "Percentage agreement" determined the magnitude of concordance/discordance in responses across interview methods, while tests appropriate to data format assessed the statistical significance of this variation. Results for all five variables suggest that, relative to ACASI, FFI elicited responses that may be perceived as more socially desirable. Discordance was statistically significant for four of the five variables examined. Participants who reported a history of sex work were more likely to provide discordant responses to at least one socially sensitive item. In health services for IDUs, information collection via ACASI may elicit more reliable and valid responses than FFI. Adoption of a universal precautionary approach to complement individually tailored assessment of and advice regarding health risk behaviours for IDUs may address this issue.

  2. Reliability evaluation of auxiliary feedwater system by mapping GO-FLOW models into Bayesian networks.

    Science.gov (United States)

    Liu, Zengkai; Liu, Yonghong; Wu, Xinlei; Yang, Dongwei; Cai, Baoping; Zheng, Chao

    2016-09-01

    Bayesian network (BN) is a widely used formalism for representing uncertainty in probabilistic systems, and it has become a popular tool in reliability engineering. The GO-FLOW method is a success-oriented system analysis technique capable of evaluating system reliability and risk. To overcome the limitations of the GO-FLOW method and to add a new route to BN model development, this paper presents a novel approach for constructing a BN from a GO-FLOW model. A GO-FLOW model involves several discrete time points, and some signals change at different time points; at any single time point, however, it is a static system, which can be described with a BN. Therefore, the BN developed with the proposed method is equivalent to the GO-FLOW model at one time point. Equivalent BNs are developed for the fourteen basic operators of the GO-FLOW methodology, and existing GO-FLOW models can then be mapped into equivalent BNs on the basis of these operator BNs. A case study of the auxiliary feedwater system of a pressurized water reactor is used to illustrate the method. The results demonstrate that the GO-FLOW chart can be successfully mapped into an equivalent BN.
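
    As a hedged illustration of the operator-to-BN mapping, the sketch below enumerates the conditional probability table (CPT) of a child node whose output operates only when both parents do, and marginalises over the parent states. The operator choice and parent probabilities are assumptions for illustration, not the paper's fourteen GO-FLOW operators.

```python
# Hedged sketch: mapping a simple success-oriented operator to an
# equivalent Bayesian-network node by explicit CPT enumeration. The
# operator here (output operates only if both inputs operate) and the
# parent probabilities are illustrative assumptions.
from itertools import product

# Marginal probabilities of the parent nodes being in state "operating".
p_parent = {"input_signal": 0.98, "support_system": 0.95}  # assumed values

def cpt_and(states):
    """CPT of the child node: P(output=operating | parent states)."""
    return 1.0 if all(states.values()) else 0.0

# Marginalise over all parent-state combinations to get P(output=operating).
p_output = 0.0
for combo in product([True, False], repeat=len(p_parent)):
    states = dict(zip(p_parent, combo))
    weight = 1.0
    for name, up in states.items():
        weight *= p_parent[name] if up else 1.0 - p_parent[name]
    p_output += weight * cpt_and(states)

print(f"P(output operating) = {p_output:.4f}")  # 0.98 * 0.95 = 0.9310
```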

  3. Contemporary Treatment of Reliability and Validity in Educational Assessment

    Science.gov (United States)

    Dimitrov, Dimiter M.

    2010-01-01

    The focus of this presidential address is on the contemporary treatment of reliability and validity in educational assessment. Highlights on reliability are provided under the classical true-score model using tools from latent trait modeling to clarify important assumptions and procedures for reliability estimation. In addition to reliability,…

  5. An enhanced reliability-oriented workforce planning model for process industry using combined fuzzy goal programming and differential evolution approach

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2017-08-01

    This paper draws on the "human reliability" concept as a structure for gaining insight into maintenance workforce assessment in a process industry. Human reliability hinges on developing the reliability of humans to a threshold that guides the maintenance workforce to execute accurate decisions within the limits of resources and time allocations. This concept offers a worthwhile point of departure and encompasses three adjustments to the literature model, in terms of maintenance time, workforce performance and return-on-workforce investments. The presented structure breaks new ground in maintenance workforce theory and practice from a number of perspectives. First, we have successfully implemented fuzzy goal programming (FGP) and differential evolution (DE) techniques for the solution of an optimisation problem in the maintenance of a process plant for the first time. The results obtained in this work showed better solution quality from the DE algorithm compared with the genetic algorithm and the particle swarm optimisation algorithm, demonstrating the superiority of the proposed procedure. Second, the analytical treatment, framed on stochastic theory and focused on a specific application to a process plant in Nigeria, is a novelty. The work provides more insight into maintenance workforce planning during overhaul rework and overtime maintenance activities in manufacturing systems, and demonstrates the capacity to generate substantially helpful information for practice.
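
    A minimal sketch of the FGP-plus-DE combination, assuming invented goals, membership functions and a toy workforce-cost model (none of these come from the paper): each goal gets a linear satisfaction degree, and differential evolution maximises the least-satisfied goal.

```python
# Hedged sketch: a fuzzy-goal-programming objective solved with
# differential evolution, in the spirit of the paper's FGP + DE
# combination. Goals, membership functions and bounds are invented.
import numpy as np
from scipy.optimize import differential_evolution

def membership(value, worst, best):
    """Linear satisfaction degree of a goal: 0 at 'worst', 1 at 'best'."""
    return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

def negative_min_satisfaction(x):
    technicians, overtime_hours = x
    reliability = 1.0 - np.exp(-0.15 * technicians)      # toy workforce-reliability link
    cost = 400.0 * technicians + 25.0 * overtime_hours   # toy cost model
    mu_rel = membership(reliability, worst=0.80, best=0.99)
    mu_cost = membership(-cost, worst=-12000.0, best=-6000.0)
    return -min(mu_rel, mu_cost)  # maximise the least-satisfied goal

result = differential_evolution(negative_min_satisfaction,
                                bounds=[(5, 30), (0, 120)], seed=1)
print(result.x, -result.fun)
```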

  6. Demands placed on waste package performance testing and modeling by some general results on reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chesnut, D.A.

    1991-09-01

    Waste packages for a US nuclear waste repository are required to provide reasonable assurance of maintaining substantially complete containment of radionuclides for 300 to 1000 years after closure. The waiting time to failure for complex failure processes affecting engineered or manufactured systems is often found to be an exponentially-distributed random variable. Assuming that this simple distribution can be used to describe the behavior of a hypothetical single barrier waste package, calculations presented in this paper show that the mean time to failure (the only parameter needed to completely specify an exponential distribution) would have to be more than 10^7 years in order to provide reasonable assurance of meeting this requirement. With two independent barriers, each would need to have a mean time to failure of only 10^5 years to provide the same reliability. Other examples illustrate how multiple barriers can provide a strategy for not only achieving but demonstrating regulatory compliance.
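
    The abstract's figures follow from the exponential survival function R(t) = exp(-t/MTTF). A short numerical check, assuming a 0.9999 reliability target over 1000 years (the abstract only says "reasonable assurance", so the exact target is an assumption):

```python
# Hedged numerical check of the abstract's figures: exponential failure
# times, a 1000-year containment requirement, and an assumed 0.9999
# target reliability.
import math

t, target = 1000.0, 0.9999

# Single barrier: R(t) = exp(-t/MTTF)  =>  MTTF = -t / ln(R)
mttf_single = -t / math.log(target)
print(f"single barrier MTTF ~ {mttf_single:.3e} years")   # ~1.0e7

# Two independent barriers: containment lost only if both fail,
# (1 - exp(-t/m))^2 = 1 - R  =>  m = -t / ln(1 - sqrt(1 - R))
mttf_double = -t / math.log(1.0 - math.sqrt(1.0 - target))
print(f"each of two barriers MTTF ~ {mttf_double:.3e} years")  # ~1.0e5
```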

  7. Reliability of a Novel Model for Drug Release from 2D HPMC-Matrices

    Directory of Open Access Journals (Sweden)

    Rumiana Blagoeva

    2010-04-01

    Full Text Available A novel model of drug release from 2D HPMC matrices is considered. A detailed mathematical description of matrix swelling and the effect of the initial drug loading are introduced. A numerical approach to the solution of the posed nonlinear 2D problem is used, based on finite element domain approximation and a time difference method. The reliability of the model is investigated in two steps: numerical evaluation of the water uptake parameters, and evaluation of the drug release parameters against available experimental data. The proposed numerical procedure for fitting the model is validated by performing different numerical examples of drug release in two cases (with and without taking into account the initial drug loading). The goodness of fit, evaluated by the coefficient of determination, is very good with few exceptions. The obtained results show better model fitting when the effect of initial drug loading is accounted for (especially for larger values).

  8. The role of reliability graph models in assuring dependable operation of complex hardware/software systems

    Science.gov (United States)

    Patterson-Hine, F. A.; Davis, Gloria J.; Pedar, A.

    1991-01-01

    The complexity of computer systems currently being designed for critical applications in the scientific, commercial, and military arenas requires the development of new techniques for utilizing models of system behavior in order to assure 'ultra-dependability'. The complexity of these systems, such as Space Station Freedom and the Air Traffic Control System, stems from their highly integrated designs containing both hardware and software as critical components. Reliability graph models, such as fault trees and digraphs, are used frequently to model hardware systems. Their applicability for software systems has also been demonstrated for software safety analysis and the analysis of software fault tolerance. This paper discusses further uses of graph models in the design and implementation of fault management systems for safety critical applications.

  9. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-12-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  10. Discrete Software Reliability Growth Modeling for Errors of Different Severity Incorporating Change-point Concept

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Several software reliability growth models (SRGMs) have been developed to monitor reliability growth during the testing phase of software development. In most of the existing research available in the literature, it is assumed that a similar testing effort is required for each debugging effort. However, in practice, different types of faults may require different amounts of testing effort for their detection and removal. Consequently, faults are classified into three categories on the basis of severity: simple, hard and complex. This categorization may be extended to r types of faults on the basis of severity. Although some existing research in the literature has incorporated the concept that the fault removal rate (FRR) differs across types of faults, it assumes that the FRR remains constant during the overall testing period. On the contrary, it has been observed that as testing progresses, the FRR changes due to changing testing strategy, skill, environment and personnel resources. In this paper, a general discrete SRGM is proposed for errors of different severity in software systems using the change-point concept. The models are then formulated for two particular environments and validated on two real-life data sets. The results show better fit and wider applicability of the proposed models across different types of failure data sets.

  11. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species.

    Science.gov (United States)

    Aubry, Keith B; Raley, Catherine M; McKelvey, Kevin S

    2017-01-01

    The availability of spatially referenced environmental data and species occurrence records in online databases enable practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated for rare, elusive, and cryptic species that are prone to misidentification in the field. We investigated this question for the fisher (Pekania pennanti), a forest carnivore of conservation concern in the Pacific States that is often confused with the more common Pacific marten (Martes caurina). Fisher occurrence records supported by physical evidence (verifiable records) were available from a limited area, whereas occurrence records of unknown quality (unscreened records) were available from throughout the fisher's historical range. We reserved 20% of the verifiable records to use as a test sample for both models and generated SDMs with each dataset using Maxent. The verifiable model performed substantially better than the unscreened model based on multiple metrics including AUCtest values (0.78 and 0.62, respectively), evaluation of training and test gains, and statistical tests of how well each model predicted test localities. In addition, the verifiable model was consistent with our knowledge of the fisher's habitat relations and potential distribution, whereas the unscreened model indicated a much broader area of high-quality habitat (indices > 0.5) that included large expanses of high-elevation habitat that fishers do not occupy. Because Pacific martens remain relatively common in upper elevation habitats in the Cascade Range and Sierra Nevada, the SDM based on unscreened records likely reflects primarily a conflation of marten and fisher habitat. Consequently, accurate identifications are far more important than the spatial extent of occurrence records for generating reliable SDMs for the

  12. Measuring Validity And Reliability of Perception of Online Collaborative Learning Questionnaire Using Rasch Model

    Directory of Open Access Journals (Sweden)

    Sharifah Nadiyah Razali

    2016-12-01

    Full Text Available This study aims to generate empirical evidence on the validity and reliability of the Perception of Online Collaborative Learning Questionnaire (POCLQ) using the Rasch model. The questionnaire was distributed to 32 (N=32) Diploma in Hotel Catering students from Politeknik Ibrahim Sultan, Johor (PIS). The data obtained were analysed using WINSTEP version 3.68 software. The findings showed that the POCLQ had high reliability across five categories of item difficulty, so it can be concluded that the POCLQ is reliable and strongly acceptable. Meanwhile, the item fit analysis showed six items outside the specified range and, based on the standardised residual correlation values, five overlapping items that were candidates for removal. Guided by the analysis results and expert views, the flagged items were refined and retained for the purposes of the study; therefore, all items remained after the Rasch analysis. It is hoped that this study will emphasise to other researchers the importance of item analysis in ensuring the quality of an instrument being developed.

  13. Reliability-based congestion pricing model under endogenous equilibrated market penetration and compliance rate of ATIS

    Institute of Scientific and Technical Information of China (English)

    钟绍鹏; 邓卫

    2015-01-01

    A reliability-based stochastic system optimum congestion pricing (SSOCP) model with endogenous market penetration and compliance rate in an advanced traveler information systems (ATIS) environment was proposed. All travelers were divided into two classes. The first, guided travelers, were the equipped travelers who follow ATIS advice; the second, unguided travelers, were the unequipped travelers together with the equipped travelers who do not follow ATIS advice (also referred to as non-complying travelers). Travelers were assumed to take travel time, congestion pricing, and travel time reliability into account when making route choice decisions. In order to arrive on time, travelers needed to allow a safety margin for their trip. The market penetration of ATIS was determined by a continuously increasing function of the information benefit, and the ATIS compliance rate of equipped travelers was given as the probability that the actually experienced travel costs of guided travelers are less than or equal to those of unguided travelers. The analysis results enhance our understanding of the effect of travel demand level and travel time reliability confidence level on ATIS market penetration and compliance rate, and of the effect of travel time perception variation of guided and unguided travelers on the mean travel cost savings (MTCS) of equipped travelers, the ATIS market penetration, the compliance rate, and the total network effective travel time (TNETT).

  14. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    Science.gov (United States)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced, with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, covers anticipated hazards which may occur in the future; its risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence random number simulation is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritized ranking of critical items using the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.

  15. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    Science.gov (United States)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2016-03-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced, with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, covers anticipated hazards which may occur in the future; its risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence random number simulation is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritized ranking of critical items using the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.

  16. Reproducibility, reliability and validity of measurements obtained from Cecile3 digital models

    Directory of Open Access Journals (Sweden)

    Gustavo Adolfo Watanabe-Kanno

    2009-09-01

    Full Text Available The aim of this study was to determine the reproducibility, reliability and validity of measurements in digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured, three times each, the mesiodistal width of all the teeth present, the intercanine, interpremolar and intermolar distances, and overjet and overbite. The plaster models were measured using a digital vernier caliper. Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The average mean difference of the digital models was 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in Orthodontics.

  17. Using Linkage Analysis to Detect Gene-Gene Interactions. 2. Improved Reliability and Extension to More-Complex Models.

    Directory of Open Access Journals (Sweden)

    Susan E Hodge

    and reliable for a wide range of parameters. Our statistic performs well both with the epistatic models (false negative rates, i.e., failing to detect interaction, ranging from 0 to 2.5%) and with the heterogeneity models (false positive rates, i.e., falsely detecting interaction, ≤1%). It works well with the additive model except when allele frequencies at the two loci differ widely. We explore those features of the additive model that make detecting interaction more difficult. All testing of this method suggests that it provides a reliable approach to detecting gene-gene interaction.

  18. Validated Loads Prediction Models for Offshore Wind Turbines for Enhanced Component Reliability

    DEFF Research Database (Denmark)

    Koukoura, Christina

    To improve the reliability of offshore wind turbines, accurate prediction of their response is required; therefore, validation of models with site measurements is imperative. In the present thesis a 3.6 MW pitch-regulated, variable-speed offshore wind turbine on a monopile foundation is built...... response of a boat impact. The first and second modal damping of the system during normal operation, both from measurements and simulations, are identified with the implementation of the Enhanced Frequency Domain Decomposition technique. The effect of damping on the side-side fatigue of the support structure

  19. Hierarchical nanoreinforced composites for highly reliable large wind turbines: Computational modelling and optimization

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon

    2014-01-01

    , with modified, hybrid or nanomodified structures. In this project, we seek to explore the potential of hybrid (carbon/glass), nanoreinforced and hierarchical composites (with secondary CNT, graphene or nanoclay reinforcement) as future materials for highly reliable large wind turbines. Using 3D multiscale...... computational models of the composites, we study the effect of hybrid structure and of nanomodifications on the strength, lifetime and service properties of the materials (see Figure 1). As a result, a series of recommendations toward the improvement of composites for structural applications under long term...

  20. Probabilistic modelling of combined sewer overflow using the First Order Reliability Method

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Schaarup-Jensen, Kjeld; Jensen, Jacob Birk

    2007-01-01

    This paper presents a new and alternative method (in the context of urban drainage) for probabilistic hydrodynamical analysis of drainage systems in general and especially prediction of combined sewer overflow. Using a probabilistic shell it is possible to implement both input and parameter...... uncertainties on an application of the commercial urban drainage model MOUSE combined with the probabilistic First Order Reliability Method (FORM). Applying statistical characteristics on several years of rainfall, it is possible to derive a parameterization of the rainfall input and the failure probability...

  1. Probabilistic Modelling of Combined Sewer Overflow Using the First Order Reliability Method

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Schaarup-Jensen, Kjeld; Jensen, Jacob Birk

    2008-01-01

    This paper presents a new and alternative method (in the context of urban drainage) for probabilistic hydrodynamical analysis of drainage systems in general and especially prediction of combined sewer overflow. Using a probabilistic shell it is possible to implement both input and parameter...... uncertainties on an application of the commercial urban drainage model MOUSE combined with the probabilistic First Order Reliability Method (FORM). Applying statistical characteristics on several years of rainfall, it is possible to derive a parameterization of the rainfall input and the failure probability...
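
    At the heart of FORM is the search for the design point and the reliability index β. Below is a minimal sketch of the Hasofer-Lind-Rackwitz-Fiessler iteration on a toy linear limit state (capacity minus load, both normal with assumed parameters); the MOUSE coupling and the rainfall parameterisation of the papers above are not reproduced.

```python
# Hedged FORM sketch: HL-RF iteration for g(u) = R - S in standard
# normal space, with assumed means/standard deviations for capacity R
# and load S. The linear case converges in one step and has an
# analytic check: beta = (muR - muS) / sqrt(sR^2 + sS^2).
import numpy as np
from scipy.stats import norm

muR, sR, muS, sS = 10.0, 1.5, 6.0, 2.0  # assumed parameters

def g(u):
    """Limit state in standard-normal space: failure when g < 0."""
    return (muR + sR * u[0]) - (muS + sS * u[1])

def grad_g(u):
    return np.array([sR, -sS])

# HL-RF iteration: u_{k+1} = [(grad.u_k - g(u_k)) / |grad|^2] * grad
u = np.zeros(2)
for _ in range(20):
    gr = grad_g(u)
    u = (gr @ u - g(u)) / (gr @ gr) * gr

beta = np.linalg.norm(u)
print(f"beta = {beta:.3f}, Pf = {norm.cdf(-beta):.3e}")  # beta = 4/2.5 = 1.6
```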

  2. A new lifetime estimation model for a quicker LED reliability prediction

    Science.gov (United States)

    Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.

    2014-09-01

    LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs have been aged for a reliability analysis. The LEDs were grouped into nine current-temperature stress conditions, with stress driving current fixed between 350 mA and 1 A and ambient temperature between 85°C and 120°C. Using integrating sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics has been done. Results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation, and the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation have been built using electrical and optical measurements during the aging tests. The combination of these models enables a new method for quicker LED lifetime prediction.
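
    A hedged sketch of the first (typical) depreciation branch: lumen maintenance modelled as an exponential decay fitted in log space and extrapolated to an L70 life. The data points are invented, and the leakage-driven second mechanism of the paper would need a separate model on top of this.

```python
# Hedged sketch: TM-21-style exponential lumen-decay fit and L70
# extrapolation on assumed aging-test data.
import numpy as np

hours = np.array([0, 500, 1000, 2000, 3000, 4000], dtype=float)
lumen = np.array([1.000, 0.985, 0.970, 0.945, 0.915, 0.890])  # normalised flux (assumed)

# Linearise L(t) = B*exp(-alpha*t) as ln L = ln B - alpha*t, then fit by OLS.
slope, intercept = np.polyfit(hours, np.log(lumen), 1)
B, alpha = np.exp(intercept), -slope

# L70: projected time for lumen output to fall to 70% of initial.
L70 = np.log(B / 0.70) / alpha
print(f"B = {B:.4f}, alpha = {alpha:.3e} per hour, projected L70 ~ {L70:,.0f} h")
```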

  3. Function Based Nonlinear Least Squares and Application to Jelinski--Moranda Software Reliability Model

    CERN Document Server

    Liu, Jingwei

    2011-01-01

    A function based nonlinear least squares estimation (FNLSE) method is proposed and investigated for parameter estimation of the Jelinski-Moranda software reliability model. FNLSE extends the potential fitting functions of traditional least squares estimation (LSE) and includes the logarithm-transformed nonlinear least squares estimation (LogLSE) as a special case. A novel power-transformation-function-based nonlinear least squares estimation (powLSE) is proposed and applied to the parameter estimation of the Jelinski-Moranda model. Solved with the Newton-Raphson method, both LogLSE and powLSE of the Jelinski-Moranda model are applied to mean time between failures (MTBF) predictions on six standard software failure time data sets. The experimental results demonstrate the effectiveness of powLSE with an optimal power index compared to classical least-squares estimation (LSE), maximum likelihood estimation (MLE) and LogLSE in terms of the recursive relative error (RE) index and the Braun statistic.
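
    A minimal sketch of the idea, assuming invented interfailure times: under the Jelinski-Moranda model E[X_i] = 1/(φ(N−i+1)), and the transform applied to both data and model distinguishes plain LSE from LogLSE (the paper's powLSE would substitute a power transform x^p for the logarithm).

```python
# Hedged sketch: Jelinski-Moranda fitted by plain and log-transformed
# nonlinear least squares. Interfailure times are invented, and scipy's
# trust-region solver stands in for the paper's Newton-Raphson solution.
import numpy as np
from scipy.optimize import least_squares

x = np.array([ 9, 12, 11, 18, 15, 23, 28, 26, 40, 55,
              48, 70, 65, 90, 120], dtype=float)  # interfailure times (assumed)
i = np.arange(1, len(x) + 1)

def residuals(theta, transform):
    N, phi = theta
    expected = 1.0 / (phi * (N - i + 1))   # E[X_i] under Jelinski-Moranda
    return transform(x) - transform(expected)

for name, tf in [("LSE", lambda v: v), ("LogLSE", np.log)]:
    fit = least_squares(residuals, x0=[len(x) + 5.0, 0.001],
                        bounds=([len(x) + 1e-6, 1e-8], [np.inf, 1.0]),
                        args=(tf,))
    N_hat, phi_hat = fit.x
    mtbf_next = 1.0 / (phi_hat * (N_hat - len(x)))  # predicted next MTBF
    print(f"{name}: N = {N_hat:.1f}, phi = {phi_hat:.5f}, next MTBF = {mtbf_next:.1f}")
```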

  4. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique in order to assess their capability to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created; three of these layers were used as the input images and the fourth layer was used as the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error and mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images from the maximum approach give the smallest estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the smallest, and the reconstructed resistivities of the blocks are closest to the true blocks, compared with any other combination used. Thus, it is corroborated that when inverse resistivity models are combined, more reliable and detailed information about the geologic models is obtained than from individual data sets.

  5. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    Science.gov (United States)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system through them exist: add redundancy or improve the reliability of the component. In reality, the most effective approach for almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds between adding redundancy and improving the reliability of components so as to most cost-effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user-defined parameters. Finally, several possibilities for future work in this area of research are presented.

  6. Incorporating S-shaped testing-effort functions into NHPP software reliability model with imperfect debugging

    Institute of Scientific and Technical Information of China (English)

    Qiuying Li; Haifeng Li; Minyan Lu

    2015-01-01

    Testing-effort (TE) and imperfect debugging (ID) in the reliability modeling process may further improve the fitting and prediction results of software reliability growth models (SRGMs). For describing the S-shaped varying trend of the TE increasing rate more accurately, first, two S-shaped testing-effort functions (TEFs), i.e., the delayed S-shaped TEF (DS-TEF) and the inflected S-shaped TEF (IS-TEF), are proposed. Then these two TEFs are incorporated into various types (exponential-type, delayed S-shaped and inflected S-shaped) of non-homogeneous Poisson process (NHPP) SRGMs with two forms of ID respectively, obtaining a series of new NHPP SRGMs which consider S-shaped TEFs as well as ID. Finally these new SRGMs and several comparison NHPP SRGMs are applied to four real failure data sets respectively to investigate the fitting and prediction power of the new SRGMs. The experimental results show that: (i) the proposed IS-TEF is more suitable and flexible for describing the consumption of TE than the previous TEFs; (ii) incorporating TEFs into the inflected S-shaped NHPP SRGM may be more effective and appropriate compared with the exponential-type and the delayed S-shaped NHPP SRGMs; (iii) the inflected S-shaped NHPP SRGM considering both the IS-TEF and ID yields the most accurate fitting and prediction results among the comparison NHPP SRGMs.
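
    For concreteness, here is a hedged sketch of the two TEF shapes in their commonly used functional forms, driving an exponential-type NHPP mean value function m(t) = a(1 − exp(−b·W(t))); the parameter values are invented and the paper's exact parameterisation may differ.

```python
# Hedged sketch: delayed and inflected S-shaped testing-effort functions
# (commonly used forms) feeding an exponential-type NHPP mean value
# function. All parameter values are illustrative assumptions.
import numpy as np

def W_delayed(t, alpha, beta):
    """Delayed S-shaped TEF: cumulative testing effort consumed by time t."""
    return alpha * (1.0 - (1.0 + beta * t) * np.exp(-beta * t))

def W_inflected(t, alpha, beta, lam):
    """Inflected S-shaped TEF."""
    return alpha * (1.0 - np.exp(-beta * t)) / (1.0 + lam * np.exp(-beta * t))

def mean_failures(W, a=120.0, b=0.03):
    """Exponential-type NHPP mean value function m(t) = a(1 - exp(-b W(t)))."""
    return a * (1.0 - np.exp(-b * W))

t = np.linspace(0.0, 50.0, 6)
print("DS-TEF:", np.round(mean_failures(W_delayed(t, 100.0, 0.15)), 1))
print("IS-TEF:", np.round(mean_failures(W_inflected(t, 100.0, 0.15, 4.0)), 1))
```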

  7. A Data Mining Model Based on Discrete Fourier Transform (DFT) Theory for Reliability Data of Communication Networks

    Institute of Scientific and Technical Information of China (English)

    周中定; 孙青华; 梁雄健

    2003-01-01

    This paper presents a data mining model based on Discrete Fourier Transform (DFT) theory for reliability data of communication networks. It helps analyze aberrant data and provides a method to analyze and evaluate reliability data for communication network management. A practical application shows it to be effective and convenient.
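
    A hedged sketch of DFT-based screening of a network reliability series: compare each window's spectrum against a baseline window and flag large spectral distances. The data, window length and threshold are assumptions; the paper's exact algorithm is not reproduced.

```python
# Hedged sketch: flagging aberrant windows in a reliability time series
# (e.g., daily outage counts) by spectral distance to a baseline window.
import numpy as np

rng = np.random.default_rng(0)
days = 256
baseline = 5 + 2 * np.sin(2 * np.pi * np.arange(days) / 7)  # weekly cycle
series = rng.poisson(baseline).astype(float)
series[180:188] += 15                                       # injected anomaly burst

def spectrum(x):
    """Magnitude spectrum of the mean-removed window, normalised by length."""
    x = x - x.mean()
    return np.abs(np.fft.rfft(x)) / len(x)

win = 32
ref = spectrum(series[:win])
for start in range(0, days - win + 1, win):
    dist = np.linalg.norm(spectrum(series[start:start + win]) - ref)
    flag = "  <-- aberrant" if dist > 3.0 else ""            # assumed threshold
    print(f"window {start:3d}-{start + win - 1:3d}: spectral distance {dist:5.2f}{flag}")
```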

  8. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-07-01

    This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passive-component reliabilities.

  9. Characterization of System Level Single Event Upset (SEU) Responses using SEU Data, Classical Reliability Models, and Space Environment Data

    Science.gov (United States)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.

  10. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    CERN Document Server

    Gaite, Jose

    2013-01-01

    Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  11. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos, it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  12. Bifactor Modeling and the Estimation of Model-Based Reliability in the WAIS-IV

    Science.gov (United States)

    Gignac, Gilles E.; Watkins, Marley W.

    2013-01-01

    Previous confirmatory factor analytic research that has examined the factor structure of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) has endorsed either higher order models or oblique factor models that tend to amalgamate both general factor and index factor sources of systematic variance. An alternative model that has not yet…

  14. A reliable facility location design model with site-dependent disruption in the imperfect information context.

    Science.gov (United States)

    Yun, Lifen; Wang, Xifu; Fan, Hongqiang; Li, Xiaopeng

    2017-01-01

    This paper proposes a reliable facility location design model under imperfect information with site-dependent disruptions; i.e., each facility is subject to a unique disruption probability that varies across space. In imperfect information contexts, customers adopt a realistic "trial-and-error" strategy to visit facilities; i.e., they visit a number of pre-assigned facilities sequentially until they arrive at the first operational facility or give up looking for the service. The proposed model aims to balance initial facility investment and expected long-term operational cost by finding the optimal facility locations. A nonlinear integer programming model is proposed to describe this problem, and a linearization technique is applied to reduce the difficulty of solving it. A number of problem instances are studied to illustrate the performance of the proposed model. The results indicate that the model reveals a number of interesting insights into facility location design with site-dependent disruptions, including the benefit of backup facilities and system robustness against variation of the loss-of-service penalty.
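
    Under the trial-and-error strategy, the expected cost of serving a customer is the distance to each assigned facility weighted by the probability that all earlier facilities are down and this one is up, plus a penalty if every assigned facility is down. A small sketch with invented data:

```python
# Hedged sketch: expected service cost of one customer under sequential
# "trial-and-error" visits with site-dependent disruption probabilities.
# Distances, probabilities and penalty are illustrative assumptions.
def expected_cost(distances, q, penalty):
    """distances[r], q[r]: travel cost and disruption probability of the
    r-th facility in the customer's pre-assigned visiting sequence."""
    cost, p_all_down = 0.0, 1.0
    for d, qj in zip(distances, q):
        cost += p_all_down * (1.0 - qj) * d   # first operational facility is the r-th
        p_all_down *= qj                      # all facilities tried so far are down
    return cost + p_all_down * penalty        # give up: loss-of-service penalty

seq_cost = expected_cost(distances=[4.0, 9.0, 13.0],
                         q=[0.10, 0.25, 0.05], penalty=100.0)
print(f"expected cost = {seq_cost:.3f}")
```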

  15. Microgrid Design Analysis Using Technology Management Optimization and the Performance Reliability Model

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jensen, Richard P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Munoz-Ramos, Karina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD.

  16. A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring; James Y. H. Chang; Song-Hua Shen; Ali Mosleh; Johanna H. Oxstrand; John A. Forester; Dana L. Kelly; Erasmia L. Lois

    2010-06-01

    The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method’s middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  17. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species

    Science.gov (United States)

    Aubry, Keith B.; Raley, Catherine M.; McKelvey, Kevin S.

    2017-01-01

    The availability of spatially referenced environmental data and species occurrence records in online databases enable practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated for rare, elusive, and cryptic species that are prone to misidentification in the field. We investigated this question for the fisher (Pekania pennanti), a forest carnivore of conservation concern in the Pacific States that is often confused with the more common Pacific marten (Martes caurina). Fisher occurrence records supported by physical evidence (verifiable records) were available from a limited area, whereas occurrence records of unknown quality (unscreened records) were available from throughout the fisher’s historical range. We reserved 20% of the verifiable records to use as a test sample for both models and generated SDMs with each dataset using Maxent. The verifiable model performed substantially better than the unscreened model based on multiple metrics including AUCtest values (0.78 and 0.62, respectively), evaluation of training and test gains, and statistical tests of how well each model predicted test localities. In addition, the verifiable model was consistent with our knowledge of the fisher’s habitat relations and potential distribution, whereas the unscreened model indicated a much broader area of high-quality habitat (indices > 0.5) that included large expanses of high-elevation habitat that fishers do not occupy. Because Pacific martens remain relatively common in upper elevation habitats in the Cascade Range and Sierra Nevada, the SDM based on unscreened records likely reflects primarily a conflation of marten and fisher habitat. Consequently, accurate identifications are far more important than the spatial extent of occurrence records for generating reliable SDMs for the

  18. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
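
    A minimal numerical sketch in the spirit of the model: an explicit scheme with a local Lax-Friedrichs flux for the hindered-settling part dC/dt + df(C)/dz = 0 in batch mode. This robust stand-in is not the consistent method of the paper; it omits compression, dispersion and the feed source term, and all parameters are assumed.

```python
# Hedged sketch: explicit local Lax-Friedrichs scheme for 1-D batch
# hindered settling with closed top/bottom boundaries. Vesilind-type
# flux parameters are illustrative assumptions.
import numpy as np

v0, Cmax, nexp = 1.0e-3, 10.0, 3.0            # assumed flux parameters
f = lambda C: v0 * C * np.maximum(1.0 - C / Cmax, 0.0) ** nexp

nz, H, T = 100, 1.0, 600.0                    # cells, tank depth [m], end time [s]
dz = H / nz
C = np.full(nz, 3.0)                          # uniform initial concentration [kg/m3]
t, alpha = 0.0, v0                            # alpha >= max |f'(C)| for these parameters
while t < T:
    dt = 0.4 * dz / alpha                     # CFL-limited time step
    Cl, Cr = C[:-1], C[1:]
    F = 0.5 * (f(Cl) + f(Cr)) - 0.5 * alpha * (Cr - Cl)  # interior LLF fluxes
    F = np.concatenate(([0.0], F, [0.0]))     # closed boundaries: zero flux
    C = C - dt / dz * (F[1:] - F[:-1])
    t += dt

print(f"mass after settling: {C.sum() * dz:.4f} (initial {3.0 * H:.4f})")
```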

  19. Overview of RELCOMP, the reliability and cost model for electrical generation planning

    Energy Technology Data Exchange (ETDEWEB)

    Buehring, W.A.; Hub, K.A.; VanKuiken, J.C.

    1979-11-01

    RELCOMP is a system-planning tool that can be used to assess the reliability and economic performance of alternative expansion patterns of electric-utility generating systems. Given input information such as capacity, forced outage rate, number of weeks of annual scheduled maintenance, and economic data for individual units, along with the expected utility load characteristics, the nonoptimizing model calculates a system maintenance schedule, the loss-of-load probability, unserved demand for energy, mean time between system failures to meet the load, the reserve required to meet a specified system-failure rate, the expected energy generation from each unit, and the system energy cost. Emergency interties and firm purchases can be included in the analysis. The calculation breaks down into five distinct categories: maintenance scheduling, system reliability, capacity requirement, energy allocation, and energy cost. This brief description of the program is intended to serve as preliminary documentation for RELCOMP until more complete documentation is prepared. In addition to this documentation, a sample problem and a detailed input description are available from the authors.
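
    The reliability side of such a tool typically rests on a capacity-outage probability table (COPT) built by convolving units one at a time and evaluated against the load. A hedged sketch with invented unit data and loads (the real RELCOMP adds maintenance scheduling, interties and energy-cost accounting):

```python
# Hedged sketch: loss-of-load probability (LOLP) from a capacity-outage
# probability table. Unit data and hourly loads are illustrative.
import numpy as np

units = [(200, 0.05), (200, 0.05), (300, 0.08), (150, 0.04)]  # (MW, forced outage rate)
total = sum(cap for cap, _ in units)

# COPT: probability distribution of total MW on outage (1 MW resolution).
copt = np.zeros(total + 1)
copt[0] = 1.0
for cap, forced_out in units:
    new = copt * (1.0 - forced_out)                    # unit available
    new[cap:] += copt[:total + 1 - cap] * forced_out   # unit on outage
    copt = new

loads = np.array([520, 560, 600, 640, 610, 570, 530, 500])  # sample hourly loads, MW

# An hour contributes P(available < load) = P(outage >= total - load + 1).
lolp = np.mean([copt[total - load + 1:].sum() for load in loads])
print(f"LOLP = {lolp:.4e} (fraction of hours)")
```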

  20. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    Reliability analysis of fiber-reinforced composite structures is a relatively unexplored field, and it is therefore expected that engineers and researchers trying to apply such an approach will meet certain challenges until more knowledge is accumulated. While doing the analyses included...... in the present paper, the authors have experienced some of the possible pitfalls on the way to complete a precise and robust reliability analysis for layered composites. Results showed that in order to obtain accurate reliability estimates it is necessary to account for the various failure modes described...... by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  1. Estimation of the reliability of all-ceramic crowns using finite element models and the stress-strength interference theory.

    Science.gov (United States)

    Li, Yan; Chen, Jianjun; Liu, Jipeng; Zhang, Lei; Wang, Weiguo; Zhang, Shaofeng

    2013-09-01

    The reliability of all-ceramic crowns is of concern to both patients and doctors. This study introduces a new methodology for quantifying the reliability of all-ceramic crowns based on the stress-strength interference theory and finite element models. The variables selected for the reliability analysis include the magnitude of the occlusal contact area, the occlusal load and the residual thermal stress. The calculated reliabilities of crowns under different loading conditions showed that overly small occlusal contact areas or too great a difference between the thermal coefficients of the veneer and core layers led to high failure probabilities. These results were consistent with many previous reports. The methodology is therefore shown to be a valuable approach for analyzing the reliability of restorations in the complicated oral environment.
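
    For normally distributed strength S and stress L, stress-strength interference gives R = P(S > L) = Φ((μ_S − μ_L)/√(σ_S² + σ_L²)). A worked sketch with assumed parameters standing in for the FE-derived stresses:

```python
# Hedged sketch of the stress-strength interference computation; the
# strength and stress parameters below are assumptions, not the
# paper's FE results.
from math import sqrt
from scipy.stats import norm

mu_S, sd_S = 400.0, 60.0    # ceramic strength, MPa (assumed)
mu_L, sd_L = 180.0, 45.0    # peak tensile stress from an FE model, MPa (assumed)

z = (mu_S - mu_L) / sqrt(sd_S**2 + sd_L**2)
print(f"reliability R = P(S > L) = {norm.cdf(z):.5f}")   # ~0.9983
```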

  2. Considering the Fault Dependency Concept with Debugging Time Lag in Software Reliability Growth Modeling Using a Power Function of Testing Time

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Since the early 1970s tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification/removal phenomenon; still, new models are being proposed that can fit a greater number of reliability growth curves. Often, when mathematical models are developed, it is assumed that detected faults are immediately corrected. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique, and so on. Thus, a detected fault need not be immediately removed, and it may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed where the fault detection process depends not only on the number of residual faults but also on the testing time, and see how these models can be reinterpreted as delayed fault detection models by using a delay effect factor. Based on the power function of testing time concept, we propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults are those that can be removed upon a failure being observed. However, dependent faults are masked by leading faults and can only be removed after the corresponding leading fault has been removed, with a debugging time lag. These models have been tested on real software error data to show their goodness of fit, predictive validity and applicability.

  3. Development of a model selection method based on the reliability of a soft sensor model

    Directory of Open Access Journals (Sweden)

    Takeshi Okada

    2012-04-01

    Full Text Available Soft sensors are widely used to realize highly efficient operation in chemical processes because important variables such as product quality are not measured online. By using soft sensors, such a difficult-to-measure variable y can be estimated from other process variables which are measured online. In order to estimate values of y without degradation of a soft sensor model, a time difference (TD) model was proposed previously. Though a TD model has high predictive ability, it does not function well when process conditions have never been observed. To cope with this problem, a soft sensor model can be updated with the newest data, but updating a model needs time and effort for plant operators. We therefore developed an online monitoring system to judge whether a TD model can predict values of y accurately or an updating model should be used, both to reduce maintenance cost and to improve the predictive accuracy of soft sensors. The monitoring system is based on a support vector machine or on the standard deviation of y-values estimated from various intervals of time difference. We confirmed that the proposed system functioned successfully through the analysis of real industrial data from a distillation process.

  4. Modelling a reliability system governed by discrete phase-type distributions

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Castro, Juan Eloy [Departamento de Estadistica e Investigacion Operativa, Universidad de Granada, 18071 Granada (Spain)], E-mail: jeloy@ugr.es; Perez-Ocon, Rafael [Departamento de Estadistica e Investigacion Operativa, Universidad de Granada, 18071 Granada (Spain)], E-mail: rperezo@ugr.es; Fernandez-Villodre, Gemma [Departamento de Estadistica e Investigacion Operativa, Universidad de Granada, 18071 Granada (Spain)

    2008-11-15

    We present an n-unit system with one unit online and the others in cold standby, attended by a single repairman. When the online unit fails it goes to repair, and a standby unit instantaneously becomes the online unit. The operational and repair times follow discrete phase-type distributions. Given that any discrete distribution defined on the positive integers is a discrete phase-type distribution, the system can be considered a general one. A model with an unlimited number of units is considered to approximate a system with a large number of units. We show that the process governing the system is a quasi-birth-and-death process. For this system, performance and reliability measures, the up and down periods, and the involved costs are calculated in matrix-algorithmic form. We show that the discrete case is not a trivial case of the continuous one. The results given in this paper have been implemented computationally with Matlab.
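
    For context, a discrete phase-type distribution is specified by an initial probability vector alpha over transient states and a substochastic matrix T. The sketch below evaluates its pmf P(X = n) = alpha T^(n-1) t0, with t0 = (I - T)1, and its survival function, for illustrative parameters that are not taken from the paper.

```python
# Minimal sketch: pmf and survival function of a discrete phase-type
# distribution (alpha, T). Parameters here are illustrative assumptions.
import numpy as np

alpha = np.array([0.7, 0.3])              # initial distribution over transient states
T = np.array([[0.5, 0.3],                 # substochastic transition matrix
              [0.1, 0.6]])
t0 = 1.0 - T.sum(axis=1)                  # absorption (completion) probabilities

def pmf(n: int) -> float:
    """P(X = n) = alpha @ T^(n-1) @ t0 for n >= 1."""
    return float(alpha @ np.linalg.matrix_power(T, n - 1) @ t0)

def survival(n: int) -> float:
    """P(X > n) = alpha @ T^n @ 1."""
    return float(alpha @ np.linalg.matrix_power(T, n).sum(axis=1))

print([round(pmf(n), 4) for n in range(1, 6)])
print("mean:", float(alpha @ np.linalg.inv(np.eye(2) - T).sum(axis=1)))
```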

  5. A multi-objective reliable programming model for disruption in supply chain

    Directory of Open Access Journals (Sweden)

    Emran Mohammadi

    2013-05-01

    Full Text Available One of the primary concerns in supply chain management is handling risk components properly. There are various sources of risk in a supply chain, such as natural disasters, unexpected incidents, etc. When a series of facilities are built and deployed, one or more of them could fail at any time due to bad weather conditions, labor strikes, economic crises, sabotage or terrorist attacks, or changes in ownership of the system. The objective of risk management is to reduce the effects of such disruptions to an acceptable level. To address this risk, we propose a reliable capacitated supply chain network design (RSCND) model that considers random disruption risks at both distribution centers and suppliers. The proposed study considers three objective functions, and the implementation is verified using a numerical instance.
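
    As a toy illustration of designing for disruption (the two-facility setup and all numbers are assumptions, far simpler than the multi-objective RSCND model), the sketch below compares expected costs when each candidate facility can fail with a known probability and demand is then rerouted at a penalty.

```python
# Toy sketch of reliable facility choice under random disruption.
# All numbers are illustrative assumptions, not from the paper's model.
facilities = {                       # opening cost, unit serve cost, failure prob.
    "DC_A": {"open": 100.0, "serve": 2.0, "q": 0.05},
    "DC_B": {"open": 80.0, "serve": 3.0, "q": 0.20},
}
demand, penalty = 50.0, 10.0         # units of demand; unit cost if DC is down

def expected_cost(name: str) -> float:
    f = facilities[name]
    up = (1 - f["q"]) * f["serve"] * demand      # normal operation
    down = f["q"] * penalty * demand             # rerouting after disruption
    return f["open"] + up + down

best = min(facilities, key=expected_cost)
for name in facilities:
    print(f"{name}: expected cost = {expected_cost(name):.1f}")
print("choose:", best)
```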

  6. Factor structure and internal reliability of an exercise health belief model scale in a Mexican population.

    Science.gov (United States)

    Villar, Oscar Armando Esparza-Del; Montañez-Alvarado, Priscila; Gutiérrez-Vega, Marisela; Carrillo-Saucedo, Irene Concepción; Gurrola-Peña, Gloria Margarita; Ruvalcaba-Romero, Norma Alicia; García-Sánchez, María Dolores; Ochoa-Alcaraz, Sergio Gabriel

    2017-03-01

    Mexico is one of the countries with the highest rates of overweight and obesity in the world, reported by 68.8% of men and 73% of women. This is a public health problem, since there are several health-related consequences of not exercising, such as cardiovascular diseases and some types of cancer. These problems can be prevented by promoting exercise, so it is important to evaluate models of health behaviors to achieve this goal. Among several models, the Health Belief Model (HBM) is one of the most studied models for promoting health-related behaviors. This study validates the first exercise scale based on the HBM in Mexicans, with the objective of studying and analyzing this model in Mexico. Items for the scale, called the Exercise Health Belief Model Scale (EHBMS), were developed by a health research team, and the items were then administered to a sample of 746 participants, male and female, from five cities in Mexico. The factor structure of the items was analyzed with an exploratory factor analysis and the internal reliability with Cronbach's alpha. The exploratory factor analysis yielded the expected factor structure based on the HBM. The KMO index (0.92) and Bartlett's sphericity test (p < 0.05) indicated that the data were adequate for factor analysis. Factor loadings were acceptable, ranging from 0.31 to 0.92, and the internal consistencies of the factors were also acceptable, with alpha values ranging from 0.67 to 0.91. The EHBMS is a validated scale that can be used to measure exercise based on the HBM in Mexican populations.
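
    For reference, the internal-consistency statistic used here is straightforward to compute; the response matrix below is a hypothetical stand-in for scale data, not the EHBMS sample.

```python
# Minimal sketch: Cronbach's alpha for a k-item scale.
# The synthetic responses are a hypothetical stand-in for real scale data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                         # shared latent factor
responses = trait + rng.normal(scale=0.8, size=(200, 5))  # 5 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```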

  7. Reliability Generalization: "Lapsus Linguae"

    Science.gov (United States)

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG applies meta-analytic techniques, similar to those used in validity generalization studies, to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  8. Reliability of some ageing nuclear power plant system: a simple stochastic model

    Energy Technology Data Exchange (ETDEWEB)

    Suarez-Antola, Roberto [Catholic University of Uruguay, Montevideo (Uruguay). School of Engineering and Technologies; Ministerio de Industria, Energia y Mineria, Montevideo (Uruguay). Direccion Nacional de Energia y Tecnologia Nuclear; E-mail: rsuarez@ucu.edu.uy

    2007-07-01

    The random number of failure-related events in certain repairable ageing systems, such as certain nuclear power plant components, during a given time interval may often be modelled by a compound Poisson distribution. One of these is the Polya-Aeppli distribution. The derivation of a stationary Polya-Aeppli distribution as a limiting distribution of rare events for stationary Bernoulli trials with first-order Markov dependence is considered. If the parameters of the Polya-Aeppli distribution are suitable functions of time, we could expect the resulting distribution to capture the distribution of failure-related events in an ageing system. Assuming that a critical number of damages produces an emergent failure, the above-mentioned results can be applied in a reliability analysis. It is natural to ask under what conditions a Polya-Aeppli distribution could be a limiting distribution for non-homogeneous Bernoulli trials with first-order Markov dependence. In this paper this problem is analyzed, and possible applications of the obtained results to ageing or deteriorating nuclear power plant components are considered. The two traditional ways of modelling repairable systems in reliability theory are briefly discussed in relation to the findings of the present work: the 'as bad as old' concept, which assumes that the replaced component is in exactly the same condition as the aged component before failure, and the 'as good as new' concept, which assumes that the new component is in the same condition as the replaced component when it was new. (author)
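
    For intuition, the Polya-Aeppli distribution can be simulated as a geometric compound Poisson: a Poisson number of damage clusters, each of geometric size. This construction and the parameters below are illustrative, not the paper's time-dependent extension.

```python
# Minimal sketch: sampling a Polya-Aeppli (geometric compound Poisson) count.
# Failure-related events arrive in Poisson-distributed clusters whose sizes
# are geometric; lam and rho below are illustrative assumptions.
import numpy as np

def polya_aeppli_sample(lam: float, rho: float, size: int,
                        rng: np.random.Generator) -> np.ndarray:
    """lam: Poisson rate of clusters; rho in [0, 1): clustering parameter."""
    n_clusters = rng.poisson(lam, size=size)
    out = np.empty(size, dtype=int)
    for i, n in enumerate(n_clusters):
        # Each cluster contributes a Geometric(1 - rho) >= 1 number of events.
        out[i] = rng.geometric(1.0 - rho, size=n).sum() if n > 0 else 0
    return out

rng = np.random.default_rng(42)
x = polya_aeppli_sample(lam=2.0, rho=0.4, size=100_000, rng=rng)
# Theoretical mean lam/(1-rho); variance lam*(1+rho)/(1-rho)^2.
print(f"sample mean {x.mean():.3f} vs theory {2.0 / 0.6:.3f}")
```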

  9. Development of Energy and Reserve Pre-dispatch and Re-dispatch Models for Real-time Price Risk and Reliability Assessment

    DEFF Research Database (Denmark)

    Ding, Yi; Xie, Min; Wu, Qiuwei

    2014-01-01

    of securing proper balancing between generation and demand. The high penetration of renewable energy sources will also increase the burden on the system operator for maintaining system reliability. However, the current strategy of reliability management developed for conventional power systems and the existing...... electricity market design may not cope with the future challenges the power system faces. The development of the smart grid will enable power system scheduling and the electricity market to operate over a shorter time horizon for better integration of renewable energy sources into power systems. This paper presents...... the real-time operation, the energy re-dispatch model is used for contingency management and for providing balancing services based on the results of the energy and reserve pre-dispatch model. The energy re-dispatch model is formulated as a single-period AC OPF model, which is used to determine generation re...
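
    As a much-simplified stand-in for the single-period dispatch step (the paper formulates a full AC OPF; the sketch below is only a linear toy problem with assumed costs and limits), a re-dispatch can be posed as minimizing generation cost subject to a balance constraint.

```python
# Simplified sketch of a single-period re-dispatch as a linear (DC-style)
# economic dispatch. The paper uses a full AC OPF; this toy problem only
# illustrates the structure. Costs, limits and demand are assumptions.
from scipy.optimize import linprog

cost = [30.0, 50.0]                     # generator costs [$/MWh]
p_min, p_max = [50.0, 0.0], [200.0, 150.0]
demand = 260.0                          # MW to balance after a contingency

# minimize cost @ p  s.t.  p1 + p2 == demand,  p_min <= p <= p_max
res = linprog(c=cost,
              A_eq=[[1.0, 1.0]], b_eq=[demand],
              bounds=list(zip(p_min, p_max)))
print("dispatch [MW]:", res.x, "cost [$/h]:", res.fun)
```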

  10. Reliable groundwater levels: failures and lessons learned from modeling and monitoring studies

    Science.gov (United States)

    Van Lanen, Henny A. J.

    2017-04-01

    Adequate management of groundwater resources requires an a priori assessment of the impacts of intended groundwater abstractions. Usually, groundwater flow modeling is used to simulate the influence of the planned abstraction on groundwater levels. Model performance is tested using observed groundwater levels. Where a multi-aquifer system occurs, groundwater levels in the different aquifers have to be monitored through observation wells with filters at different depths, i.e. above the impermeable clay layer (phreatic water level) and beneath it (artesian aquifer level). A reliable artesian level can only be measured if the space between the outer wall of the borehole (a vertical narrow shaft) and the observation well is refilled with impermeable material at the correct depth (post-drilling phase) to prevent a vertical hydraulic connection between the artesian and phreatic aquifers. We encountered a case of improper refilling, which made it impossible to monitor reliable artesian aquifer levels: at the location of the artesian observation well, a freely overflowing spring was seen, implying that water leakage from the artesian aquifer affected the artesian groundwater level. Careful checking of the monitoring sites in a study area is therefore a prerequisite for using observations in model performance assessment. After model testing, the groundwater model is forced with the proposed groundwater abstractions (sites, extraction rates). The abstracted groundwater volume is compensated by a reduction of groundwater flow to the drainage network, and the model simulates the associated groundwater tables. The drawdown of the groundwater level is calculated by comparing the simulated groundwater levels with and without groundwater abstraction. In lowland areas, such as vast areas of the Netherlands, the groundwater model has to consider a variable drainage network, which means that small streams only carry water during the wet winter season and run dry during the summer. The main streams drain groundwater

  11. A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability

    Directory of Open Access Journals (Sweden)

    Hongli Zhang

    2015-01-01

    Full Text Available The serious issue of energy consumption in high performance computing systems has attracted much attention. Performance and energy saving have become important measures of a computing system. In the cloud computing environment, systems usually allocate various resources (such as CPU, memory, storage, etc.) to multiple virtual machines (VMs) for executing tasks. Therefore, the problem of resource allocation for running VMs has a significant influence on both system performance and energy consumption. For different processor utilizations assigned to a VM, there exists a tradeoff between energy consumption and task completion time when a given task is executed by the VMs. Moreover, hardware failure, software failure and restoration characteristics also have obvious influences on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment under a reliability restriction, and an optimization model is presented to derive the most effective processor utilization for the VM. Then, the tradeoff between energy saving and task completion time is studied and balanced when the VMs execute given tasks. Numerical examples are illustrated to build the performance-energy correlated model and evaluate the expected values of task completion time and consumed energy.
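
    A toy version of the utilization tradeoff can be written down directly. The functional forms below (superlinear power, work/u completion time) and all constants are assumptions for illustration, not the paper's correlated model.

```python
# Toy sketch of the energy versus completion-time tradeoff in processor
# utilization. Forms and constants are illustrative assumptions.
import numpy as np

work = 100.0                       # task size (normalized cycles)
p_idle, p_dyn = 0.4, 0.6           # normalized idle and dynamic power

def completion_time(u):
    return work / u                # more processor share -> faster completion

def energy(u):
    power = p_idle + p_dyn * u**3  # power rises superlinearly with utilization
    return power * completion_time(u)

u = np.linspace(0.1, 1.0, 91)
w = 0.5                            # weight trading energy against time
score = (w * energy(u) / energy(u).max()
         + (1 - w) * completion_time(u) / completion_time(u).max())
u_star = u[np.argmin(score)]
print(f"best utilization ~ {u_star:.2f}, "
      f"time {completion_time(u_star):.1f}, energy {energy(u_star):.1f}")
```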

  12. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial nuclear power plants (NPPs) in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  13. Developing a highly reliable cae analysis model of the mechanisms that cause bolt loosening in automobiles

    Directory of Open Access Journals (Sweden)

    Ken Hashimoto

    2014-10-01

    Full Text Available In this study, we developed a highly reliable CAE analysis model of the mechanisms that cause loosening of bolt fasteners, which has been a bottleneck in automobile development and design, using a technical element model for highly accurate CAE that we had previously developed, and verified its validity. Specifically, drawing on knowledge gained from our clarification of the mechanisms that cause loosening of bolt fasteners using actual machine tests, we conducted an accelerated bench test consisting of a three-dimensional vibration load test of the loosening of bolt fasteners used in mounts and rear suspension arms, where interviews with personnel at an automaker indicated loosening was most pronounced, and reproduced the actual machine tests with CAE analysis based on a technical element model for highly accurate CAE analysis. Based on these results, we were able to reproduce the dynamic behavior in which larger screw pitches (lead angles) lead to greater non-uniformity of surface pressure, particularly around the nut seating surface, causing loosening to occur in the areas with the lowest surface pressure. Furthermore, we implemented highly accurate CAE analysis with no error (gap) compared to the actual machine tests.

  14. A suction blister model reliably assesses skin barrier restoration and immune response.

    Science.gov (United States)

    Smith, Tracey J; Wilson, Marques A; Young, Andrew J; Montain, Scott J

    2015-02-01

    Skin wound healing models can be used to detect changes in immune function in response to interventions. This study used a test-retest format to assess the reliability of a skin suction blister procedure for quantitatively evaluating human immune function in repeated-measures studies. Up to eight suction blisters (~30 mm²) were induced via suction on each participant's left and right forearm (randomized order; blister sessions 1 and 2), separated by approximately one week. Fluid was sampled from each blister, and the top layer of each blister was removed to reveal up to eight skin wounds. Fluid from each wound was collected 4, 7 and 24 h after blisters were induced, and proinflammatory cytokines were measured. Transepidermal water loss (TEWL), to assess skin barrier recovery, was measured daily at each wound site until values were within 90% of baseline values (i.e., unbroken skin). Sleep, stress and inflammation (i.e., factors that affect wound healing and immune function) preceding the blister induction were assessed via activity monitors (Actical, Philips Respironics, Murrysville, Pennsylvania), the Perceived Stress Scale (PSS) and C-reactive protein (CRP), respectively. Area-under-the-curve and TEWL, between blister sessions 1 and 2, were compared using Pearson correlations and partial correlations (controlling for average nightly sleep, PSS scores and CRP). The suction blister method was considered reliable for assessing immune response and skin barrier recovery if correlation coefficients reached 0.7. Volunteers (n=16; 12 M, 4 F) were 23 ± 5 years old [mean ± SD]. Time to skin barrier restoration was 4.9 ± 0.8 and 4.8 ± 0.9 days for sessions 1 and 2, respectively. Correlation coefficients for skin barrier restoration, IL-6, IL-8 and MIP-1α were 0.9 (P < 0.05), indicating that the suction blister method is sufficiently reliable for assessing skin barrier restoration and immune responsiveness. These data can be used to determine sample sizes for cross-sectional or repeated-measures types of

  15. A Model to Partly but Reliably Distinguish DDOS Flood Traffic from Aggregated One

    Directory of Open Access Journals (Sweden)

    Ming Li

    2012-01-01

    Full Text Available Reliably distinguishing DDOS flood traffic from aggregated traffic is desperately needed for reliable prevention of DDOS attacks. By reliable distinguishing, we mean that flood traffic can be distinguished from aggregated traffic with a predetermined probability. The basis for reliably distinguishing flood traffic from aggregated traffic is reliable detection of the signs of DDOS flood attacks. As is known, reliably distinguishing DDOS flood traffic from aggregated traffic is a tough task, mainly due to the effects of flash-crowd traffic. For this reason, this paper studies reliable detection in an underlying DiffServ network that uses static-priority schedulers. In this network environment, we present a method for reliable detection of the signs of DDOS flood attacks for a given class with a given priority. Two assumptions are introduced in this study. One is that flash-crowd traffic does not have all priorities but only some. The other is that attack traffic has all priorities in all classes; otherwise, an attacker cannot completely achieve its DDOS goal. Further, we suppose that the protected site is equipped with a sensor that has a signature library of the legitimate traffic with the priorities flash-crowd traffic does not have. Based on this, we are able to reliably distinguish attack traffic from aggregated traffic with the priorities that flash-crowd traffic does not have, according to a given detection probability.
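
    One way to make "detection with a predetermined probability" concrete is a per-priority volume threshold calibrated on legitimate traffic. The sketch below is an illustrative assumption (Gaussian baseline, hypothetical packet rates), not the paper's detector.

```python
# Minimal sketch: flagging per-priority traffic volumes that exceed a
# threshold chosen for a target false-alarm probability. The Gaussian
# baseline and all numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
baseline = rng.normal(loc=100.0, scale=10.0, size=5_000)   # legitimate pkts/s

false_alarm = 0.001                                # predetermined probability
threshold = baseline.mean() + norm.ppf(1 - false_alarm) * baseline.std(ddof=1)

observed = np.array([104.0, 97.0, 165.0])          # new per-interval volumes
for v in observed:
    status = "ATTACK SIGN" if v > threshold else "normal"
    print(f"{v:7.1f} pkts/s vs threshold {threshold:.1f} -> {status}")
```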

  16. Model Testing and Reliability Evaluation of the New Deepwater Breakwater at La Coruña, Spain

    DEFF Research Database (Denmark)

    Burcharth, Hans Falk; Maciñeira, Enrique; Canalejo, Pedro

    2003-01-01

    tankers are arranged along the inner side of the breakwater. The paper presents the design criteria, the design procedure, the main results from model testing, and the subsequent reliability evaluation and optimisation of the cross-section design for the most exposed part of the breakwater. Model test...

  17. Construct Validity and Reliability of the Adult Rejection Sensitivity Questionnaire: A Comparison of Three Factor Models

    Directory of Open Access Journals (Sweden)

    Marco Innamorati

    2014-01-01

    Full Text Available Objectives. The aim of the study was to investigate the construct validity of the ARSQ. Methods. The ARSQ and self-report measures of depression, anxiety, and hopelessness were administered to 774 Italian adults, aged 18 to 64 years. Results. Structural equation modeling indicated that the factor structure of the ARSQ can be represented by a bifactor model: a general rejection sensitivity factor and two group factors, expectancy of rejection and rejection anxiety. Reliability of the observed scores was not satisfactory: only 44% of the variance in observed total scores was due to the common factors. The analyses also indicated different correlates for the general factor and the group factors. Limitations. We administered an Italian version of the ARSQ to a nonclinical sample of adults, so studies that use clinical populations or the original version of the ARSQ could obtain different results from those presented here. Conclusion. Our results suggest that the construct validity of the ARSQ is disputable and that rejection anxiety and expectancy could bias individuals to readily perceive and strongly react to cues of rejection in different ways.

  18. Towards a High Reliable Enforcement of Safety Regulations - A Workflow Meta Data Model and Probabilistic Failure Management Approach

    Directory of Open Access Journals (Sweden)

    Heiko Henning Thimm

    2016-10-01

    Full Text Available Today's companies are able to automate the enforcement of Environmental, Health and Safety (EH&S) duties through the use of workflow management technology. This approach requires specifying activities that are combined into workflow models for EH&S enforcement duties. In order to meet given safety regulations, these activities must be completed correctly and within given deadlines. Otherwise, activity failures emerge which may lead to breaches of safety regulations. A novel domain-specific workflow meta data model is proposed. The model enables a system to detect and predict activity failures through the use of data about the company, failure statistics, and activity proxies. Since the detection and prediction methods are based on the evaluation of constraints specified on EH&S regulations, a system approach is proposed that builds on the integration of a Workflow Management System (WMS) with an EH&S Compliance Information System. The main principles of the failure detection and prediction are described. For EH&S managers, the system shall provide insights into the current failure situation. This can help to prevent and mitigate critical situations, such as safety enforcement measures that are behind their deadlines. As a result, a more reliable enforcement of safety regulations can be achieved.
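
    To illustrate the deadline-oriented prediction idea, the hedged sketch below flags activities whose historical completion-time distribution makes missing a regulatory deadline likely. Activity names, durations, and the normal-duration model are all assumptions, not the paper's meta data model.

```python
# Toy sketch of deadline-based failure prediction for EH&S workflow
# activities. Names, durations and the normal model are assumptions.
from dataclasses import dataclass
from statistics import NormalDist

@dataclass
class Activity:
    name: str
    deadline_days: float     # days remaining until the regulatory deadline
    mean_days: float         # historical mean completion time
    std_days: float          # historical standard deviation

def risk_of_missing(act: Activity) -> float:
    """P(completion time > deadline) under a normal-duration assumption."""
    return 1.0 - NormalDist(act.mean_days, act.std_days).cdf(act.deadline_days)

activities = [
    Activity("annual fire-safety inspection", deadline_days=10, mean_days=6, std_days=2),
    Activity("hazardous-waste report", deadline_days=4, mean_days=5, std_days=1.5),
]
for a in activities:
    p = risk_of_missing(a)
    print(f"{a.name}: P(miss deadline) = {p:.2f} {'ALERT' if p > 0.2 else 'ok'}")
```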

  19. An Extension of the Rasch Model for Ratings Providing Both Location and Dispersion Parameters.

    Science.gov (United States)

    Andrich, David

    1982-01-01

    An elaboration of a psychometric model for rated data, which belongs to the class of Rasch models, is shown to provide a model with two parameters, one characterizing location and one characterizing dispersion. Characteristics of the dispersion parameter are discussed. (Author/JKS)
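
    As a hedged illustration of a rating model with both a location and a dispersion parameter, the sketch below computes category probabilities for an item whose thresholds are centered at a location delta and spread by a dispersion factor eta. This parameterization is an assumption for demonstration, not necessarily the exact formulation in the paper.

```python
# Illustrative sketch of a Rasch-type rating model with location and
# dispersion parameters. The parameterization is an assumption for
# demonstration, not necessarily the paper's exact model.
import numpy as np

def category_probs(theta: float, delta: float, eta: float,
                   base_thresholds: np.ndarray) -> np.ndarray:
    """P(X = x | theta) for x = 0..m, with thresholds delta + eta * tau_k.
    Larger eta spreads the thresholds (more dispersion across categories)."""
    tau = delta + eta * base_thresholds              # item thresholds
    # Cumulative logits of the partial-credit/rating-scale form:
    psi = np.concatenate(([0.0], np.cumsum(theta - tau)))
    p = np.exp(psi - psi.max())
    return p / p.sum()

tau0 = np.array([-1.0, 0.0, 1.0])                    # centered base thresholds
for eta in (0.5, 1.0, 2.0):
    probs = category_probs(theta=0.2, delta=0.0, eta=eta, base_thresholds=tau0)
    print(eta, np.round(probs, 3))
```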

  20. The simulation of cutoff lows in a regional climate model: reliability and future trends

    Energy Technology Data Exchange (ETDEWEB)

    Grose, Michael R. [University of Tasmania, Antarctic Climate and Ecosystems Cooperative Research Centre (ACE CRC), Private Bag 80, Hobart, TAS (Australia); Pook, Michael J.; McIntosh, Peter C.; Risbey, James S. [CSIRO Marine and Atmospheric Research, Centre for Australian Weather and Climate Research (CAWCR), Hobart, TAS (Australia); Bindoff, Nathaniel L. [University of Tasmania, Antarctic Climate and Ecosystems Cooperative Research Centre (ACE CRC), Private Bag 80, Hobart, TAS (Australia); CSIRO Marine and Atmospheric Research, Centre for Australian Weather and Climate Research (CAWCR), Hobart, TAS (Australia); University of Tasmania, Institute of Marine and Antarctic Studies (IMAS), Private Bag 129, Hobart, TAS (Australia)

    2012-07-15

    Cutoff lows are an important source of rainfall in the mid-latitudes that climate models need to simulate accurately to give confidence in climate projections for rainfall. Coarse-scale general circulation models used for climate studies show some notable biases and deficiencies in the simulation of cutoff lows in the Australian region and of important aspects of the broader circulation, such as atmospheric blocking and the split jet structure observed over Australia. The regional climate model, the Conformal Cubic Atmospheric Model (CCAM), gives an improvement in some aspects of the simulation of cutoffs in the Australian region, including a reduction in the underestimate of the frequency of cutoff days by more than 15% compared to a typical GCM. This improvement is due, at least in part, to substantially higher resolution. However, biases in the simulation of the broader circulation, blocking and the split jet structure are still present. In particular, a northward bias in the central latitude of cutoff lows creates a substantial underestimate of the associated rainfall over Tasmania in April to October. Also, the regional climate model produces a significant north-south distortion of the vertical profile of cutoff lows, with the largest distortion occurring in the cooler months, which was not apparent in GCM simulations. The remaining biases and the presence of new biases demonstrate that increased horizontal resolution is not the only requirement for the reliable simulation of cutoff lows in climate models. Notwithstanding the biases in their simulation, the regional climate model projections show some responses to climate warming that are noteworthy. The projections indicate a marked closing of the split jet in winter. This change is associated with changes to atmospheric blocking in the Tasman Sea, which decreases in June to November (by up to 7.9 m s⁻¹) and increases in December to May. The projections also show a reduction in the number of annual cutoff days by 67

  1. Forecasting consequences of accidental release: how reliable are current assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Rohwer, P.S.; Hoffman, F.O.; Miller, C.W.

    1983-01-01

    This paper focuses on uncertainties in model output used to assess accidents. We begin by reviewing the historical development of assessment models and the associated interest in uncertainties as these evolutionary processes occurred in the United States. This is followed by a description of the sources of uncertainty in assessment calculations. Types of models appropriate for the assessment of accidents are identified. A summary is provided of results from our analyses of uncertainty in results obtained with current methodology for assessing routine and accidental radionuclide releases to the environment. We conclude with a discussion of preferred procedures and suggested future directions to improve the state of the art of radiological assessments.

  2. Reliability estimation for single dichotomous items based on Mokken's IRT model

    NARCIS (Netherlands)

    Meijer, Rob R.; Sijtsma, Klaas; Molenaar, Ivo W.

    1995-01-01

    Item reliability is of special interest for Mokken's nonparametric item response theory, and is useful for the evaluation of item quality in nonparametric test construction research. It is also of interest for nonparametric person-fit analysis. Three methods for the estimation of the reliability of single dichotomous items are discussed.

  4. JUPITER: Joint Universal Parameter IdenTification and Evaluation of Reliability - An Application Programming Interface (API) for Model Analysis

    Science.gov (United States)

    Banta, Edward R.; Poeter, Eileen P.; Doherty, John E.; Hill, Mary C.

    2006-01-01

    The Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER API) improves the computer programming resources available to those developing applications (computer programs) for model analysis. The JUPITER API consists of eleven Fortran-90 modules that provide for encapsulation of data and operations on that data. Each module contains one or more entities: data, data types, subroutines, functions, and generic interfaces. The modules do not constitute computer programs themselves; instead, they are used to construct computer programs. Such computer programs are called applications of the API. The API provides common modeling operations for use by a variety of computer applications. The models being analyzed are referred to here as process models, and may, for example, represent the physics, chemistry, and/or biology of a field or laboratory system. Process models commonly are constructed using published models such as MODFLOW (Harbaugh et al., 2000; Harbaugh, 2005), MT3DMS (Zheng and Wang, 1996), HSPF (Bicknell et al., 1997), PRMS (Leavesley and Stannard, 1995), and many others. The process model may be accessed by a JUPITER API application as an external program, or it may be implemented as a subroutine within a JUPITER API application. In either case, execution of the model takes place in a framework designed by the application programmer. This framework can be designed to take advantage of any parallel-processing capabilities possessed by the process model, as well as the parallel-processing capabilities of the JUPITER API. Model analyses for which the JUPITER API could be useful include, for example:
    * Compare model results to observed values to determine how well the model reproduces system processes and characteristics.
    * Use sensitivity analysis to determine the information provided by observations to parameters and predictions of interest.
    * Determine the additional data needed to improve selected

  5. A support vector machine model provides an accurate transcript-level-based diagnostic for major depressive disorder

    Science.gov (United States)

    Yu, J S; Xue, A Y; Redei, E E; Bagheri, N

    2016-01-01

    Major depressive disorder (MDD) is a critical cause of morbidity and disability, with an economic cost of hundreds of billions of dollars each year, necessitating more effective treatment strategies and novel approaches to translational research. A notable barrier in addressing this public health threat involves reliable identification of the disorder, as many affected individuals remain undiagnosed or misdiagnosed. An objective blood-based diagnostic test using transcript levels of a panel of markers would provide an invaluable tool for MDD, as the infrastructure for similar tests (including equipment, trained personnel, billing, and governmental approval) is well established in clinics worldwide. Here we present a supervised classification model utilizing support vector machines (SVMs) for the analysis of transcriptomic data readily obtained from a peripheral blood specimen. The model was trained on data from subjects with MDD (n=32) and age- and gender-matched controls (n=32). This SVM model provides a cross-validated sensitivity and specificity of 90.6% for the diagnosis of MDD using a panel of 10 transcripts. We applied a logistic equation to the SVM model and quantified a likelihood-of-depression score. This score gives the probability of an MDD diagnosis and allows the tuning of specificity and sensitivity for individual patients, bringing personalized medicine closer in psychiatry. PMID:27779627
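
    A minimal sketch of this kind of pipeline follows, with synthetic data standing in for the 10-transcript panel; this is not the authors' code. In scikit-learn, Platt scaling plays the role of the logistic mapping from the SVM margin to a probability-like score.

```python
# Minimal sketch of an SVM transcript-level classifier with a logistic
# probability mapping. Synthetic data stand in for the 10-transcript panel.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_group, n_transcripts = 32, 10
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_transcripts))
cases = rng.normal(0.6, 1.0, size=(n_per_group, n_transcripts))  # shifted expression
X = np.vstack([controls, cases])
y = np.array([0] * n_per_group + [1] * n_per_group)

# probability=True fits Platt scaling (a logistic map on the SVM margin),
# yielding a "likelihood of depression"-style score.
model = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
model.fit(X, y)
print("score for first subject:", model.predict_proba(X[:1])[0, 1].round(2))
```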

  6. Reliable experimental model of hepatic veno-occlusive disease caused by monocrotaline

    Institute of Scientific and Technical Information of China (English)

    Miao-Yan Chen; Jian-Ting Cai; Qin Du; Liang-Jing Wang; Jia-Min Chen; Li-Ming Shao

    2008-01-01

    BACKGROUND: Hepatic veno-occlusive disease (HVOD) is a severe complication of chemotherapy before hematopoietic stem cell transplantation and of dietary ingestion of pyrrolizidine alkaloids. Many experimental models have been established to study its mechanisms or therapy, but few are ideal. This work aimed at evaluating a rat model of HVOD induced by monocrotaline to help advance research into this disease. METHODS: Thirty-two male rats were randomly classified into 5 groups, and PBS or monocrotaline was administered (100 mg/kg or 160 mg/kg). They were sacrificed on day 7 (groups A, B and D) or day 10 (groups C and E). Blood samples were collected to determine liver enzyme concentrations. The weight of the liver and body and the amount of ascites were measured. Histopathological changes of liver tissue on light microscopy were assessed by a modified Deleve scoring system. The positivity of proliferating cell nuclear antigen (PCNA) was estimated. RESULTS: The rats that were treated with 160 mg/kg monocrotaline presented with severe clinical symptoms (including two deaths) and the histopathological picture of HVOD. On the other hand, the rats that were fed 100 mg/kg monocrotaline had milder and reversible manifestations. Comparison of the rats sacrificed on day 10 with those sacrificed on day 7 showed that the positivity of PCNA increased, especially that of hepatocytes. CONCLUSIONS: Monocrotaline induces acute, dose-dependent HVOD in rats. The model is potentially reversible with a low dose, but reliable and irreversible with a higher dose. The modified scoring system seems to be more accurate than the traditional one in reflecting the histopathology of HVOD. The enhancement of PCNA positivity may be associated with hepatic tissue undergoing recovery.

  7. An Appropriate Wind Model for Wind Integrated Power Systems Reliability Evaluation Considering Wind Speed Correlations

    Directory of Open Access Journals (Sweden)

    Rajesh Karki

    2013-02-01

    Full Text Available Adverse environmental impacts of carbon emissions are causing increasing concern to the general public throughout the world. Electric energy generation from conventional energy sources is considered to be a major contributor to these harmful emissions. High emphasis is therefore being given to green alternatives, such as wind and solar. Wind energy is being perceived as a promising alternative. This energy technology and its applications have undergone significant research and development over the past decade. As a result, many modern power systems include a significant portion of power generation from wind energy sources. The impact of wind generation on overall system performance increases substantially as wind penetration in power systems continues to rise to relatively high levels. It becomes increasingly important to accurately model the wind behavior and the interaction with other wind sources and conventional sources, and to incorporate the characteristics of the energy demand, in order to carry out a realistic evaluation of system reliability. Power systems with high wind penetration are often connected to multiple wind farms at different geographic locations. Wind speed correlations between the different wind farms largely affect the total wind power generation characteristics of such systems, and therefore should be an important parameter in the wind modeling process. This paper evaluates the effect of the correlation between multiple wind farms on the adequacy indices of wind-integrated systems. The paper also proposes a simple and appropriate probabilistic analytical model that incorporates wind correlations and can be used for adequacy evaluation of systems with multiple integrated wind farms.
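
    The paper proposes an analytical model; as a generic illustration of incorporating wind speed correlation between farms, the sketch below samples correlated speeds with a Gaussian copula and Weibull marginals. This technique and all parameters are assumptions chosen for illustration, not the paper's method.

```python
# Illustrative sketch: sampling correlated wind speeds for two wind farms via
# a Gaussian copula with Weibull marginals. All parameters are assumed.
import numpy as np
from scipy.stats import norm, weibull_min

rng = np.random.default_rng(3)
rho = 0.8                                   # wind speed correlation between farms
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = norm.cdf(z)                             # correlated uniform marginals
# Weibull marginals (shape k=2, scale 8 m/s) are a common wind-speed assumption.
speeds = weibull_min.ppf(u, c=2.0, scale=8.0)

print("sample correlation:", np.corrcoef(speeds.T)[0, 1].round(2))
print("mean speeds [m/s]:", speeds.mean(axis=0).round(2))
```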

  8. Evaluation of MCF10A as a Reliable Model for Normal Human Mammary Epithelial Cells.

    Directory of Open Access Journals (Sweden)

    Ying Qu

    Full Text Available Breast cancer is the most common cancer in women and a leading cause of cancer-related deaths among women worldwide. Various cell models have been developed to study breast cancer tumorigenesis, metastasis, and drug sensitivity. The MCF10A human mammary epithelial cell line is a widely used in vitro model for studying normal breast cell function and transformation. However, there is limited knowledge about whether MCF10A cells reliably represent normal human mammary cells. MCF10A cells were grown in monolayer, suspension (mammosphere) culture, three-dimensional (3D) "on-top" Matrigel, 3D "cell-embedded" Matrigel, or mixed Matrigel/collagen I gel. Suspension culture was performed with the MammoCult medium and low-attachment culture plates. Cells grown in 3D culture were fixed and subjected either to immunofluorescence staining or to embedding and sectioning followed by immunohistochemistry and immunofluorescence staining. Cells or slides were stained for protein markers commonly used to identify mammary progenitor and epithelial cells. MCF10A cells expressed markers representing luminal, basal, and progenitor phenotypes in two-dimensional (2D) culture. When grown in suspension culture, MCF10A cells showed low mammosphere-forming ability. Cells in mammospheres and 3D culture expressed both luminal and basal markers. Surprisingly, the acinar structure formed by MCF10A cells in 3D culture was positive for both basal markers and the milk proteins β-casein and α-lactalbumin. MCF10A cells exhibit a unique differentiated phenotype in 3D culture which may not exist, or may be rare, in normal human breast tissue. Our results raise the question of whether the commonly used MCF10A cell line is a suitable model for human mammary cell studies.

  9. Providing Real-time Sea Ice Modeling Support to the U.S. Coast Guard

    Science.gov (United States)

    Allard, Richard; Dykes, James; Hebert, David; Posey, Pamela; Rogers, Erick; Wallcraft, Alan; Phelps, Michael; Smedstad, Ole Martin; Wang, Shouping; Geiszler, Dan

    2016-04-01

    The Naval Research Laboratory (NRL) supported the U.S. Coast Guard Research and Development Center (RDC) through a demonstration project during the summer and autumn of 2015. Specifically, a modeling system composed of a mesoscale atmospheric model, a regional sea ice model, and a regional wave model was loosely coupled to provide real-time 72-hr forecasts of environmental conditions for the Beaufort/Chukchi Seas. The system components included a 2-km regional Community Ice CodE (CICE) sea ice model, a 15-km Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS) atmospheric model, and a 5-km regional WAVEWATCH III wave model. The wave model utilized modeled sea ice concentration fields to incorporate the effects of sea ice on waves. The other modeling components assimilated atmosphere, ocean, and ice observations available from satellite and in situ sources. The modeling system generated daily 72-hr forecasts of synoptic weather (including visibility), ice drift, ice thickness, ice concentration and ice strength for missions within the economic exclusion zone off the coast of Alaska and for a transit to the North Pole in support of the National Science Foundation GEOTRACES cruise. Model forecast graphics were shared on a common web page, with selected graphical products made available via ftp for bandwidth-limited users. Model ice thickness and ice drift show very good agreement with Cold Regions Research and Engineering Laboratory (CRREL) Ice Mass Balance buoys. This demonstration served as a precursor to a fully coupled atmosphere-ocean-wave-ice modeling system under development. National Ice Center (NIC) analysts used these model data products (CICE and COAMPS), along with other existing model and satellite data, to produce the predicted 48-hr position of the ice edge. The NIC served as a liaison with the RDC and NRL to provide feedback on the model predictions. This evaluation provides a baseline analysis of the current models for future comparison studies

  10. Reliability assessment of offshore platforms exposed to wave-in-deck loading. Appendix F: Reliability analysis of offshore jacket structures with wave load on deck using the model correction factor method

    Energy Technology Data Exchange (ETDEWEB)

    Dalsgaard Soerensen, J. [Aalborg Univ., Aalborg (Denmark); Friis-Hansen, P. [Technical Univ. Denmark, Lyngby (Denmark); Bloch, A.; Svejgaard Nielsen, J. [Ramboell, Esbjerg (Denmark)

    2004-08-01

    Different simple stochastic models for failure related to pushover collapse are investigated. Next, a method is proposed to estimate the reliability of real offshore jacket structures. The method is based on the Model Correction Factor Method and can be used very efficiently to estimate the reliability against total failure/collapse of jacket-type platforms with wave-in-deck loads. A realistic example is evaluated, and it is seen that it is possible to perform a probabilistic reliability analysis for the collapse of a jacket-type platform using the Model Correction Factor Method. The total number of deterministic, complicated, non-linear (RONJA) analyses is typically as low as 10. Such reliability analyses are recommended for use in practical applications, especially for cases with wave-in-deck loads, where traditional RSR analyses give poor measures of the structural reliability. (au)

  11. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    Energy Technology Data Exchange (ETDEWEB)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (N = 1-3), with the remainder of the solvent modelled implicitly as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG_obs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to become increasingly inaccurate as more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.

  12. A test-retest reliability study of human experimental models of histaminergic and non-histaminergic itch

    DEFF Research Database (Denmark)

    Andersen, Hjalte Holm; Sørensen, Anne-Kathrine R.; Nielsen, Gebbie A. R.

    2017-01-01

    Numerous exploratory, proof-of-concept and interventional studies have used histaminergic and non-histaminergic human models of itch. However, no reliability studies for such surrogate models have been conducted. This study investigated the test-retest reliability of the response to histamine- and cowhage- (5, 15, 25 spiculae) induced itch in healthy volunteers. Cowhage spiculae were individually applied with tweezers and 1% histamine was applied with a skin prick test (SPT) lancet, both on the volar forearm. The intensity of itch was recorded on a visual analogue scale and the self-reported area...

  13. Does a population survey provide reliable influenza vaccine uptake rates among high-risk groups? A case-study of the Netherlands.

    NARCIS (Netherlands)

    Kroneman, M.W.; Essen, G.A. van; Tacken, M.A.J.B.; Paget, W.J.; Verheij, R.

    2004-01-01

    All European countries have recommendations for influenza vaccination among the elderly and chronically ill. However, only a few countries are able to provide data on influenza vaccine uptake among these groups. The aim of our study is to investigate whether a population survey is an effective method of obtaining reliable influenza vaccine uptake rates among high-risk groups.

  17. Improving model prediction reliability through enhanced representation of wetland soil processes and constrained model auto calibration - A paired watershed study

    Science.gov (United States)

    Sharifi, Amirreza; Lang, Megan W.; McCarty, Gregory W.; Sadeghi, Ali M.; Lee, Sangchul; Yen, Haw; Rabenhorst, Martin C.; Jeong, Jaehak; Yeo, In-Young

    2016-10-01

    Process-based, distributed watershed models possess a large number of parameters that are not directly measured in the field and need to be calibrated, in most cases by matching modeled in-stream fluxes with monitored data. Recently, concern has been raised regarding the reliability of this common calibration practice, because models that are deemed adequately calibrated based on commonly used metrics (e.g., Nash-Sutcliffe efficiency) may not realistically represent intra-watershed responses or fluxes. Such shortcomings stem from the use of evaluation criteria that concern only the global in-stream responses of the model, without investigating intra-watershed responses. In this study, we introduce a modification to the Soil and Water Assessment Tool (SWAT) model and a new calibration technique that collectively reduce the chance of misrepresenting intra-watershed responses. The SWAT model was modified to better represent NO3 cycling in soils with various degrees of water holding capacity. The new calibration tool has the capacity to calibrate paired watersheds simultaneously within a single framework. It was found that when both proposed methodologies were applied jointly to two paired watersheds on the Delmarva Peninsula, the performance of the models as judged by conventional metrics suffered; however, the intra-watershed responses (e.g., mass of NO3 lost to denitrification) in the two models automatically converged to realistic sums. This approach also demonstrates the capacity to spatially distinguish areas of high denitrification potential, an ability that has implications for improved management of prior converted wetlands under crop production and for identifying prominent areas for wetland restoration.
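
    For reference, the conventional global metric mentioned above can be computed in a few lines; the flow arrays below are hypothetical.

```python
# Minimal sketch: Nash-Sutcliffe efficiency (NSE), the conventional global
# calibration metric discussed above. Arrays are hypothetical streamflows.
import numpy as np

def nse(observed: np.ndarray, simulated: np.ndarray) -> float:
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than predicting the mean."""
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3.1, 4.0, 6.5, 9.2, 7.4, 5.0, 3.8])   # observed streamflow
sim = np.array([2.8, 4.3, 6.0, 9.8, 7.0, 5.4, 3.5])   # modeled streamflow
print(f"NSE = {nse(obs, sim):.3f}")
```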

  18. Reliability model for ductile hybrid FRP rebar using randomly dispersed chopped fibers

    Science.gov (United States)

    Behnam, Bashar Ramzi

    Fiber reinforced polymer composites, or simply FRP composites, have become more attractive to civil engineers in the last two decades due to their unique mechanical properties. However, obstacles such as low elastic modulus, non-ductile behavior, high fiber cost, high manufacturing costs, and the absence of a rigorous characterization of the uncertainties of the mechanical properties restrict the use of these composites. When FRP composites are used to develop reinforcing rebars that replace conventional steel in concrete structural members, a major benefit can be achieved, since FRP materials do not corrode. Two FRP rebar models are proposed that make use of multiple types of fibers to achieve ductility, with chopped fibers used to reduce manufacturing costs. In order to reach the optimum fractional volume of each type of fiber, to minimize the cost of the proposed rebars, and to achieve a safe design by considering uncertainties in the materials and the geometry of sections, appropriate material resistance factors have been developed and a reliability-based design optimization (RBDO) has been conducted for the proposed schemes.

  19. CKow -- A More Transparent and Reliable Model for Chemical Transfer to Meat and Milk

    Energy Technology Data Exchange (ETDEWEB)

    Rosenbaum, Ralph K.; McKone, Thomas E.; Jolliet, Olivier

    2009-03-01

    The objective of this study is to increase the understanding and transparency of chemical biotransfer modeling into meat and milk and to explicitly confront the uncertainties in exposure assessments of chemicals that require such estimates. In cumulative exposure assessments that include food pathways, much of the overall uncertainty is attributable to the estimation of transfer into biota and through food webs. Currently, the most commonly used meat- and milk-biotransfer models date back two decades and, in spite of their widespread use in multimedia exposure models, few attempts have been made to advance or improve the outdated and highly uncertain Kow regressions used in these models. Furthermore, in the range of Kow where meat and milk become the dominant human exposure pathways, these models often provide unrealistic rates and do not properly reflect the transfer dynamics. To address these issues, we developed a dynamic three-compartment cow model (called CKow), distinguishing lactating and non-lactating cows. For chemicals without available overall removal rates in the cow, a correlation is derived from measured values reported in the literature to predict this parameter from Kow. Results on carry-over rates (COR) and biotransfer factors (BTF) demonstrate that a steady-state ratio between animal intake and meat concentrations is almost never reached. For meat, empirical data collected in short-term experiments need to be adjusted to provide estimates of average longer-term behavior. The performance of the new model in matching measurements is improved relative to existing models, thus reducing uncertainty. The CKow model is straightforward to apply at steady state for milk and dynamically for realistic exposure durations for meat COR.
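
    The core dynamic argument can be illustrated with a one-compartment first-order model; the real CKow model has three compartments, and the rates below are assumptions, not the paper's values.

```python
# Toy sketch of dynamic chemical transfer to meat as first-order kinetics.
# A one-compartment stand-in for the three-compartment CKow model, showing
# why short-term experiments understate long-term concentrations.
import numpy as np

intake = 1.0          # chemical intake rate [mg/day], assumed
k_elim = 0.01         # overall elimination rate constant [1/day], assumed

def concentration(t_days: np.ndarray) -> np.ndarray:
    """C(t) = (intake / k_elim) * (1 - exp(-k_elim * t)), from dC/dt = intake - k*C."""
    return (intake / k_elim) * (1.0 - np.exp(-k_elim * t_days))

for t in (28, 180, 1000):        # short experiment vs realistic exposure duration
    frac = concentration(np.array([float(t)]))[0] / (intake / k_elim)
    print(f"day {t:4d}: {100 * frac:5.1f}% of steady-state concentration")
```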

  20. How to Obtain a 100% Reliable Grid with Clean, Renewable Wind, Water, and Solar Providing 100% of all Raw Energy for All Purposes

    Science.gov (United States)

    Jacobson, M. Z.; Delucchi, M. A.; Cameron, M. A.; Frew, B. A.

    2016-12-01

    The greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid is the high cost of avoiding load loss caused by WWS variability and uncertainty. This talk discusses the recent development of a new grid integration model to address this issue. The model finds low-cost, no-load-loss, non-unique solutions to this problem upon electrification of all U.S. energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time-series data from a 3-D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen); and using demand response. No natural gas, biofuels, or stationary batteries are needed. The resulting 2050-2055 U.S. electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, stable 100% WWS systems should work many places worldwide. The paper this talk is based on was published in PNAS, 112, 15,060-15,065, 2015, doi:10.1073/pnas.1510028112.

  1. Crop Yield and Area can be Reliably Estimated Using Farmer Supplied Yield Data, Remote Sensing and Crop Models in Australia.

    Science.gov (United States)

    Lawes, R.

    2016-12-01

    The Australian grain-growing region is vast, producing some 25 million tonnes of wheat between latitudes -27 and -42, where soils, crops and climates vary considerably. Predicting the area of individual crops is time consuming and currently conducted by survey, while yield estimates are derived from these areas and from information about grain receivables, with little pre-harvest information available to industry. The existing approach fails to provide reliable, timely, small-scale information about production. Similarly, previous attempts to predict yield using satellite-derived information rely on data collected using the existing systems to calibrate models. We have developed a crop productivity and yield model, called C-Store Crop, that uses remotely sensed vegetation indices along with site-based rainfall, radiation and temperature information. Model calibration using 3000 points derived from farmer-supplied yield maps for wheat, barley, canola and chickpea showed strong relationships (>70%) between modelled plant mass and observed crop yield at the paddock scale. C-Store Crop is being applied at 250 m and 25 m grid resolution. Farmer-supplied yield data were also used to train a combination of radar and Landsat images, collected while the crop is growing, to discriminate between crop types. Landsat information alone was unable to discriminate legume and cereal crops; problems such as cloud prevented accessing appropriate scenes. Inclusion of radar information reduced errors of commission and omission. By combining the C-Store Crop model with remote estimates of crop type, we anticipate predicting crop type and crop yield, with uncertainty estimates, across the Australian continent.

  2. A novel model for determining the amplitude-wavelength limits of track irregularities accompanied by a reliability assessment in railway vehicle-track dynamics

    Science.gov (United States)

    Xu, Lei; Zhai, Wanming

    2017-03-01

    The loads on a vehicle and the vibrations transmitted to track infrastructure by the operation of rolling stock are mainly determined by the irregularities of the track profile. Hence, it is rather important to ascertain the limits of track irregularities, including amplitudes and wavelengths, to guarantee the dynamic performance of running vehicles and guiding tracks. Furthermore, the operation and management levels, as well as the irregularity status, of different railways are highly dissimilar. Therefore, it is necessary to conduct a reliability assessment for a specific railway line. In the present work, a large amount of measured track irregularities is condensed into a grouped form of the track irregularity power spectral density. A track irregularity inversion model is presented to obtain realistic representations of track profile deformations with information regarding amplitudes, wavelengths and probabilities. Then, methodologies for determining the limits of track irregularities and achieving a reliability assessment are presented by introducing the probability density evolution method and developing a Wavelet-Wigner-Hough method. Using the vehicle-track interaction model, numerical studies for confirming the limits of track irregularities and evaluating the reliability of the dynamic performance of the vehicle can be conducted to provide valuable suggestions. This paper offers new possibilities for studying the limit amplitudes and characteristic wavelengths of track irregularities, as well as the corresponding reliabilities, when a railway vehicle runs under different track geometric conditions.
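
    A standard way to realize track profiles from a power spectral density, broadly in the spirit of the inversion step described above, is spectral representation with random phases. The PSD form and constants below are assumptions for illustration, not the paper's grouped PSD.

```python
# Minimal sketch: generating a track irregularity profile from a power
# spectral density via spectral representation (cosines with random phases).
import numpy as np

def sample_profile(x: np.ndarray, psd, w_lo=0.01, w_hi=1.0, n_waves=500,
                   rng=np.random.default_rng(0)) -> np.ndarray:
    """Superpose cosines: z(x) = sum sqrt(2*S(w_k)*dw) * cos(w_k*x + phi_k)."""
    w = np.linspace(w_lo, w_hi, n_waves)          # spatial frequencies [rad/m]
    dw = w[1] - w[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, n_waves)  # independent random phases
    amp = np.sqrt(2.0 * psd(w) * dw)
    return (amp[None, :] * np.cos(np.outer(x, w) + phi)).sum(axis=1)

psd = lambda w: 1e-4 / (w ** 2 + 0.05 ** 2)      # assumed smooth PSD shape
x = np.linspace(0.0, 500.0, 2001)                # 500 m of track
z = sample_profile(x, psd)
print(f"irregularity std over 500 m: {z.std():.4f} (model units)")
```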

  3. Towards a generic, reliable CFD modelling methodology for waste-fired grate boilers

    DEFF Research Database (Denmark)

    Rajh, Boštjan; Yin, Chungen; Samec, Niko

    Computational Fluid Dynamics (CFD) is increasingly used in industry for detailed understanding of the combustion process and for the appropriate design and optimization of Waste-to-Energy (WtE) plants. In this paper, CFD modelling of waste wood combustion in a 13 MW grate-fired boiler in a WtE plant...... the appropriate inlet boundary conditions for the freeboard 3D CFD simulation. Additionally, a refined WSGGM (weighted-sum-of-gray-gases model) of greater accuracy, completeness and applicability is proposed and implemented into the CFD model via user-defined functions (UDFs) to better address the impacts...... is presented. To reduce the risk of slagging, optimize the temperature control and enhance turbulent mixing, part of the flue gas is recycled into the grate boiler. In the simulation, a 1D in-house bed model is developed to simulate the conversion of the waste wood in the fuel bed on the grate, which provides

  4. Model of mechanism of providing of strategic firmness of machine-building enterprise

    Directory of Open Access Journals (Sweden)

    I.V. Movchan

    2011-03-01

    Full Text Available The article considers theoretical aspects of strategic firmness and develops an algorithmic model of the mechanism for providing the strategic firmness of a machine-building enterprise.

  5. Are reactive transport models reliable tools for reconstructing historical contamination scenarios?

    Science.gov (United States)

    Clement, P.

    2009-12-01

    models to reconstruct the historical concentration levels. In this presentation, I will first briefly review the details of the contamination problem and the modeling results. Later I will use the field study to answer the following questions: 1) Are reactive transport modeling tools sufficiently reliable for reconstructing historical VOC contamination at field sites? 2) What are the benefits of using reactive transport models for resolving policy problems related to a groundwater risk/exposure assessment problem? Finally, we will use this example to answer a rhetorical question: how much complexity is too much complexity?

  6. Reliability analysis of repairable systems using system dynamics modeling and simulation

    Science.gov (United States)

    Srinivasa Rao, M.; Naikan, V. N. A.

    2014-07-01

    The study and analysis of repairable standby systems is an important topic in reliability. Analytical techniques become very complicated and unrealistic, especially for modern complex systems. There have been attempts in the literature to evolve more realistic techniques using a simulation approach for the reliability analysis of systems. This paper proposes a hybrid approach called Markov system dynamics (MSD), which combines the Markov approach with system dynamics simulation for reliability analysis and for studying the dynamic behavior of systems. This approach has the advantages of both the Markov and system dynamics methodologies. The proposed framework is illustrated for a standby system with repair. The simulation results, when compared with those obtained by traditional Markov analysis, clearly validate the MSD approach as an alternative approach for reliability analysis.
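
    As a minimal illustration of the Markov half of such an analysis, the Python sketch below integrates the Kolmogorov equations of a two-unit cold-standby system with a single repair crew and compares the transient availability with the closed-form steady state; the rates are illustrative assumptions, and the paper's system dynamics simulator is not reproduced.

      import numpy as np
      from scipy.integrate import solve_ivp

      lam, mu = 0.01, 0.1                        # failure and repair rates (per hour)
      # States: 0 = both units up, 1 = one under repair, 2 = system down.
      Q = np.array([[-lam,        lam,      0.0],
                    [  mu, -(lam + mu),     lam],
                    [ 0.0,         mu,     -mu]])   # generator matrix

      sol = solve_ivp(lambda t, p: p @ Q, (0.0, 2000.0), [1.0, 0.0, 0.0])
      p_end = sol.y[:, -1]
      print(f"transient availability at t = 2000 h: {1.0 - p_end[2]:.6f}")

      # Closed-form steady state of this birth-death chain for comparison:
      rho = lam / mu
      p0 = 1.0 / (1.0 + rho + rho**2)
      print(f"steady-state availability: {1.0 - rho**2 * p0:.6f}")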

  7. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    Science.gov (United States)

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images against plaster models for the assessment of mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were obtained from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to evaluate the data, with P < 0.05 taken as significant. Measurements from CBCT images differed from those obtained directly from the plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
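
    For reference, the Tanaka-Johnston prediction used in such mixed dentition analyses estimates the combined width of the unerupted canine and premolars in a quadrant from the four mandibular incisors; a minimal Python sketch (the incisor measurements in the example are hypothetical):

      def tanaka_johnston(incisor_widths_mm):
          """Return (maxillary, mandibular) predicted canine+premolar segment widths."""
          half_sum = sum(incisor_widths_mm) / 2.0
          return half_sum + 11.0, half_sum + 10.5   # standard Tanaka-Johnston constants (mm)

      # Example with hypothetical mandibular incisor measurements (mm):
      maxilla, mandible = tanaka_johnston([5.2, 5.4, 5.3, 5.1])
      print(f"predicted maxillary segment: {maxilla:.1f} mm, mandibular: {mandible:.1f} mm")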

  8. Reliability Assessment of Solder Joints in Power Electronic Modules by Crack Damage Model for Wind Turbine Applications

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2011-01-01

    , it is necessary to understand the physics of their failure and be able to develop reliability prediction models. Such a model is proposed in this paper for an IGBT power electronic module. IGBTs are critical components in wind turbine converter systems. These are multi-layered devices where layers are soldered...... to each other and they operate at a thermal-power cycling environment. Temperature loadings affect the reliability of soldered joints by developing cracks and fatigue processes that eventually result in failure. Based on Miner’s rule a linear damage model that incorporates a crack development...... and propagation processes is discussed. A statistical analysis is performed for appropriate model parameter selection. Based on the proposed model, a layout for component life prediction with crack movement is described in details....
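
    A minimal Python sketch of the Miner's-rule accumulation underlying such a model: damage contributions n_i/N_i are summed over thermal cycle types, here with a Coffin-Manson-style power-law life model whose constants are illustrative assumptions, not the paper's calibrated crack model.

      def cycles_to_failure(delta_T, A=1.0e10, k=2.5):
          """Hypothetical power-law life model N_f = A * dT**-k for one cycle type."""
          return A * delta_T ** (-k)

      def miner_damage(cycle_counts):
          """cycle_counts: list of (delta_T in K, number of applied cycles)."""
          return sum(n / cycles_to_failure(dT) for dT, n in cycle_counts)

      # Mixed thermal-cycling history; failure is predicted when damage reaches 1.0:
      D = miner_damage([(40.0, 30000), (80.0, 5000), (120.0, 500)])
      print(f"accumulated damage D = {D:.3f} (failure at D >= 1)")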

  9. From network reliability to the Ising model: A parallel scheme for estimating the joint density of states

    Science.gov (United States)

    Ren, Yihui; Eubank, Stephen; Nath, Madhurima

    2016-10-01

    Network reliability is the probability that a dynamical system composed of discrete elements interacting on a network will be found in a configuration that satisfies a particular property. We introduce a reliability property, Ising feasibility, for which the network reliability is the Ising model's partition function. As shown by Moore and Shannon, the network reliability can be separated into two factors: structural, solely determined by the network topology, and dynamical, determined by the underlying dynamics. In this case, the structural factor is known as the joint density of states. Using methods developed to approximate the structural factor for other reliability properties, we simulate the joint density of states, yielding an approximation for the partition function. Based on a detailed examination of why naïve Monte Carlo sampling gives a poor approximation, we introduce a parallel scheme for estimating the joint density of states using a Markov-chain Monte Carlo method with a spin-exchange random walk. This parallel scheme makes simulating the Ising model in the presence of an external field practical on small computer clusters for networks with arbitrary topology with ~10^6 energy levels and more than 10^308 microstates.
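
    For intuition, the joint density of states g(E, M) can be tallied exhaustively for a tiny network and plugged into the partition function; the brute-force Python sketch below (arbitrary 4-node graph, J = 1) shows what the paper's Markov-chain scheme approximates at scales where enumeration is impossible.

      import itertools, collections, math

      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # an arbitrary 4-node graph
      n = 4

      g = collections.Counter()
      for spins in itertools.product((-1, 1), repeat=n):
          E = -sum(spins[i] * spins[j] for i, j in edges)   # interaction energy, J = 1
          M = sum(spins)                                    # magnetization
          g[(E, M)] += 1

      def partition_function(beta, h):
          """Z(beta, h) = sum over (E, M) of g(E, M) * exp(-beta * (E - h*M))."""
          return sum(cnt * math.exp(-beta * (E - h * M)) for (E, M), cnt in g.items())

      print(f"distinct (E, M) levels: {len(g)}")
      print(f"Z(beta=0.5, h=0.2) = {partition_function(0.5, 0.2):.4f}")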

  10. Research on Exponential Reliability Growth Models

    Institute of Scientific and Technical Information of China (English)

    韩庆田; 李文强; 曹文静

    2012-01-01

    A product's reliability growth test usually comprises several stages, each conducted after improvements in design, process or materials, so that reliability rises from stage to stage. Considering the reliability growth characteristics of products in the development phase, and based on the properties of the Duane learning curve, exponential reliability growth models are studied, and the maximum likelihood estimators of the parameters together with a reliability assessment method are given. A case study shows that the model is simple, consistent with engineering practice, and suitable for reliability growth assessment of small-sample products.
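
    As a concrete illustration of Duane-style growth estimation, the Python sketch below gives the standard maximum-likelihood estimates for the closely related Crow-AMSAA power-law model under a time-truncated test; the paper's exponential growth model may use different estimators, and the failure times here are hypothetical.

      import math

      def crow_amsaa_mle(failure_times, T):
          """Time-truncated test over [0, T]: beta = n / sum(ln(T/t_i)), lambda = n / T**beta."""
          n = len(failure_times)
          beta = n / sum(math.log(T / t) for t in failure_times)
          lam = n / T ** beta
          return beta, lam

      times = [4.3, 10.1, 19.7, 33.0, 51.2, 89.5, 141.0, 210.4]   # hypothetical data
      T = 300.0
      beta, lam = crow_amsaa_mle(times, T)
      mtbf_now = 1.0 / (lam * beta * T ** (beta - 1.0))           # instantaneous MTBF at T
      print(f"beta = {beta:.3f} (<1 means reliability growth), current MTBF = {mtbf_now:.1f} h")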

  11. Reliability and validation of a behavioral model of clinical behavioral formulation

    OpenAIRE

    2011-01-01

    The aim of this study was to determine the reliability and content and predictive validity of a clinical case formulation, developed from a behavioral perspective. A mixed design integrating levels of descriptive analysis and A-B case study with follow-up was used. The study established the reliability of the following descriptive and explanatory categories: (a) problem description, (b) predisposing factors, (c) precipitating factors, (d) acquisition and (e) inferred mechanism (maintenance). ...

  12. A staged model for Web services reliability

    Institute of Scientific and Technical Information of China (English)

    谢春丽; 李必信; 王喜凤; 廖力

    2012-01-01

    To analyze the failure process of Web services more comprehensively and improve the accuracy of reliability prediction, a staged model for Web services reliability is proposed. The failure processes of service publication, service discovery, service binding, service composition and service execution under service-oriented architecture (SOA) are described, and a reliability model is built for each stage according to its failure process; the staged models are then integrated into a reliability prediction model for Web services. A case study illustrates the reliability measurement method of the model, and the staged model is compared with a traditional reliability model. The experimental results show that the staged model achieves better reliability prediction accuracy than traditional models and is therefore more applicable to Web services.
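
    Assuming the stages fail independently, the end-to-end reliability of such a staged model reduces to a product over stages; a minimal Python sketch with hypothetical per-stage values (the paper derives each stage's reliability from its own failure-process model rather than fixing constants):

      stage_reliability = {
          "publication": 0.999,
          "discovery":   0.995,
          "binding":     0.990,
          "composition": 0.980,
          "execution":   0.970,
      }

      overall = 1.0
      for stage, r in stage_reliability.items():
          overall *= r                     # series composition of independent stages
      print(f"predicted end-to-end reliability: {overall:.4f}")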

  13. [Interview instrument provides no reliable assessment of suicide risk. Scientific support is lacking according to report from the Swedish Council on Health Technology Assessment (SBU)].

    Science.gov (United States)

    Runeson, Bo

    2015-12-08

    Identifying individuals at risk of future suicide or suicide attempts is of clinical importance. Instruments have been developed to facilitate the assessment of the risk of future suicidal acts. A systematic review was conducted using the standard methods of the Swedish Council on Health Technology Assessment (SBU). The ability of the instruments to predict the risk of future suicide/suicide attempts was assessed at follow-up. The methodological quality of eligible studies was assessed; studies with moderate or low risk of bias were analysed in accordance with GRADE. None of the included studies provided scientific evidence to support that any instrument has sufficient accuracy to predict future suicidal behaviour. There is strong evidence that the SAD PERSONS Scale has very low sensitivity; most persons who later commit suicidal acts are not identified.

  14. Providing a Model for Successful Implementation of Customer Relationship Management (Case Study: Zahedan Industrial City)

    Directory of Open Access Journals (Sweden)

    Amin-Reza Kamalian

    2013-05-01

    Full Text Available This study presents a model for the successful implementation of Customer Relationship Management (CRM) for small and medium-sized enterprises (SMEs) in Zahedan industrial city. Following an extensive theoretical study, the factors influencing the success of customer relationship management were identified. Using a standard questionnaire with a reliability of 96.2 percent (Cronbach's alpha coefficient), the existing and desired situations of these factors were compared from the experts' point of view. The research population consists of industrialists and professionals in Zahedan industrial city. Because of the small population size, data were obtained from the entire population, i.e. 54 companies. This applied study is of a descriptive-analytical type. Data analysis was performed using SPSS software. The results indicated that all factors affecting the success of implementing customer relationship management, except technology, are used in these companies.

  15. Effectiveness of Video Modeling Provided by Mothers in Teaching Play Skills to Children with Autism

    Science.gov (United States)

    Besler, Fatma; Kurt, Onur

    2016-01-01

    Video modeling is an evidence-based practice that can be used to provide instruction to individuals with autism. Studies show that this instructional practice is effective in teaching many types of skills such as self-help skills, social skills, and academic skills. However, in previous studies, videos used in the video modeling process were…

  16. Building the Bridge between Operations and Outcomes : Modelling and Evaluation of Health Service Provider Networks

    NARCIS (Netherlands)

    M. Mahdavi (Mahdi)

    2015-01-01

    The PhD research has two objectives: to develop generally applicable operational models which allow developing the evidence base for health service operations in provider networks, and to contribute to the evidence base by validating the model through application to hea

  17. Maximizing Energy Savings Reliability in BC Hydro Industrial Demand-side Management Programs: An Assessment of Performance Incentive Models

    Science.gov (United States)

    Gosman, Nathaniel

    of alternative performance incentive program models to manage DSM risk in BC. Three performance incentive program models were assessed and compared to BC Hydro's current large industrial DSM incentive program, Power Smart Partners -- Transmission Project Incentives, itself a performance incentive-based program. Together, the selected program models represent a continuum of program design and implementation in terms of the schedule and level of incentives provided, the duration and rigour of measurement and verification (M&V), the energy efficiency measures targeted and the involvement of the private sector. A multi-criteria assessment framework was developed to rank the capacity of each program model to manage BC large industrial DSM risk factors. DSM risk management rankings were then compared to program cost-effectiveness, targeted energy savings potential in BC and survey results from BC industrial firms on the program models. The findings indicate that the reliability of DSM energy savings in the BC large industrial sector can be maximized through performance incentive program models that: (1) offer incentives jointly for capital and low-cost operations and maintenance (O&M) measures, (2) allow flexible lead times for project development, (3) utilize rigorous M&V methods capable of measuring variable-load, process-based energy savings, (4) use moderate contract lengths that align with effective measure life, and (5) integrate energy management software tools capable of providing energy performance feedback to customers to maximize the persistence of energy savings. While this study focuses exclusively on the BC large industrial sector, the findings of this research have applicability to all energy utilities serving large, energy-intensive industrial sectors.

  18. Investigating the links of internal and external reliability with the system conditionality in Gauss-Markov models with uncorrelated observations

    Science.gov (United States)

    Prószyński, Witold

    2013-12-01

    The relationship between internal response-based reliability and conditionality is investigated for Gauss-Markov (GM) models with uncorrelated observations. Models with design matrices of full rank and of incomplete rank are taken into consideration. Formulas based on the Singular Value Decomposition (SVD) of the design matrix are derived which clearly indicate that the investigated concepts are independent of each other. Methods are presented for constructing, for a given design matrix, the matrices equivalent with respect to internal response-based reliability as well as the matrices equivalent with respect to conditionality. To analyze the conditionality of GM models, in general being inconsistent systems, a substitute for the condition number commonly used in numerical linear algebra is developed, called a pseudo-condition number. Also on the basis of the SVD, a formula for external reliability is proposed, being the 2-norm of the vector of parameter distortions induced by a minimal detectable error in a particular observation. For systems with equal nonzero singular values of the design matrix, the formula can be expressed in terms of the index of internal response-based reliability and the pseudo-condition number. With these measures appearing in explicit form, the formula shows, although only for the above specific systems, the character of the impact of internal response-based reliability and conditionality of the model upon its external reliability. Proofs of complementary properties concerning the pseudo-condition number and the 2-norm of parameter distortions in systems with minimal constraints are given in the Appendices. Numerical examples are provided to illustrate the theory.
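
    The SVD-based quantities involved can be computed directly; the Python sketch below derives a pseudo-condition number from the nonzero singular values and per-observation redundancy numbers from the residual projection matrix of a toy design matrix (the paper's exact definitions and normalizations may differ).

      import numpy as np

      # Gauss-Markov model y = A x + e with uncorrelated observations; toy design matrix:
      A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      nz = s[s > 1e-12 * s.max()]                      # nonzero singular values
      pseudo_cond = nz.max() / nz.min()                # ratio of extreme nonzero singular values

      R = np.eye(A.shape[0]) - A @ np.linalg.pinv(A)   # residual (reliability) projection matrix
      redundancy = np.diag(R)                          # per-observation redundancy numbers

      print(f"pseudo-condition number: {pseudo_cond:.3f}")
      print("redundancy numbers:", np.round(redundancy, 3),
            "sum =", round(redundancy.sum(), 3))       # sums to n - rank(A)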

  19. Wire frame: A reliable approach to build sealed engineering geological models

    Science.gov (United States)

    Xu, Nengxiong; Tian, Hong

    2009-08-01

    The technique of 3D geological modeling (3DGM) is an effective tool for representing complex geological objects. To improve the accuracy of geological models used in numerical simulation methods such as finite elements and finite differences, 3DGM can serve as the modeling tool; for this, however, 3DGM must be able to model geological and artificial objects in a unified way, and its geological model must be seamless for mesh generation. We present the concept of a sealed engineering geological model (SEGM) and describe its topological representation. Three kinds of conditions that must be satisfied by an SEGM - geometric continuity, topological consistency and geological consistency - are discussed in detail. A new method for constructing an SEGM based on a wire frame is proposed. It includes three main components: wire frame construction, interface modification and reconstruction, and block tracing. Building a unitary wire frame, which is composed of many simple arcs and connects all interfaces seamlessly, is the key to this method. An algorithm involving two intersection computations and the partition of simple arcs is proposed for building a wire frame. Additionally, we propose a local iterative algorithm for computing fault traces. As an example, we build an SEGM for the dam area of a hydraulic engineering project in Hunan Province, China.

  20. Research on a Multi-dimensional Reliability Model for MANET

    Institute of Scientific and Technical Information of China (English)

    赵志峰; 赵曦滨; 陈丹宁

    2011-01-01

    A mobile ad hoc network (MANET) is a dynamic wireless network that depends on no fixed infrastructure and requires no central control. Its open and autonomous wireless environment, together with its lack of a centre and its dynamic topology, means that a MANET cannot guarantee communication continuity and is vulnerable to various security attacks; compared with traditional networks, MANETs are therefore greatly limited in network reliability. Considering both major factors that affect MANET reliability, namely node mobility and security attacks, a multi-dimensional MANET reliability model is proposed, an experimental analysis of the model results is given, and the key parameters affecting MANET system reliability are identified.

  1. Digital Avionics Information System (DAIS): Life Cycle Cost Impact Modeling System Reliability, Maintainability, and Cost Model (RMCM)--Description. Users Guide. Final Report.

    Science.gov (United States)

    Goclowski, John C.; And Others

    The Reliability, Maintainability, and Cost Model (RMCM) described in this report is an interactive mathematical model with a built-in sensitivity analysis capability. It is a major component of the Life Cycle Cost Impact Model (LCCIM), which was developed as part of the DAIS advanced development program to be used to assess the potential impacts…

  2. Comparison of two model approaches in the Zambezi river basin with regard to model reliability and identifiability

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2006-01-01

    Full Text Available Variations of water stocks in the upper Zambezi river basin have been determined by 2 different hydrological modelling approaches. The purpose was to provide preliminary terrestrial storage estimates in the upper Zambezi, which will be compared with estimates derived from the Gravity Recovery And Climate Experiment (GRACE) in a future study. The first modelling approach is GIS-based, distributed and conceptual (STREAM). The second approach uses Lumped Elementary Watersheds identified and modelled conceptually (LEW). The STREAM model structure has been assessed using GLUE (Generalized Likelihood Uncertainty Estimation) a posteriori to determine parameter identifiability. The LEW approach could, in addition, be tested for model structure, because computational efforts of LEW are low. Both models are threshold models, where the non-linear behaviour of the Zambezi river basin is explained by a combination of thresholds and linear reservoirs. The models were forced by time series of gauged and interpolated rainfall. Where available, runoff station data were used to calibrate the models. Ungauged watersheds were generally given the same parameter sets as their neighbouring calibrated watersheds. It appeared that the LEW model structure could be improved by applying GLUE iteratively. Eventually, it led to better identifiability of parameters and consequently a better model structure than the STREAM model. Hence, the final model structure obtained better represents the true hydrology. After calibration, both models show a comparable efficiency in representing discharge. However, the LEW model shows a far greater storage amplitude than the STREAM model. This emphasizes the storage uncertainty related to hydrological modelling in data-scarce environments such as the Zambezi river basin. It underlines the need and potential for independent observations of terrestrial storage to enhance our understanding and modelling capacity of the hydrological processes. GRACE

  3. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model.

    Science.gov (United States)

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-08-31

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas-Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy.
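
    A Python sketch of the local-tracking half of such a pipeline: pyramidal Lucas-Kanade optical flow with a forward-backward consistency check to discard unreliable correspondences, using OpenCV. The global binary-descriptor matching and the LGF/outlier-factor modules of the paper are omitted, and the window size and threshold values are assumptions.

      import cv2
      import numpy as np

      def track_points(prev_gray, next_gray, pts, fb_thresh=1.0):
          """Return (tracked points, mask of forward-backward-consistent tracks)."""
          lk = dict(winSize=(21, 21), maxLevel=3,
                    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
          fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None, **lk)
          bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, fwd, None, **lk)
          fb_err = np.linalg.norm(pts - bwd, axis=2).ravel()   # round-trip drift per point
          ok = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)
          return fwd, ok

      # Usage with two consecutive grayscale frames:
      # pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
      # new_pts, ok = track_points(prev_gray, next_gray, pts)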

  4. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    Science.gov (United States)

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy. PMID:27589769

  5. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    Directory of Open Access Journals (Sweden)

    Changhong Fu

    2016-08-01

    Full Text Available In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy.

  6. Visual Analogue Scale for Anxiety and Amsterdam Preoperative Anxiety Scale Provide a Simple and Reliable Measurement of Preoperative Anxiety in Patients Undergoing Cardiac Surgery

    Directory of Open Access Journals (Sweden)

    Joaquín Hernández-Palazón

    2015-03-01

    Full Text Available Background: Anxiety is an emotional state characterized by apprehension and fear resulting from the anticipation of a threatening event. Objectives: The present study aimed to analyze the incidence and level of preoperative anxiety in patients scheduled for cardiac surgery by using a Visual Analogue Scale for Anxiety (VAS-A) and the Amsterdam Preoperative Anxiety and Information Scale (APAIS), and to identify the influencing clinical factors. Patients and Methods: This prospective, longitudinal study was performed on 300 cardiac surgery patients in a single university hospital. The patients were assessed regarding their preoperative anxiety level using the VAS-A, the APAIS, and a set of specific anxiety-related questions. Their demographic features as well as their anesthetic and surgical characteristics (ASA physical status, EuroSCORE, preoperative Length of Stay (LoS), and surgical history) were recorded as well. One-way ANOVA and the t-test were then applied, along with odds ratios for risk assessment. Results: According to the results, 94% of the patients presented preoperative anxiety, with 37% developing high anxiety (VAS-A ≥ 7). Preoperative LoS > 2 days was the only significant risk factor for preoperative anxiety (odds ratio = 2.5, 95% CI 1.3 - 5.1, P = 0.009). Besides, a positive correlation was found between anxiety level (APAISa) and requirement of knowledge (APAISk). APAISa and APAISk scores were greater for surgery than for anesthesia. Moreover, the results showed that the most common anxieties resulted from the operation, waiting for surgery, not knowing what is happening, postoperative pain, awareness during anesthesia, and not awakening from anesthesia. Conclusions: The APAIS and VAS-A provide a quantitative assessment of anxiety and a specific qualitative questionnaire for preoperative anxiety in cardiac surgery. According to the results, preoperative LoS > 2 days and lack of information related to surgery were the risk factors for high anxiety levels.

  7. Assuring reliability program effectiveness.

    Science.gov (United States)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  8. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

    A high reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its usage in the community.

  9. Reliability analysis using an exponential power model with bathtub-shaped failure rate function: a Bayes study.

    Science.gov (United States)

    Shehla, Romana; Khan, Athar Ali

    2016-01-01

    Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, capable of assuming an increasing as well as a bathtub-shaped hazard rate, is studied. This article makes a Bayesian study of the model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inference interest focuses on the posterior distribution of non-linear functions of the parameters. The model has also been extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.
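
    Assuming the Smith-Bain parameterization of the exponential power model (an assumption about the paper's exact form), its hazard function and the complete-data log-likelihood a Bayesian sampler would target look as follows in Python; the parameter values in the example are illustrative.

      import math

      # R(t) = exp(1 - exp((lam*t)**alpha)); bathtub-shaped hazard for alpha < 1.
      def hazard(t, alpha, lam):
          return alpha * lam**alpha * t**(alpha - 1.0) * math.exp((lam * t)**alpha)

      def log_likelihood(times, alpha, lam):
          """Complete (uncensored) data: sum of log f(t) = log h(t) + log R(t)."""
          return sum(math.log(hazard(t, alpha, lam)) + 1.0 - math.exp((lam * t)**alpha)
                     for t in times)

      # With alpha = 0.5 the hazard first decreases, then rises again (bathtub):
      for t in (0.05, 2.0, 20.0, 50.0):
          print(f"h({t}) = {hazard(t, alpha=0.5, lam=0.2):.4f}")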

  10. Reliability analysis of an associated system

    Institute of Scientific and Technical Information of China (English)

    陈长杰; 魏一鸣; 蔡嗣经

    2002-01-01

    Based on the engineering reliability of large complex systems and the distinct characteristics of soft systems, some new concepts and theory concerning medium elements and the associated system are created. At the same time, a reliability logic model of the associated system is provided. In this paper, through field investigation of the trial operation, the engineering reliability of the paste fill system in No.2 mine of Jinchuan Non-ferrous Metallic Corporation is analyzed by using the theory of the associated system.

  11. Principles of performance and reliability modeling and evaluation essays in honor of Kishor Trivedi on his 70th birthday

    CERN Document Server

    Puliafito, Antonio

    2016-01-01

    This book presents the latest key research into the performance and reliability aspects of dependable fault-tolerant systems and features commentary on the fields studied by Prof. Kishor S. Trivedi during his distinguished career. Analyzing system evaluation as a fundamental tenet in the design of modern systems, this book uses performance and dependability as common measures and covers novel ideas, methods, algorithms, techniques, and tools for the in-depth study of the performance and reliability aspects of dependable fault-tolerant systems. It identifies the current challenges that designers and practitioners must face in order to ensure the reliability, availability, and performance of systems, with special focus on their dynamic behaviors and dependencies, and provides system researchers, performance analysts, and practitioners with the tools to address these challenges in their work. With contributions from Prof. Trivedi's former PhD students and collaborators, many of whom are internationally recognize...

  12. Structural equation modelling of determinants of customer satisfaction of mobile network providers: Case of Kolkata, India

    Directory of Open Access Journals (Sweden)

    Shibashish Chakraborty

    2014-12-01

    Full Text Available The Indian market of mobile network providers is growing rapidly. India is the second largest market of mobile network providers in the world and there is intense competition among existing players. In such a competitive market, customer satisfaction becomes a key issue. The objective of this paper is to develop a customer satisfaction model of mobile network providers in Kolkata. The results indicate that generic requirements (an aggregation of output quality and perceived value), flexibility, and price are the determinants of customer satisfaction. This study offers insights for mobile network providers to understand the determinants of customer satisfaction.

  13. Providing a Connection between a Bayesian Inverse Modeling Tool and a Coupled Hydrogeological Processes Modeling Software

    Science.gov (United States)

    Frystacky, H.; Osorio-Murillo, C. A.; Over, M. W.; Kalbacher, T.; Gunnell, D.; Kolditz, O.; Ames, D.; Rubin, Y.

    2013-12-01

    The Method of Anchored Distributions (MAD) is a Bayesian technique for characterizing the uncertainty in geostatistical model parameters. Open-source software has been developed in a modular framework such that this technique can be applied to any forward model software via a driver. This presentation is about the driver that has been developed for OpenGeoSys (OGS), open-source software that can simulate many hydrogeological processes, including coupled processes. MAD allows the use of multiple data types for conditioning the spatially random fields and assessing model parameter likelihood. For example, if simulating flow and mass transport, the inversion target variable could be hydraulic conductivity and the inversion data types could be head, concentration, or both. The driver detects from the OGS files which processes and variables are being used in a given project and allows MAD to prompt the user to choose those that are to be modeled or to be treated deterministically. In this way, any combination of processes allowed by OGS can have MAD applied. As for the software, there are two versions, each with its own OGS driver. A Windows desktop version is available as a graphical user interface and is ideal for the learning and teaching environment. High-throughput computing can even be achieved with this version via HTCondor if large projects are to be pursued in a computer lab. In addition to this desktop application, a Linux version is available, equipped with MPI such that it can be run in parallel on a computer cluster. All releases can be downloaded from the MAD Codeplex site given below.

  14. The Reliability Study of QCA Flip-flops Based on a Probability Model

    Institute of Scientific and Technical Information of China (English)

    黄宏图; 蔡理; 杨晓阔; 李政操

    2012-01-01

    By opening up the feedback loop of the sequential logic circuit and adding an input on the basis of the original circuit structure, reliability models of QCA-based RS, D and JK flip-flops are established using the probabilistic transfer matrix method. The differing effects of the individual components on overall reliability are analyzed in depth, which provides a basis for improving reliability and offers significant guidance for the reliability design of high-defect-rate QCA circuits.
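
    A minimal Python sketch of the probabilistic transfer matrix idea for a toy two-gate circuit: a 3-input majority gate (the basic QCA primitive) feeding an inverter, each flipping its output with an assumed probability eps. The paper applies the same machinery to full flip-flop structures; the gate choice and eps value here are illustrative.

      import numpy as np

      def itm_majority():
          """8x2 ideal transfer matrix of MAJ3: rows are input patterns 000..111."""
          m = np.zeros((8, 2))
          for i in range(8):
              bits = [(i >> k) & 1 for k in range(3)]
              m[i, int(sum(bits) >= 2)] = 1.0
          return m

      def faulty(itm, eps):
          """Flip the (single-bit) output with probability eps."""
          flip = np.array([[1 - eps, eps], [eps, 1 - eps]])
          return itm @ flip

      def reliability(ptm, itm):
          """Probability of the ideal output, averaged over uniform inputs."""
          return float(np.mean(np.sum(ptm * itm, axis=1)))

      eps = 0.02
      inv = np.array([[0.0, 1.0], [1.0, 0.0]])             # ideal inverter ITM
      circuit_itm = itm_majority() @ inv                   # ideal MAJ3 -> NOT circuit
      circuit_ptm = faulty(itm_majority(), eps) @ faulty(inv, eps)   # serial stages multiply
      print(f"circuit reliability at eps={eps}: {reliability(circuit_ptm, circuit_itm):.4f}")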

  15. Measuring the Quality of Services Provided for Outpatients in Kowsar Clinic in Ardebil City Based on the SERVQUAL Model

    Directory of Open Access Journals (Sweden)

    Hasan Ghobadi

    2014-12-01

    Full Text Available Background & objectives: Today, the concept of quality of services is particularly important in health care, and customer satisfaction can be defined by comparing expectations of the services with perceptions of the services provided. The aim of this study was to evaluate the quality of services provided for outpatients in a clinic of Ardebil city based on the SERVQUAL model. Methods: This descriptive study was conducted on 650 patients referred to the outpatient clinic from July to September 2013, using the standardized SERVQUAL questionnaire (1988) with confirmed reliability and validity. The paired t-test and Friedman test were used for data analysis with SPSS software. Results: 56.1% of respondents were male and 43.9% were female. The mean age of patients was 33 ± 11.91 years; 68.9% of patients were from Ardebil and 27.3% had a bachelor's degree or higher. The results showed a significant difference between the perceptions and expectations of the patients regarding the five dimensions of service quality (tangibility, reliability, assurance, responsiveness, and empathy) in the studied clinic (P < 0.001). The largest mean gap was related to empathy and the smallest to assurance. Conclusion: Regarding the observed differences in quality, managers and planners have to evaluate their performance more accurately in order to plan better for future actions. In fact, any effort to reduce the gap between the expectations and perceptions of patients results in greater satisfaction, loyalty and further visits to the organization.
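
    The SERVQUAL gap mechanics described above reduce to per-dimension differences tested with a paired t-test; the Python sketch below uses synthetic Likert-style scores purely to show the computation, not the study's data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      dimensions = ["tangibility", "reliability", "assurance", "responsiveness", "empathy"]
      n = 120                                              # hypothetical respondents

      for dim in dimensions:
          expectation = rng.normal(4.3, 0.5, n)            # 5-point Likert-style scores
          perception = rng.normal(3.8, 0.6, n)
          gap = perception - expectation                   # SERVQUAL gap per respondent
          t, p = stats.ttest_rel(perception, expectation)  # paired t-test
          print(f"{dim:>14}: mean gap = {gap.mean():+.2f}, t = {t:.2f}, p = {p:.2g}")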

  16. The reliable solution and computation time of variable parameters Logistic model

    CERN Document Server

    Pengfei, Wang

    2016-01-01

    The reliable computation time (RCT, marked as Tc) when applying a double-precision computation of a variable parameters logistic map (VPLM) is studied. First, using the proposed method, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different. However, the mean Tc tends to a constant value once the sample number is large enough. The maximum, minimum and probability distribution function of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments of the logistic map was obtained, and the results suggested that this Tc matches the value predicted by the theoretical formula.
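
    The notion of a reliable computation time can be illustrated by iterating the map in double precision alongside a high-precision reference and recording the first step at which they diverge beyond a tolerance; the Python sketch below (using the mpmath library) employs a toy parameter schedule, not the paper's non-stationary experiments.

      from mpmath import mp, mpf

      mp.dps = 100                                   # 100 significant digits for the reference

      def reliable_steps(x0, steps=2000, tol=1e-3):
          x_d, x_hp = x0, mpf(x0)
          for n in range(1, steps + 1):
              mu = 3.9 + 0.05 * (n % 7) / 7.0        # toy time-varying parameter mu(n)
              x_d = mu * x_d * (1.0 - x_d)           # double-precision iteration
              x_hp = mpf(mu) * x_hp * (1 - x_hp)     # high-precision reference iteration
              if abs(x_d - float(x_hp)) > tol:
                  return n                           # first unreliable step, an estimate of Tc
          return steps

      print("Tc for a few initial values:", [reliable_steps(x0) for x0 in (0.1, 0.2, 0.3, 0.4)])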

  17. Social models provide a norm of appropriate food intake for young women.

    Directory of Open Access Journals (Sweden)

    Lenny R Vartanian

    Full Text Available It is often assumed that social models influence people's eating behavior by providing a norm of appropriate food intake, but this hypothesis has not been directly tested. In three experiments, female participants were exposed to a low-intake model, a high-intake model, or no model (control condition. Experiments 1 and 2 used a remote-confederate manipulation and were conducted in the context of a cookie taste test. Experiment 3 used a live confederate and was conducted in the context of a task during which participants were given incidental access to food. Participants also rated the extent to which their food intake was influenced by a variety of factors (e.g., hunger, taste, how much others ate. In all three experiments, participants in the low-intake conditions ate less than did participants in the high-intake conditions, and also reported a lower perceived norm of appropriate intake. Furthermore, perceived norms of appropriate intake mediated the effects of the social model on participants' food intake. Despite the observed effects of the social models, participants were much more likely to indicate that their food intake was influenced by taste and hunger than by the behavior of the social models. Thus, social models appear to influence food intake by providing a norm of appropriate eating behavior, but people may be unaware of the influence of a social model on their behavior.

  18. Gearbox Reliability Collaborative Update (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, S.; Keller, J.; Glinsky, C.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  19. Value-added strategy models to provide quality services in senior health business.

    Science.gov (United States)

    Yang, Ya-Ting; Lin, Neng-Pai; Su, Shyi; Chen, Ya-Mei; Chang, Yao-Mao; Handa, Yujiro; Khan, Hafsah Arshed Ali; Elsa Hsu, Yi-Hsin

    2017-06-20

    The rapid aging of the population is now a global issue. The increase in the elderly population will impact the health care industry and health enterprises; various senior needs will promote the growth of the senior health industry. Most senior health studies are focused on the demand side and scarcely on supply. Our study selected quality enterprises focused on aging health and analyzed the different strategies they use to provide excellent quality services. We selected 33 quality senior health enterprises in Taiwan and investigated their quality service strategies through face-to-face semi-structured in-depth interviews with the CEO and managers of each enterprise in 2013. Setting: a total of 33 senior health enterprises in Taiwan. Participants: 65 CEOs and managers of the 33 enterprises were interviewed individually. Intervention: none. Main outcome measures: core values and vision, organization structure, quality services provided, and strategies for quality services. The results indicated four types of value-added strategy models adopted by senior enterprises to offer quality services: (i) residential care and co-residence model, (ii) home care and living-in-place model, (iii) community e-business experience model and (iv) virtual and physical portable device model. The common feature of these four strategy models is that the services provided are elderly-centered. The models offer virtual and physical integration, and also offer total solutions for the elderly and their caregivers. Through investigation of successful strategy models for providing quality services to seniors, we identified opportunities to develop innovative service models and their success characteristics, and policy implications were summarized. The observations from this study will serve as a primary evidence base for enterprises developing their senior market and for promoting value co-creation through dialogue between customers and those who deliver services.

  20. Material and structural mechanical modelling and reliability of thin-walled bellows at cryogenic temperatures. Application to LHC compensation system

    CERN Document Server

    Garion, Cédric; Skoczen, Blazej

    The present thesis is dedicated to the behaviour of austenitic stainless steels at cryogenic temperatures. The plastic-strain-induced martensitic transformation and ductile damage are taken into account in an elastic-plastic material model. The kinetic law of the γ→α' transformation and the evolution laws of mixed kinematic/isotropic hardening are established. The damage issue is analysed in different ways: a mesoscopic isotropic or orthotropic model and a microscopic approach. The material parameters are measured on 316L fine-gauge sheet at three temperature levels: 293 K, 77 K and 4.2 K. The model is applied to the thin-walled corrugated shells used in the LHC interconnections. The influence of the material properties on stability is studied by a modal analysis. The reliability of the components, described by the Weibull distribution law, is analysed from fatigue tests. The impact of geometrical imperfections and thermo-mechanical loads on reliability is also analysed.

  1. Slab2 - Providing updated subduction zone geometries and modeling tools to the community

    Science.gov (United States)

    Hayes, G. P.; Hearne, M. G.; Portner, D. E.; Borjas, C.; Moore, G.; Flamme, H.

    2015-12-01

    The U.S. Geological Survey database of global subduction zone geometries (Slab1.0) combines a variety of geophysical data sets (earthquake hypocenters, moment tensors, active source seismic survey images of the shallow subduction zone, bathymetry, trench locations, and sediment thickness information) to image the shape of subducting slabs in three dimensions, at approximately 85% of the world's convergent margins. The database is used extensively for a variety of purposes, from earthquake source imaging, to magnetotelluric modeling. Gaps in Slab1.0 exist where input data are sparse and/or where slabs are geometrically complex (and difficult to image with an automated approach). Slab1.0 also does not include information on the uncertainty in the modeled geometrical parameters, or the input data used to image them, and provides no means to reproduce the models it described. Currently underway, Slab2 will update and replace Slab1.0 by: (1) extending modeled slab geometries to all global subduction zones; (2) incorporating regional data sets that may describe slab geometry in finer detail than do previously used teleseismic data; (3) providing information on the uncertainties in each modeled slab surface; (4) modifying our modeling approach to a fully-three dimensional data interpolation, rather than following the 2-D to 3-D steps of Slab1.0; (5) migrating the slab modeling code base to a more universally distributable language, Python; and (6) providing the code base and input data we use to create our models, such that the community can both reproduce the slab geometries, and add their own data sets to ours to further improve upon those models in the future. In this presentation we describe our vision for Slab2, and the first results of this modeling process.

  2. Provider dismissal policies and clustering of vaccine-hesitant families: an agent-based modeling approach.

    Science.gov (United States)

    Buttenheim, Alison M; Cherng, Sarah T; Asch, David A

    2013-08-01

    Many pediatric practices have adopted vaccine policies that require parents who refuse to vaccinate according to the ACIP schedule to find another health care provider. Such policies may inadvertently cluster unvaccinated patients into practices that tolerate non-vaccination or alternative schedules, turning them into risky pockets of low herd immunity. The objective of this study was to assess the effect of provider zero-tolerance vaccination policies on the clustering of intentionally unvaccinated children. We developed an agent-based model of parental vaccine hesitancy, provider non-vaccination tolerance, and the selection of patients into pediatric practices. We ran 84 experiments across a range of parental hesitancy and provider tolerance scenarios. When the model is initialized, all providers accommodate refusals and intentionally unvaccinated children are evenly distributed across providers. As provider tolerance decreases, hesitant children become more clustered in a smaller number of practices and eventually are not able to find a practice that will accept them. Each of these effects becomes more pronounced as the level of hesitancy in the population rises. Heterogeneity in practice tolerance of vaccine-hesitant parents has the unintended result of concentrating susceptible individuals within a small number of tolerant practices, while providing little if any compensatory protection to adherent individuals. These externalities suggest an agenda for stricter policy regulation of individual practice decisions.
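
    The clustering mechanism can be illustrated with a few lines of simulation: hesitant families dismissed by zero-tolerance practices re-sort into tolerant ones, concentrating susceptibility there. All counts and rates in the Python sketch below are illustrative, and the published model is considerably richer.

      import random

      random.seed(7)
      N_FAMILIES, N_PROVIDERS = 5000, 50
      hesitancy_rate, tolerant_share = 0.10, 0.30

      tolerant = [random.random() < tolerant_share for _ in range(N_PROVIDERS)]
      tolerant_ids = [i for i, t in enumerate(tolerant) if t]

      practice = [[] for _ in range(N_PROVIDERS)]       # hesitant flags per practice
      for _ in range(N_FAMILIES):
          hesitant = random.random() < hesitancy_rate
          p = random.randrange(N_PROVIDERS)
          if hesitant and not tolerant[p]:              # dismissed: re-sort to a tolerant practice
              p = random.choice(tolerant_ids)
          practice[p].append(hesitant)

      shares = [sum(f) / len(f) for f in practice if f]
      print(f"max hesitant share in any practice: {max(shares):.2%} "
            f"(population rate {hesitancy_rate:.0%})")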

  3. Assessment of physician competency in patient education : Reliability and validity of a model-based instrument

    NARCIS (Netherlands)

    Wouda, Jan C.; Zandbelt, Linda C.; Smets, Ellen M. A.; van de Wiel, Harry B. M.

    2011-01-01

    Objective: Establish the inter-rater reliability and the concept, convergent and construct validity of an instrument for assessing the competency of physicians in patient education. Methods: Three raters assessed the quality of patient education in 30 outpatient consultations with the CELI instrumen

  4. 75 FR 29962 - Special Conditions: Cirrus Design Corporation Model SF50 Airplane; Function and Reliability Testing

    Science.gov (United States)

    2010-05-28

    .... Flight into Known Icing. Discussion Before Amendment 3-4, Section 3.19 of Civil Air Regulation (CAR) part... components, and equipment are reliable, and function properly.'' Amendment 3-4 to CAR part 3 became effective... section must include-- (1) For aircraft incorporating turbine engines of a type not previously used in a...

  5. A Simulation Model for Machine Efficiency Improvement Using Reliability Centered Maintenance: Case Study of Semiconductor Factory

    Directory of Open Access Journals (Sweden)

    Srisawat Supsomboon

    2014-01-01

    Full Text Available The purpose of this study was to increase product quality by focusing on machine efficiency improvement. The principle of reliability centered maintenance (RCM) was applied to increase machine reliability. The objective was to create a preventive maintenance plan under the reliability centered maintenance method and to reduce defects. The study target was set to reduce the Lead PPM of a test machine by simulating the proposed preventive maintenance plan. The simulation optimization approach based on evolutionary algorithms was employed for the preventive maintenance technique selection process, to select the PM interval that gave the best total cost and Lead PPM values. The research methodology includes procedures such as prioritizing critical components in the test machine, analyzing damage and risk levels by using Failure Mode and Effects Analysis (FMEA), calculating suitable replacement periods through reliability estimation, and optimizing the preventive maintenance plan. The results of the study show that the Lead PPM of the test machine can be reduced, and that the cost of preventive maintenance, the cost of good product, and the cost of lost product were all decreased.
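
    The FMEA prioritization step mentioned above is commonly done by ranking failure modes on the risk priority number RPN = severity x occurrence x detection (each scored 1-10); a Python sketch with hypothetical failure modes and scores, not the study's machine data:

      failure_modes = [
          ("test contactor wear",   8, 6, 4),   # (name, severity, occurrence, detection)
          ("handler misalignment",  7, 4, 3),
          ("temperature drift",     6, 5, 7),
          ("vacuum pad leakage",    5, 7, 5),
      ]

      ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
      for name, s, o, d in ranked:
          print(f"RPN {s * o * d:4d}  S={s} O={o} D={d}  {name}")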

  6. Between-Person and Within-Person Subscore Reliability: Comparison of Unidimensional and Multidimensional IRT Models

    Science.gov (United States)

    Bulut, Okan

    2013-01-01

    The importance of subscores in educational and psychological assessments is undeniable. Subscores yield diagnostic information that can be used for determining how each examinee's abilities/skills vary over different content domains. One of the most common criticisms about reporting and using subscores is insufficient reliability of subscores.…

  7. Modeling Potential Surface and Shallow Groundwater Storage Provided by Beaver Ponds Across Watersheds

    Science.gov (United States)

    Hafen, K.; Wheaton, J. M.; Macfarlane, W.

    2016-12-01

    Damming of streams by North American Beaver (Castor canadensis) has been shown to provide a host of potentially desirable hydraulic and hydrologic impacts. Notably, increases in surface water storage and groundwater storage may alter the timing and delivery of water around individual dams and dam complexes. Anecdotal evidence suggests these changes may be important for increasing and maintaining baseflow and even helping some intermittent streams flow perennially. In the arid west, these impacts could be particularly salient in the face of climate change. However, few studies have examined the hydrologic impacts of beaver dams at scales large enough to provide insight for water management, in part because understanding or modeling these impacts at large spatial scales has been precluded by uncertainty concerning the number of beaver dams a drainage network can support. Using the recently developed Beaver Restoration Assessment Tool (BRAT) to identify possible densities and spatial configurations of beaver dams, we developed a model that predicts the area and volume of surface water storage associated with dams of various sizes, and applied this model at different dam densities across multiple watersheds (HUC12) in northern Utah. We then used model results as inputs to the MODFLOW groundwater model to identify the subsequent changes to shallow groundwater storage. The spatially explicit water storage estimates produced by our approach will be useful in evaluating potential beaver restoration and conservation, and will also provide necessary information for developing hydrologic models to specifically identify the effects beaver dams may have on water delivery and timing.

  8. A dynamic reliability evaluation model for Web services composition

    Institute of Scientific and Technical Information of China (English)

    梁员宁; 陈喆; 谢立军

    2012-01-01

    To evaluate the reliability of Web service composition reasonably, efficiently and dynamically, and to provide service requesters with high-quality composite services, a dynamic reliability evaluation model for Web service composition is proposed. The model semantically pre-processes the Web services published to the UDDI (universal description, discovery and integration) registry by service providers. According to the logical composition relationships between semantic Web services, an automatic Web service composition framework is constructed on the basis of pre-reasoning technology, an automatic composition algorithm for Web services is put forward, and the path structures of Web service composition schemes are established. Stochastic Petri nets are then used to build reliability models for the composite path structures that satisfy the requesters' requirements, and the reliability of the Web service composition is evaluated dynamically by combining reliability information about the Web services obtained online. Analysis of experimental examples shows that the proposed model ensures the validity of Web service composition schemes, improves composition efficiency, and offers strong dynamism and adaptability for the reliability evaluation of Web service composition.

  9. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    Science.gov (United States)

    Salah, Ahmad M.; Nelson, E. James; Williams, Gustavious P.

    2010-04-01

    We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models which provide a stochastic simulation frame. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS) which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.
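
    One concrete linking step implied by the differing discretizations is temporal resampling of boundary conditions. A minimal sketch, assuming hourly overland-flow output driving a 15-minute hydrodynamic inflow boundary; the flows and time steps are illustrative, not WMS internals:

    ```python
    # Resample an overland-flow model's outflow series onto the
    # hydrodynamic model's finer time step so it can drive an inflow
    # boundary condition, then check that volume is conserved.
    import numpy as np

    # GSSHA-style output: hourly outflow at the watershed outlet (m^3/s).
    t_hydro = np.arange(0, 24, 1.0)
    q_hydro = np.array([1, 1, 2, 5, 12, 20, 18, 14, 10, 8, 6, 5,
                        4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 1], dtype=float)

    # CE-QUAL-W2-style input: 15-minute boundary inflows.
    t_wq = np.arange(0, 23.01, 0.25)
    q_wq = np.interp(t_wq, t_hydro, q_hydro)   # linear temporal interpolation

    # Mass check: the resampled series should conserve total volume closely.
    v_in = np.trapz(q_hydro, t_hydro) * 3600.0
    v_out = np.trapz(q_wq, t_wq) * 3600.0
    print(f"volume before {v_in:.0f} m^3, after {v_out:.0f} m^3")
    ```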

  10. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    Directory of Open Access Journals (Sweden)

    Ahmad M Salah

    2010-12-01

    Full Text Available We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models which provide a stochastic simulation frame. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.

  11. Parameter sensitivity analysis of stochastic models provides insights into cardiac calcium sparks.

    Science.gov (United States)

    Lee, Young-Seon; Liu, Ona Z; Hwang, Hyun Seok; Knollmann, Bjorn C; Sobie, Eric A

    2013-03-05

    We present a parameter sensitivity analysis method that is appropriate for stochastic models, and we demonstrate how this analysis generates experimentally testable predictions about the factors that influence local Ca(2+) release in heart cells. The method involves randomly varying all parameters, running a single simulation with each set of parameters, running simulations with hundreds of model variants, then statistically relating the parameters to the simulation results using regression methods. We tested this method on a stochastic model, containing 18 parameters, of the cardiac Ca(2+) spark. Results show that multivariable linear regression can successfully relate parameters to continuous model outputs such as Ca(2+) spark amplitude and duration, and multivariable logistic regression can provide insight into how parameters affect Ca(2+) spark triggering (a probabilistic process that is all-or-none in a single simulation). Benchmark studies demonstrate that this method is less computationally intensive than standard methods by a factor of 16. Importantly, predictions were tested experimentally by measuring Ca(2+) sparks in mice with knockout of the sarcoplasmic reticulum protein triadin. These mice exhibit multiple changes in Ca(2+) release unit structures, and the regression model both accurately predicts changes in Ca(2+) spark amplitude (30% decrease in model, 29% decrease in experiments) and provides an intuitive and quantitative understanding of how much each alteration contributes to the result. This approach is therefore an effective, efficient, and predictive method for analyzing stochastic mathematical models to gain biological insight.
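
    The regression step of the method can be sketched compactly. A minimal illustration on a toy stochastic model (not the 18-parameter spark model): parameters are varied randomly, one simulation is run per variant, and standardized regression coefficients rank each parameter's influence:

    ```python
    # Regression-based sensitivity analysis for a stochastic model:
    # randomly vary all parameters, run one simulation per variant, then
    # regress standardized outputs on standardized parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    n_variants, n_params = 300, 5

    # 1) Randomly vary all parameters around a baseline.
    theta = rng.uniform(0.5, 2.0, size=(n_variants, n_params))

    def simulate(p):
        """Toy stochastic 'simulation': output depends on some parameters."""
        true_w = np.array([2.0, -1.0, 0.0, 0.5, 0.0])
        return p @ true_w + rng.normal(scale=0.5)

    # 2) A single simulation with each parameter set.
    y = np.array([simulate(p) for p in theta])

    # 3) Standardize and regress; coefficients rank parameter influence.
    X = (theta - theta.mean(0)) / theta.std(0)
    yz = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(X, yz, rcond=None)
    for i, c in enumerate(coef):
        print(f"parameter {i}: sensitivity {c:+.2f}")
    ```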

  12. Providing evidence of likely being on time – Counterexample generation for CTMC model checking

    NARCIS (Netherlands)

    Han, T.; Katoen, J.P.; Namjoshi, K.; Yoneda, T.; Higashino, T.; Okamura, Y.

    2007-01-01

    Probabilistic model checkers typically provide a list of individual state probabilities on the refutation of a temporal logic formula. For large state spaces, this information is far too detailed to act as useful diagnostic feedback. For quantitative (constrained) reachability problems, sets of path

  13. Physical Models that Provide Guidance in Visualization Deconstruction in an Inorganic Context

    Science.gov (United States)

    Schiltz, Holly K.; Oliver-Hoyo, Maria T.

    2012-01-01

    Three physical model systems have been developed to help students deconstruct the visualization needed when learning symmetry and group theory. The systems provide students with physical and visual frames of reference to facilitate the complex visualization involved in symmetry concepts. The permanent reflection plane demonstration presents an…

  14. Using a Behavior Modeling Approach to Teach Students the Art of Providing and Receiving Verbal Feedback

    Science.gov (United States)

    Maritz, Carol A.

    2008-01-01

    Using a behavior modeling approach, this study examined how students' perceived self-efficacy improved as they developed, delivered, and evaluated professional presentations. Using journal entries and a self-efficacy assessment, students' perceived self-efficacy increased as they learned to provide and receive verbal peer feedback, and to stage…

  16. Selection of reliable biomarkers from PCR array analyses using relative distance computational model: methodology and proof-of-concept study.

    Directory of Open Access Journals (Sweden)

    Chunsheng Liu

    Full Text Available It is increasingly evident that monitoring chemical exposure through biomarkers is difficult, as almost none of the biomarkers proposed so far is specific to any individual chemical. In this proof-of-concept study, adult male zebrafish (Danio rerio) were exposed to 5 or 25 µg/L 17β-estradiol (E2), 100 µg/L lindane, 5 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) or 15 mg/L arsenic for 96 h, and the expression profiles of 59 genes involved in 7 pathways plus 2 well characterized biomarker genes, vtg1 (vitellogenin1) and cyp1a1 (cytochrome P450 1A1), were examined. A relative distance (RD) computational model was developed to screen favorable genes and generate appropriate gene sets for the differentiation of the chemicals/concentrations selected. Our results demonstrated that the known biomarker genes were not always good candidates for differentiating pairs of chemicals/concentrations, and other genes had higher potential in some cases. Furthermore, differentiation of the 5 chemicals/concentrations examined was attainable using expression data of various gene sets, and the best combination was the set consisting of 50 genes; however, as few as two genes (e.g. vtg1 and hspa5 [heat shock protein 5]) were sufficient to differentiate the five chemical/concentration groups in the present test. These observations suggest that multi-parameter arrays should be more reliable for biomonitoring of chemical exposure than traditional biomarkers, and the RD computational model provides an effective tool for the selection of parameters and generation of parameter sets.

  17. Selection of reliable biomarkers from PCR array analyses using relative distance computational model: methodology and proof-of-concept study.

    Science.gov (United States)

    Liu, Chunsheng; Xu, Hongyan; Lam, Siew Hong; Gong, Zhiyuan

    2013-01-01

    It is increasingly evident that monitoring chemical exposure through biomarkers is difficult, as almost none of the biomarkers proposed so far is specific to any individual chemical. In this proof-of-concept study, adult male zebrafish (Danio rerio) were exposed to 5 or 25 µg/L 17β-estradiol (E2), 100 µg/L lindane, 5 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) or 15 mg/L arsenic for 96 h, and the expression profiles of 59 genes involved in 7 pathways plus 2 well characterized biomarker genes, vtg1 (vitellogenin1) and cyp1a1 (cytochrome P450 1A1), were examined. A relative distance (RD) computational model was developed to screen favorable genes and generate appropriate gene sets for the differentiation of the chemicals/concentrations selected. Our results demonstrated that the known biomarker genes were not always good candidates for differentiating pairs of chemicals/concentrations, and other genes had higher potential in some cases. Furthermore, differentiation of the 5 chemicals/concentrations examined was attainable using expression data of various gene sets, and the best combination was the set consisting of 50 genes; however, as few as two genes (e.g. vtg1 and hspa5 [heat shock protein 5]) were sufficient to differentiate the five chemical/concentration groups in the present test. These observations suggest that multi-parameter arrays should be more reliable for biomonitoring of chemical exposure than traditional biomarkers, and the RD computational model provides an effective tool for the selection of parameters and generation of parameter sets.
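
    The abstract does not give the RD formula, so the following is one plausible reading, offered only as an illustration: score a candidate gene set by how far apart the group centroids lie relative to the within-group scatter, then search small subsets. Both the data and the score itself are assumptions here:

    ```python
    # Score gene subsets by between-group centroid distance relative to
    # within-group spread, then pick the best small subset exhaustively.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    groups = 5          # chemical/concentration groups
    reps, genes = 6, 8  # replicates per group, candidate genes

    # Fake expression matrix: group means differ on a few genes only.
    means = rng.normal(0, 1, size=(groups, genes))
    X = means[:, None, :] + rng.normal(0, 0.4, size=(groups, reps, genes))

    def rd_score(subset):
        """Mean between-centroid distance / mean within-group spread."""
        sub = X[:, :, subset]
        centroids = sub.mean(axis=1)
        between = np.mean([np.linalg.norm(centroids[i] - centroids[j])
                           for i, j in combinations(range(groups), 2)])
        within = np.mean(np.linalg.norm(sub - centroids[:, None, :], axis=2))
        return between / within

    # Exhaustively score all two-gene sets and report the best one.
    best = max(combinations(range(genes), 2), key=rd_score)
    print(f"best 2-gene set: {best}, score {rd_score(best):.2f}")
    ```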

  18. Using Models to Provide Predicted Ranges for Building-Human Interfaces: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Scheib, J.; Pless, S.; Schott, M.

    2013-09-01

    Most building energy consumption dashboards provide only a snapshot of building performance, whereas some provide more detailed historic data with which to compare current usage. This paper will discuss the Building Agent(tm) platform, which has been developed and deployed in a campus setting at the National Renewable Energy Laboratory as part of an effort to maintain the aggressive energy performance achieved in newly constructed office buildings and laboratories. The Building Agent(tm) provides aggregated and coherent access to building data, including electric energy, thermal energy, temperatures, humidity, lighting levels, and occupant feedback, which are displayed in various manners for visitors, building occupants, facility managers, and researchers. This paper focuses on the development of visualizations for facility managers, or an energy performance assurance role, where metered data are used to generate models that provide live predicted ranges of building performance by end use. These predicted ranges provide simple, visual context for displayed performance data without requiring users to also assess historical information or trends. Several energy modelling techniques were explored, including static lookup-based performance targets, reduced-order models derived from historical data using main effect variables such as solar radiance for lighting performance, and integrated energy models using a whole-building energy simulation program.
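
    The reduced-order approach can be sketched in a few lines: fit lighting load against a main-effect variable from historical metered data, then report a live predicted range as the fit plus or minus a residual band. All data here are synthetic and the band width is an assumed choice:

    ```python
    # Reduced-order 'predicted range': regress historical lighting energy
    # on solar radiance, then report fit +/- k-sigma residual band.
    import numpy as np

    rng = np.random.default_rng(2)

    # Historical metered data: lighting load falls as solar radiance rises.
    radiance = rng.uniform(0, 1000, size=500)                     # W/m^2
    lighting = 40.0 - 0.02 * radiance + rng.normal(0, 2.0, 500)   # kW

    # Fit the reduced-order model and the spread of its residuals.
    slope, intercept = np.polyfit(radiance, lighting, 1)
    resid_sd = np.std(lighting - (slope * radiance + intercept))

    def predicted_range(rad, k=2.0):
        """Expected lighting load and a +/- k-sigma band at given radiance."""
        mid = slope * rad + intercept
        return mid - k * resid_sd, mid, mid + k * resid_sd

    lo, mid, hi = predicted_range(600.0)
    print(f"at 600 W/m^2 expect {mid:.1f} kW (range {lo:.1f}-{hi:.1f} kW)")
    ```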

  19. Modeling Customer Loyalty by System Dynamics Methodology (Case Study: Internet Service Provider Company

    Directory of Open Access Journals (Sweden)

    Alireza Bafandeh Zendeh

    2016-03-01

    Full Text Available Due to the complexity of customer loyalty, we provide a conceptual model to explain it for an Internet service provider company using a system dynamics approach. Customer loyalty for the statistical population was analyzed according to Sterman's modeling methodology. First, the reference modes (the historical behavior of customer loyalty) were evaluated. Then dynamic hypotheses were developed using causal-loop diagrams and stock-flow maps, based on the theoretical literature. In the third stage, initial conditions of variables, parameters, and the mathematical functions between them were estimated. The model was tested, and finally advertising, service-quality improvement, and status-quo scenarios were evaluated. Results showed that the service-quality improvement scenario is more effective than the others.
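
    The stock-and-flow core of such a model can be sketched with Euler integration. A minimal illustration in the spirit of Sterman-style system dynamics: a customer stock with an acquisition inflow and a churn outflow damped by service quality; all parameter values are assumptions, not the paper's estimates:

    ```python
    # Stock-and-flow sketch: 'loyal customers' stock with acquisition and
    # churn flows, where churn falls as service quality rises.

    def simulate(quality, months=36, dt=1.0):
        customers = 10_000.0                 # initial stock
        for _ in range(months):
            acquisition = 400.0              # inflow (marketing, word of mouth)
            churn_rate = 0.08 * (1.0 - 0.5 * quality)  # quality damps churn
            churn = churn_rate * customers   # outflow
            customers += dt * (acquisition - churn)    # Euler integration
        return customers

    for q in (0.0, 0.5, 1.0):                # poor / average / improved quality
        print(f"quality {q:.1f}: customers after 3 years = {simulate(q):.0f}")
    ```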

  20. Armored Vehicle Fire Control System Reliability Modeling

    Institute of Scientific and Technical Information of China (English)

    郝玉生; 梁宝生; 武云鹏; 张永昌

    2016-01-01

    Armored weapons are the principal ground-assault weapons of the army and its amphibious mechanized forces, and also the principal weapons with which marine forces carry out beach landings and island offense and defense. The fire control system is an important component of armored weapons, and its reliability directly affects their strike effectiveness. Amphibious armored weapons carry out combat missions both at sea and on land; their operating environment is harsher and their reliability problems are more prominent. This paper describes the general composition, functions, and operating modes of armored fire control systems. Based on the operating characteristics of amphibious armored weapons, it analyzes the relationships among the basic missions, operating modes, and combat environments of armored fire control, and establishes reliability models for different mission profiles, providing a reference for the reliability design and analysis of armored fire control systems.
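
    The mission-profile idea can be sketched as a series model: the fire control system must survive every phase of a profile, so mission reliability is a product of per-phase survival probabilities. The phase names, failure rates, and durations below are illustrative assumptions:

    ```python
    # Mission-profile reliability: exponential survival per phase,
    # multiplied across the phases of each profile.
    import math

    # (phase, failure rate per hour in that environment, duration in hours)
    profiles = {
        "land assault": [("march", 1e-4, 4.0), ("engage", 5e-4, 1.0)],
        "amphibious landing": [("sea transit", 3e-4, 3.0),
                               ("beach assault", 8e-4, 1.5),
                               ("inland engage", 5e-4, 1.0)],
    }

    for name, phases in profiles.items():
        r = 1.0
        for _, lam, t in phases:
            r *= math.exp(-lam * t)          # survival through this phase
        print(f"{name}: mission reliability {r:.4f}")
    ```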

  1. THE PROBLEMS OF MODELING THE RELIABILITY STRUCTURE OF THE COMPLEX TECHNICAL SYSTEM ON THE BASIS OF A STEAM‐WATER SYSTEM OF THE ENGINE ROOM

    Directory of Open Access Journals (Sweden)

    Leszek CHYBOWSKI

    2012-04-01

    Full Text Available In the paper the concept of a system structure, with particular emphasis on the reliability structure, is presented. Advantages and disadvantages of modeling the reliability structure of a system using reliability block diagrams (RBD) are shown. RBD models of a marine steam-water system constructed according to the 'multi-component', 'one component' and mixed modeling concepts are discussed. Critical remarks are made on the practical application of models that recognize only structural redundancy. The significant value of the model by professors Smalko and Jaźwiński, which they call the 'default reliability structure', is pointed out. Finally, the necessity of building a new type of quality-quantity models, useful in the author's methodology of multi-criteria importance analysis of elements in the reliability structure of complex technical systems, is indicated.
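
    A minimal sketch of RBD evaluation for a mixed structure: an RBD is a nested tree of series and parallel blocks over component reliabilities. The steam-water decomposition and reliability values below are illustrative assumptions, not the structure from the paper:

    ```python
    # Recursive evaluation of a reliability block diagram expressed as a
    # nested tree of ('series', ...) and ('parallel', ...) blocks.

    def rbd(node):
        kind, children = node
        probs = [c if isinstance(c, float) else rbd(c) for c in children]
        if kind == "series":                 # all blocks must work
            r = 1.0
            for p in probs:
                r *= p
            return r
        if kind == "parallel":               # at least one block must work
            q = 1.0
            for p in probs:
                q *= 1.0 - p
            return 1.0 - q
        raise ValueError(kind)

    # Boiler in series with two redundant feed pumps and a condenser.
    system = ("series", [0.98,                        # boiler
                         ("parallel", [0.90, 0.90]),  # feed pumps
                         0.95])                       # condenser
    print(f"system reliability: {rbd(system):.4f}")
    ```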

  2. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    Science.gov (United States)

    1981-06-01

    matrix in Table 1. Only software modules identified in the table were included in the reliability analysis. Other software modules which are off...

  3. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2017-04-01

    The study investigates the reliable computation time (RCT, termed as Tc) by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tcs of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the theoretical formula-predicted value.
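
    The RCT idea can be illustrated by running the same map at two precisions and recording when the trajectories separate. Comparing float32 against float64 is only a proxy for the paper's comparison of double precision against reliable reference solutions; the parameter drift and tolerance here are assumptions:

    ```python
    # Iterate a variable-parameter logistic map at two precisions and
    # record the step at which they diverge past a tolerance.
    import numpy as np

    def reliable_time(x0, steps=2000, tol=1e-3):
        x_lo = np.float32(x0)
        x_hi = np.float64(x0)
        for n in range(steps):
            # Slowly drifting parameter makes this a *variable*-parameter map.
            mu = 3.99 - 0.05 * np.sin(2 * np.pi * n / 500.0)
            x_lo = np.float32(mu) * x_lo * (np.float32(1) - x_lo)
            x_hi = np.float64(mu) * x_hi * (np.float64(1) - x_hi)
            if abs(float(x_hi) - float(x_lo)) > tol:
                return n                     # first step the precisions disagree
        return steps

    times = [reliable_time(x0) for x0 in np.linspace(0.1, 0.9, 200)]
    print(f"mean reliable computation time: {np.mean(times):.1f} steps")
    ```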

  4. A Tutorial on Nonlinear Time-Series Data Mining in Engineering Asset Health and Reliability Prediction: Concepts, Models, and Algorithms

    Directory of Open Access Journals (Sweden)

    Ming Dong

    2010-01-01

    Full Text Available The primary objective of engineering asset management is to optimize assets' service delivery potential and to minimize the related risks and costs over their entire life through the development and application of asset health and usage management, in which health and reliability prediction plays an important role. In real-life situations where an engineering asset operates under dynamic operational and environmental conditions, the lifetime of an engineering asset is generally described by monitored nonlinear time-series data and is subject to high levels of uncertainty and unpredictability. It has been proved that the application of data mining techniques is very useful for extracting relevant features which can be used as parameters for asset diagnosis and prognosis. In this paper, a tutorial on nonlinear time-series data mining in engineering asset health and reliability prediction is given. Besides an overview of health and reliability prediction techniques for engineering assets, this tutorial focuses on concepts, models, algorithms, and applications of hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) in engineering asset health prognosis, which are representative of recent engineering asset health prediction techniques.
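
    How an HMM supports health prognosis can be sketched with the forward algorithm: hidden states are degradation levels, observations are discretized condition-monitoring readings, and filtering yields the probability of each health state. The matrices below are illustrative assumptions:

    ```python
    # HMM forward filtering for asset health: belief over hidden
    # degradation states given a sequence of discretized observations.
    import numpy as np

    states = ["healthy", "degraded", "faulty"]
    A = np.array([[0.95, 0.05, 0.00],        # state transition probabilities
                  [0.00, 0.90, 0.10],
                  [0.00, 0.00, 1.00]])
    B = np.array([[0.80, 0.15, 0.05],        # P(observation | state)
                  [0.30, 0.50, 0.20],
                  [0.05, 0.25, 0.70]])
    pi = np.array([1.0, 0.0, 0.0])           # asset starts healthy

    def forward(obs):
        """Filtered state probabilities after each observation."""
        alpha = pi * B[:, obs[0]]
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            alpha /= alpha.sum()             # normalize to a distribution
        return alpha

    # Observation symbols: 0 = low, 1 = medium, 2 = high vibration.
    belief = forward([0, 0, 1, 1, 2, 2])
    for s, p in zip(states, belief):
        print(f"P({s}) = {p:.3f}")
    ```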

  5. A New Biobjective Model to Optimize Integrated Redundancy Allocation and Reliability-Centered Maintenance Problems in a System Using Metaheuristics

    Directory of Open Access Journals (Sweden)

    Shima MohammadZadeh Dogahe

    2015-01-01

    Full Text Available A novel integrated model is proposed to optimize the redundancy allocation problem (RAP) and the reliability-centered maintenance (RCM) problem simultaneously. A system of both repairable and nonrepairable components has been considered. In this system, electronic components are nonrepairable while mechanical components are mostly repairable. For nonrepairable components, a redundancy allocation problem is dealt with to determine the optimal redundancy strategy and the number of redundant components to be implemented in each subsystem. In addition, a maintenance scheduling problem is considered for repairable components in order to identify the best maintenance policy and optimize system reliability. Both active and cold standby redundancy strategies have been taken into account for electronic components. Also, the net present value of the secondary cost, including operational and maintenance costs, has been calculated. The problem is formulated as a biobjective mathematical programming model aiming to reach a tradeoff between system reliability and cost. Three metaheuristic algorithms are employed to solve the proposed model: Nondominated Sorting Genetic Algorithm (NSGA-II), Multiobjective Particle Swarm Optimization (MOPSO), and Multiobjective Firefly Algorithm (MOFA). Several test problems are solved using the mentioned algorithms to test the efficiency and effectiveness of the solution approaches, and the obtained results are analyzed.
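
    The Pareto machinery shared by NSGA-II, MOPSO, and MOFA can be sketched by extracting the nondominated front of candidate designs in (cost, reliability) space. The candidates below are random stand-ins for RAP/RCM solutions:

    ```python
    # Extract the nondominated (Pareto) front for minimize-cost,
    # maximize-reliability over a set of candidate designs.
    import numpy as np

    rng = np.random.default_rng(3)
    cost = rng.uniform(10, 100, 50)
    reliability = 1.0 - np.exp(-cost / 40.0) + rng.normal(0, 0.05, 50)

    def pareto_front(cost, rel):
        """Indices of designs not dominated in (min cost, max reliability)."""
        front = []
        for i in range(len(cost)):
            dominated = any(cost[j] <= cost[i] and rel[j] >= rel[i]
                            and (cost[j] < cost[i] or rel[j] > rel[i])
                            for j in range(len(cost)))
            if not dominated:
                front.append(i)
        return front

    idx = pareto_front(cost, reliability)
    for i in sorted(idx, key=lambda i: cost[i]):
        print(f"cost {cost[i]:6.1f}, reliability {reliability[i]:.3f}")
    ```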

  6. Models for measuring and predicting shareholder value: A study of third party software service providers

    Indian Academy of Sciences (India)

    N Viswanadham; Poornima Luthra

    2005-04-01

    In this study, we use the strategic profit model (SPM) and the economic value-added (EVA) to measure shareholder value. SPM measures the return on net worth (RONW), which is defined as the return on assets (ROA) multiplied by the financial leverage. EVA is defined as the firm's net operating profit after taxes (NOPAT) minus the capital charge. Both RONW and EVA provide an indication of how much shareholder value a firm creates for its shareholders, year on year. With the increasing focus on creation of shareholder value and core competencies, many companies are outsourcing their information technology (IT) related activities to third party software companies. Indian software companies have become leaders in providing these services. Companies from several other countries are also competing for the top slot. We use the SPM and EVA models to analyse the four listed players of the software industry using the publicly available published data. We compare the financial data obtained from the models, and use peer average data to provide customized recommendations for each company to improve their shareholder value. Assuming that the companies follow these rules, we also predict future RONW and EVA for the companies for the financial year 2005. Finally, we make several recommendations to software providers for effectively competing in the global arena.
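
    The two measures can be made concrete with a worked sketch. The figures are illustrative, not data from the four companies studied, and the capital charge is taken as invested capital times an assumed cost of capital:

    ```python
    # Worked sketch of the two shareholder-value measures defined above.

    def ronw(net_income, assets, net_worth):
        """Strategic profit model: RONW = ROA x financial leverage."""
        roa = net_income / assets
        leverage = assets / net_worth
        return roa * leverage

    def eva(nopat, invested_capital, cost_of_capital):
        """EVA = NOPAT minus the charge for capital employed."""
        return nopat - invested_capital * cost_of_capital

    print(f"RONW = {ronw(net_income=120, assets=800, net_worth=500):.1%}")
    print(f"EVA  = {eva(nopat=150, invested_capital=1000, cost_of_capital=0.12):.0f}")
    ```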

  7. Conceptual Model of Providing Traffic Navigation Services to Visually Impaired Persons

    Directory of Open Access Journals (Sweden)

    Marko Periša

    2014-05-01

    Full Text Available In order to include people of reduced mobility in the traffic system, it is necessary to provide accessibility and to inform users about all the facilities surrounding them. By analysing the currently available information and communication technologies, a new conceptual model for providing navigation services to visually impaired persons has been proposed. The model is based on a Cloud Computing platform, and this research describes the method of navigating users based on accurate and updated data. User requirements have been analysed according to the movement needs of visually impaired persons along the traffic network. Information and communication solutions intended to inform these groups of users have to provide accurate and updated data, which the proposed model makes possible. The research was conducted on the most frequented routes in the city of Zagreb. Evaluation of the model's efficiency showed that users' sense of security increased by 87%.

  8. Combining models of behaviour with operational data to provide enhanced condition monitoring of AGR cores

    Energy Technology Data Exchange (ETDEWEB)

    West, Graeme M., E-mail: graeme.west@strath.ac.uk; Wallace, Christopher J.; McArthur, Stephen D.J.

    2014-06-01

    Highlights: • Combining laboratory model outputs with operational data. • Isolation of single component from noisy data. • Better understanding of the health of graphite cores. • Extended plant operation through leveraging existing data sources. - Abstract: Installation of new monitoring equipment in Nuclear Power Plants (NPPs) is often difficult and expensive and therefore maximizing the information that can be extracted from existing monitoring equipment is highly desirable. This paper describes the process of combining models derived from laboratory experimentation with current operational plant data to infer an underlying measure of health. A demonstration of this process is provided where the fuel channel bore profile, a measure of core health, is inferred from data gathered during the refuelling process of an Advanced Gas-cooled Reactor (AGR) nuclear power plant core. Laboratory simulation was used to generate a model of an interaction between the fuel assembly and the core. This model is used to isolate a single frictional component from a noisy input signal and use this friction component as a measure of health to assess the current condition of the graphite bricks that comprise the core. In addition, the model is used to generate an expected refuelling response (the noisy input signal) for a given set of channel bore diameter measurements for either insertion of new fuel or removal of spent fuel, providing validation of the model. The benefit of this work is that it provides a greater understanding of the health of the graphite core, which is important for continued and extended operation of the AGR plants in the UK.
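
    The isolation step can be sketched as model-based residual analysis: subtract the model-predicted refuelling response from the measured trace, smooth the residual, and treat the remainder as the frictional health indicator. The signals here are synthetic; the real predicted response comes from the laboratory model:

    ```python
    # Isolate a frictional component by subtracting a model-predicted
    # refuelling response from a noisy measured trace, then smoothing.
    import numpy as np

    rng = np.random.default_rng(4)
    depth = np.linspace(0.0, 10.0, 500)                   # channel depth (m)

    predicted = 50.0 + 2.0 * depth                        # modelled response
    friction = 1.5 * np.exp(-((depth - 6.0) ** 2) / 0.5)  # localized friction
    measured = predicted + friction + rng.normal(0, 0.3, depth.size)

    residual = measured - predicted                       # isolate friction
    kernel = np.ones(25) / 25.0
    smoothed = np.convolve(residual, kernel, mode="same") # suppress noise

    peak = depth[np.argmax(smoothed)]
    print(f"friction feature detected near {peak:.1f} m depth")
    ```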

  9. Towards a realistic interpretation of quantum physics providing a physical model of the natural world

    CERN Document Server

    Santos, Emilio

    2012-01-01

    The advantage of a realistic interpretation of quantum mechanics, providing a physical model of the quantum world, is stressed. After some critical comments on the most popular interpretations, the difficulties for such a model are pointed out and possible solutions proposed. In particular, the existence of discrete states, quantum jumps, the alleged lack of objective properties, measurement theory, the probabilistic character of quantum physics, wave-particle duality and the Bell inequalities are commented on. It is conjectured that an intuitive picture of the quantum world could be obtained that is compatible with the quantum predictions for actual experiments, although maybe incompatible with alleged predictions for ideal, unrealizable experiments.

  10. Model specification and the reliability of fMRI results: implications for longitudinal neuroimaging studies in psychiatry.

    Directory of Open Access Journals (Sweden)

    Jay C Fournier

    Full Text Available Functional Magnetic Resonance Imaging (fMRI) is an important assessment tool in longitudinal studies of mental illness and its treatment. Understanding the psychometric properties of fMRI-based metrics, and the factors that influence them, will be critical for properly interpreting the results of these efforts. The current study examined whether the choice among alternative model specifications affects estimates of test-retest reliability in key emotion processing regions across a 6-month interval. Subjects (N = 46) performed an emotional-faces paradigm during fMRI in which neutral faces dynamically morphed into one of four emotional faces. Median voxelwise intraclass correlation coefficients (mvICCs) were calculated to examine stability over time in regions showing task-related activity as well as in bilateral amygdala. Four modeling choices were evaluated: a default model that used the canonical hemodynamic response function (HRF), a flexible HRF model that included additional basis functions, a modified CompCor (mCompCor) model that added corrections for physiological noise in the global signal, and a final model that combined the flexible HRF and mCompCor models. Model residuals were examined to determine the degree to which each pipeline met modeling assumptions. Results indicated that the choice of modeling approaches impacts both the degree to which model assumptions are met and estimates of test-retest reliability. ICC estimates in the visual cortex increased from poor (mvICC = 0.31) in the default pipeline to fair (mvICC = 0.45) in the full alternative pipeline - an increase of 45%. In nearly all tests, the models with the fewest assumption violations generated the highest ICC estimates. Implications for longitudinal treatment studies that utilize fMRI are discussed.
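
    The test-retest summary used above can be sketched directly. A minimal voxelwise ICC computation on synthetic two-session data; ICC(3,1) is assumed here, since the abstract does not state which ICC variant was used:

    ```python
    # Median voxelwise ICC over synthetic two-session data:
    # two-way ICC(3,1) per voxel, then the median across voxels.
    import numpy as np

    rng = np.random.default_rng(6)
    n_sub, n_vox = 46, 200
    subject_effect = rng.normal(0, 1.0, (n_sub, 1))
    session1 = subject_effect + rng.normal(0, 0.8, (n_sub, n_vox))
    session2 = subject_effect + rng.normal(0, 0.8, (n_sub, n_vox))

    def icc_3_1(x, y):
        """Two-session ICC(3,1) from the two-way ANOVA mean squares."""
        data = np.stack([x, y], axis=1)              # subjects x sessions
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * np.sum((data.mean(1) - grand) ** 2) / (n - 1)
        ms_err = (np.sum((data - data.mean(1, keepdims=True)
                          - data.mean(0, keepdims=True) + grand) ** 2)
                  / ((n - 1) * (k - 1)))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    mvicc = np.median([icc_3_1(session1[:, v], session2[:, v])
                       for v in range(n_vox)])
    print(f"median voxelwise ICC: {mvicc:.2f}")
    ```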

  11. A general pairwise interaction model provides an accurate description of in vivo transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Marc Santolini

    Full Text Available The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting
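
    The two energy models being compared can be sketched side by side: the PWM assigns each position an independent additive contribution, while the PIM adds pairwise terms. The parameters below are random placeholders, not fitted fly/mouse values:

    ```python
    # Additive binding-energy models: single-site fields only (PWM-like)
    # versus single-site fields plus pairwise couplings (PIM-like).
    import numpy as np

    rng = np.random.default_rng(7)
    bases = {"A": 0, "C": 1, "G": 2, "T": 3}
    L = 8                                          # binding-site length

    h = rng.normal(0, 1, (L, 4))                   # single-site fields (PWM part)
    J = rng.normal(0, 0.2, (L, L, 4, 4))           # pairwise couplings (PIM part)

    def energy(seq, pairwise=True):
        """Additive binding energy of a length-L site."""
        idx = [bases[c] for c in seq]
        e = sum(h[i, idx[i]] for i in range(L))
        if pairwise:
            e += sum(J[i, j, idx[i], idx[j]]
                     for i in range(L) for j in range(i + 1, L))
        return e

    site = "ACGTTGCA"
    print(f"PWM energy: {energy(site, pairwise=False):+.2f}")
    print(f"PIM energy: {energy(site, pairwise=True):+.2f}")
    ```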

  12. Partnerships to provide care and medicine for chronic diseases: a model for emerging markets.

    Science.gov (United States)

    Goroff, Michael; Reich, Michael R

    2010-12-01

    The challenge of expanding access to treatment and medicine for chronic diseases in emerging markets is both a public health imperative and a commercial opportunity. Cross-sector partnerships, involving a pharmaceutical manufacturer, a local health care provider, and other private, public, and nonprofit entities, could address this challenge. Such partnerships would provide integrated, comprehensive care and medicines for a specific chronic disease, with medicines directly supplied to the partnership at preferential prices by the manufacturer. The model discussed here requires additional specification, using real numbers and specific contexts, to assess its feasibility. Still, we believe that this model has the potential for public health and private business to cooperate in addressing the rising problem of chronic diseases in emerging markets.

  13. An Inventory Model for Deteriorating Item with Reliability Consideration and Trade Credit

    Directory of Open Access Journals (Sweden)

    S. R. Singh

    2014-10-01

    Full Text Available In today's global market everybody wants to buy products of high quality, and to achieve a high level of product quality suppliers have to invest in improving the reliability of the production process. In the present article we study a reliable production process with stock-dependent unit production and holding costs. Demand is an exponential function of time, and an infinite production process with a non-instantaneous deterioration rate is considered. The whole study has been done under the effect of trade credit. The main objective of this paper is to optimize the total relevant cost for a reliable production process. A numerical example and sensitivity analysis are given at the end of the paper.

  14. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    Science.gov (United States)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, a constant concern of astronomical archives has been providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed. We discuss a standard archive model developed by NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data, and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and the needs of the community change.

  15. A Global Remote Laboratory Experimentation Network and the Experiment Service Provider Business Model and Plans

    Directory of Open Access Journals (Sweden)

    Tor Ivar Eikaas

    2003-07-01

    Full Text Available This paper presents results from the IST KAII Trial project ReLAX - Remote LAboratory eXperimentation trial (IST 1999-20827), and contributes a framework for a global remote laboratory experimentation network supported by a new business model. The paper presents this new Experiment Service Provider business model, which aims at bringing physical experimentation back into the learning arena by making remotely operable laboratory experiments used in advanced education and training schemes available to a global education and training market in industry and academia. The business model is based on an approach where individual experiment owners offer remote access to their high-quality laboratory facilities to users around the world. The usage can be for research, education, on-the-job training, etc. Access to these facilities is offered via an independent operating company - the Experiment Service Provider. The Experiment Service Provider offers eCommerce services like booking, access control, invoicing, dispute resolution, quality control, customer evaluation services and a unified Lab Portal.

  16. Natural Circulation in Water Cooled Nuclear Power Plants Phenomena, models, and methodology for system reliability assessments

    Energy Technology Data Exchange (ETDEWEB)

    Jose Reyes

    2005-02-14

    In recent years it has been recognized that the application of passive safety systems (i.e., those whose operation takes advantage of natural forces such as convection and gravity) can contribute to simplification and potentially to improved economics of new nuclear power plant designs. In 1991 the IAEA Conference on 'The Safety of Nuclear Power: Strategy for the Future' noted that for new plants 'the use of passive safety features is a desirable method of achieving simplification and increasing the reliability of the performance of essential safety functions, and should be used wherever appropriate'.

  17. Reliability analysis on passive residual heat removal of AP1000 based on Grey model

    Energy Technology Data Exchange (ETDEWEB)

    Qi, Shi; Zhou, Tao; Shahzad, Muhammad Ali; Li, Yu [North China Electric Power Univ., Beijing (China). School of Nuclear Science and Engineering; Beijing Key Laboratory of Passive Safety Technology for Nuclear Energy, Beijing (China); Jiang, Guangming [Nuclear Power Institute of China, Chengdu (China). Science and Technology on Reactor System Design Technology Laboratory

    2017-06-15

    It is common to base the design of passive systems on natural laws of physics such as gravity, heat conduction, and inertia. For the AP1000, a generation-III reactor, such systems have an inherent safety associated with them due to the simplicity of their structures. However, there is a fairly large amount of uncertainty in the operating conditions of these passive safety systems. In some cases, a small deviation in the design or operating conditions can affect the function of the system. The reliability of the passive residual heat removal system is analysed using a grey model.
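
    The grey-model machinery named in the title can be sketched with the standard GM(1,1) recipe: accumulate the series, fit the grey differential equation by least squares, and extrapolate. The data series below is an illustrative assumption, not AP1000 data:

    ```python
    # GM(1,1) grey prediction: cumulative sum, least-squares fit of
    # dx1/dt + a*x1 = b, then forecast by differencing the fitted series.
    import numpy as np

    x0 = np.array([2.87, 3.28, 3.34, 3.73, 3.87])   # observed series
    x1 = np.cumsum(x0)                              # accumulated series

    # Background values and least-squares fit of the grey equation.
    z1 = 0.5 * (x1[1:] + x1[:-1])
    Bm = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(Bm, x0[1:], rcond=None)

    def forecast(k):
        """k-th value (1-based) of the fitted/extrapolated series."""
        if k == 1:
            return x0[0]
        x1_hat = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 2)) + b / a
        return x1_hat - x1_prev

    print("next value forecast:", round(forecast(len(x0) + 1), 3))
    ```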

  18. ASYMPTOTIC PROPERTY OF THE TIME-DEPENDENT SOLUTION OF A RELIABILITY MODEL

    Institute of Scientific and Technical Information of China (English)

    Geni Gupur; GUO Baozhu

    2005-01-01

    We discuss a transfer line consisting of a reliable machine, an unreliable machine, and a storage buffer. This transfer line can be described by a group of partial differential equations with integral boundary conditions. First we show that the operator corresponding to these equations generates a positive contraction C0-semigroup T(t), and we prove that T(t) is a quasi-compact operator. Next we verify that 0 is an eigenvalue of this operator and of its adjoint operator, each with geometric multiplicity one. Finally, by using the above results, we obtain that the time-dependent solution of these equations converges strongly to their steady-state solution.
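
    The convergence claim can be stated compactly in standard semigroup notation. A sketch under the abstract's stated assumptions (quasi-compactness; 0 a simple eigenvalue of the generator and of its adjoint); the concrete operator and function spaces are defined in the paper, not here:

    ```latex
    % For a quasi-compact positive contraction C0-semigroup T(t) whose
    % generator A has 0 as a simple eigenvalue and no other spectrum on
    % the imaginary axis, T(t) converges to the rank-one spectral
    % projection P onto ker A at an exponential rate:
    \[
      \| T(t) - P \| \le M e^{-\varepsilon t} \quad (t \ge 0),
      \qquad
      P p_0 = \langle p_0, \phi \rangle \, p^{*},
    \]
    % where p* spans ker A (the steady-state solution), phi spans ker A*,
    % normalized so that <p*, phi> = 1. Strong convergence of T(t)p_0 to
    % the steady state, as stated in the abstract, follows.
    ```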

  19. Wind farms providing secondary frequency regulation: Evaluating the performance of model-based receding horizon control

    Science.gov (United States)

    Shapiro, Carl R.; Meyers, Johan; Meneveau, Charles; Gayme, Dennice F.

    2016-09-01

    We investigate the use of wind farms to provide secondary frequency regulation for a power grid. Our approach uses model-based receding horizon control of a wind farm that is tested using a large eddy simulation (LES) framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and interactions, both of which play an important role in wind farm power production. This controller is implemented in an LES model of an 84-turbine wind farm represented by actuator disk turbine models. Differences between the velocities at each turbine predicted by the wake model and measured in LES are used for closed-loop feedback. The controller is tested on two types of regulation signals, “RegA” and “RegD”, obtained from PJM, an independent system operator in the eastern United States. Composite performance scores, which are used by PJM to qualify plants for regulation, are used to evaluate the performance of the controlled wind farm. Our results demonstrate that the controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower time scales, the controlled wind farm's average performance surpasses the threshold, but further work is needed to enable the controlled system to achieve qualifying performance all of the time.
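
    The receding-horizon loop can be sketched with a deliberately crude one-state model standing in for the paper's one-dimensional wake model: at each step, choose the setpoint that best tracks the regulation signal over a short horizon, apply it, and correct the model with a measurement. All constants and the 'RegD'-like signal are assumptions:

    ```python
    # Skeleton of model-based receding horizon control: optimize over a
    # short horizon, apply the first action, feed back a measurement.
    import numpy as np

    tau, dt, horizon = 30.0, 1.0, 10          # model time constant, step, steps
    alpha = dt / tau

    def predict(p, u_seq):
        """First-order response of farm power p to setpoint sequence u."""
        out = []
        for u in u_seq:
            p = p + alpha * (u - p)
            out.append(p)
        return np.array(out)

    rng = np.random.default_rng(5)
    signal = 0.5 + 0.3 * np.sin(np.arange(200) * 2 * np.pi / 60.0)  # 'RegD'-like
    p_model, p_true, track = 0.5, 0.5, []

    for t in range(len(signal) - horizon):
        ref = signal[t:t + horizon]
        # Crude 'optimization': search constant setpoints over the horizon.
        candidates = np.linspace(0.0, 1.0, 101)
        u = min(candidates,
                key=lambda c: np.sum((predict(p_model, [c] * horizon) - ref) ** 2))
        p_true = p_true + alpha * (u - p_true) + rng.normal(0, 0.005)  # 'plant'
        p_model = p_true                       # closed-loop feedback correction
        track.append((signal[t], p_true))

    err = np.sqrt(np.mean([(r - p) ** 2 for r, p in track]))
    print(f"RMS tracking error: {err:.4f}")
    ```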

  20. How reliable are satellite precipitation estimates for driving hydrological models: a verification study over the Mediterranean area

    Science.gov (United States)

    Camici, Stefania; Ciabatta, Luca; Massari, Christian; Brocca, Luca

    2017-04-01

    , TMPA 3B42-RT, CMORPH, PERSIANN and a new soil moisture-derived rainfall dataset, obtained through the application of the SM2RAIN algorithm (Brocca et al., 2014) to the ASCAT (Advanced SCATterometer) soil moisture product, are used in the analysis. The performances obtained with SRPs are compared with those obtained by using ground data during the 6-year period from 2010 to 2015. In addition, the performance obtained by an integration of the above-mentioned SRPs is also investigated, to see whether merged rainfall observations are able to improve flood simulation. Preliminary analyses were also carried out by using the IMERG early run product of the GPM mission. The results highlight that SRPs should be used with caution for rainfall-runoff modelling in the Mediterranean region. Bias correction and model recalibration are necessary steps, even though they are not always sufficient to achieve satisfactory performances. Indeed, some of the products provide unreliable outcomes, mainly in smaller basins (<500 km2) that, however, represent the main target for flood modelling in the Mediterranean area. The best performances are obtained by integrating different SRPs, and particularly by merging the TMPA 3B42-RT and SM2RAIN-ASCAT products. The promising results of the integrated product are expected to increase confidence in the use of SRPs in hydrological modelling, even in challenging areas such as the Mediterranean. REFERENCES Brocca, L., Ciabatta, L., Massari, C., Moramarco, T., Hahn, S., Hasenauer, S., Kidd, R., Dorigo, W., Wagner, W., Levizzani, V. (2014). Soil as a natural rain gauge: estimating global rainfall from satellite soil moisture data. Journal of Geophysical Research, 119(9), 5128-5141, doi:10.1002/2014JD021489. Masseroni, D., Cislaghi, A., Camici, S., Massari, C., Brocca, L. (2017). A reliable rainfall-runoff model for flood forecasting: review and application to a semiurbanized watershed at high flood risk in Italy. Hydrology Research, in press, doi:10.2166/nh.2016.037.