Singularity of Some Software Reliability Models and Parameter Estimation Method
(no author listed)
2000-01-01
According to the principle that failure data are the basis of software reliability analysis, we built a software reliability expert system (SRES) using artificial intelligence techniques. By reasoning from the fitting results of a software project's failure data, the SRES recommends to users the most suitable model as the software reliability measurement model. We believe the SRES can largely overcome the inconsistency seen in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-08-01
In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative distribution functions of the CDOCE and Tmax as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase in the probability of Tmax being greater than the resin degradation temperature. A similar trend is also seen for the probability of the CDOCE being less than 0.8.
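The core FORM calculation the abstract describes, a reliability index derived from a limit state function and cross-checked against Monte Carlo simulation, can be sketched for a linear limit state in standard normal space, where closed forms exist. This is an illustrative stand-in, not the pultrusion process model; the coefficients below are invented:

```python
import math
import random

def form_linear(a, b):
    # Linear limit state g(u) = b + sum(a_i * u_i), u_i ~ iid standard normal.
    # The FORM reliability index is beta = b / ||a||, and the failure
    # probability Pf = Phi(-beta) is exact for a linear g.
    beta = b / math.sqrt(sum(ai * ai for ai in a))
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta) via erfc
    return beta, pf

def mcs_linear(a, b, n=200_000, seed=1):
    # Crude Monte Carlo estimate of the same failure probability.
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n)
        if b + sum(ai * rng.gauss(0, 1) for ai in a) < 0  # failure: g < 0
    )
    return fails / n

beta, pf_form = form_linear([1.0, 2.0], 5.0)  # beta = 5 / sqrt(5)
pf_mcs = mcs_linear([1.0, 2.0], 5.0)
```

For a nonlinear limit state, as in the pultrusion model, FORM instead searches for the most probable failure point (e.g. with the HL-RF iteration) and Phi(-beta) becomes an approximation whose quality a Monte Carlo run can check, which is the comparison the abstract reports.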
Implementation and Analysis of Probabilistic Methods for Gate-Level Circuit Reliability Estimation
WANG Zhen; JIANG Jianhui; YANG Guang
2007-01-01
The development of VLSI technology has dramatically improved the performance of integrated circuits. However, it brings new challenges for reliability: integrated circuits become more susceptible to soft errors. It is therefore imperative to study the reliability of circuits under soft errors. This paper implements three probabilistic methods (two-pass, error propagation probability, and probabilistic transfer matrix) for estimating gate-level circuit reliability on a PC. The functions and performance of these methods are compared in experiments using ISCAS85 and 74-series circuits.
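The probabilistic transfer matrix (PTM) approach mentioned above represents each gate as a stochastic matrix mapping input values to an output distribution, composes them for the whole circuit, and reads off reliability as the probability of matching the fault-free output. A minimal pure-Python sketch for a chain of two faulty inverters; the error probability is an invented figure and this is not the paper's implementation:

```python
def ptm_inverter(eps):
    # Rows index the input value (0/1), columns the output value (0/1).
    # An ideal inverter maps 0->1 and 1->0; with probability eps the
    # output is erroneously un-flipped.
    return [[eps, 1 - eps],
            [1 - eps, eps]]

def matmul(A, B):
    # Serial composition of PTMs is an ordinary matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def reliability(ptm, ideal):
    # Probability of producing the fault-free output, averaged over
    # uniformly distributed inputs.
    n = len(ptm)
    return sum(ptm[i][ideal[i]] for i in range(n)) / n

eps = 0.05
chain = matmul(ptm_inverter(eps), ptm_inverter(eps))  # two inverters in series
r = reliability(chain, ideal=[0, 1])  # fault-free chain acts as a buffer
# r = eps**2 + (1 - eps)**2: correct if both stages succeed or both fail
```

Multi-input gates and fan-out require tensor (Kronecker) products of the per-gate matrices, which is where the method's scaling cost comes from.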
Donald D. Anderson
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and inter-rater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (ICC 0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
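The Shrout-Fleiss ICC used for these day-to-day and inter-rater comparisons can be computed from a two-way ANOVA decomposition. A sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-measurement form, with made-up stress readings rather than the study's data:

```python
def icc2_1(data):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single measurement. `data` is a list of rows (targets) by columns (raters)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols                    # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical day-1 vs day-2 peak contact stress readings (MPa) for 5 knees:
readings = [[4.1, 4.3], [5.0, 4.9], [3.2, 3.4], [6.1, 6.0], [4.8, 5.1]]
icc = icc2_1(readings)
```

Because ICC(2,1) penalizes systematic rater offsets (through the MSC term), it is the appropriate form when absolute agreement between days or raters matters, as in this study.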
Denise Güthlin
Due to time and financial constraints, indices are often used to obtain landscape-scale estimates of relative species abundance. Using two different field methods and comparing the results can help to detect possible bias or a non-monotonic relationship between the index and the true abundance, providing more reliable results. We used data obtained from camera traps and feces counts to independently estimate relative abundance of red foxes in the Black Forest, a forested landscape in southern Germany. Applying negative binomial regression models, we identified landscape parameters that influence red fox abundance, which we then used to predict relative red fox abundance. We compared the estimated regression coefficients of the landscape parameters and the predicted abundance of the two methods. Further, we compared the costs and the precision of the two field methods. The predicted relative abundances were similar between the two methods, suggesting that the two indices were closely related to the true abundance of red foxes. For both methods, landscape diversity and edge density best described differences in the indices and had positive estimated effects on the relative fox abundance. In our study the costs of each method were of similar magnitude, but the sample size obtained from the feces counts (262 transects) was larger than the camera trap sample size (88 camera locations). The precision of the camera traps was lower than the precision of the feces counts. The approach we applied can be used as a framework to compare and combine the results of two or more different field methods to estimate abundance, and thereby enhance the reliability of the result.
Reliability estimation using kriging metamodel
Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
In this study, a new method for reliability estimation using a kriging metamodel is proposed. The kriging metamodel can be determined by an appropriate sampling range and number of sampling points, because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. A first kriging metamodel is built from widely ranged sampling points, and the Advanced First Order Reliability Method (AFORM) is applied to it to estimate the reliability approximately. A second kriging metamodel is then constructed using additional sampling points within an updated sampling range, and Monte Carlo Simulation (MCS) is applied to it to evaluate the reliability. The proposed method is applied to numerical examples and the results are almost equal to the reference reliability.
Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P
2017-03-01
In human motion analysis predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on the reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capturing system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance and therefore it is important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method.
Bioreactance is a reliable method for estimating cardiac output at rest and during exercise.
Jones, T W; Houghton, D; Cassidy, S; MacGowan, G A; Trenell, M I; Jakovljevic, D G
2015-09-01
Bioreactance is a novel noninvasive method for cardiac output measurement that involves analysis of blood flow-dependent changes in phase shifts of electrical currents applied across the thorax. The present study evaluated the test-retest reliability of bioreactance for assessing haemodynamic variables at rest and during exercise. 22 healthy subjects (26 (4) yrs) performed an incremental cycle ergometer exercise protocol relative to their individual power output at maximal O2 consumption (Wmax) on two separate occasions (trials 1 and 2). Participants cycled for five 3 min stages at 20, 40, 60, 80 and 90% Wmax. Haemodynamic and cardiorespiratory variables were assessed at rest and continuously during the exercise protocol. Cardiac output was not significantly different between trials at rest (P=0.948), or between trials at any stage of the exercise protocol (all P>0.30), and there was a strong relationship between cardiac output estimates across the trials (ICC=0.95). The other haemodynamic variables likewise did not differ between trials at rest (P=0.989) or during exercise (all P>0.15), and strong relationships between trials were found (ICC=0.83). Bioreactance is therefore a reliable method for estimating cardiac output at rest and during different stages of graded exercise testing, including maximal exertion.
张玉卓
1998-01-01
The quantitative evaluation of errors involved in a particular numerical modelling is of prime importance for the effectiveness and reliability of the method. Errors in Distinct Element Modelling arise mainly from three sources: simplification of the physical model, determination of parameters, and boundary conditions. A measure of errors representing how close the numerical solution is to the true value is proposed through fuzzy probability in this paper. The main objective of this paper is to estimate the reliability of the Distinct Element Method in rock engineering practice by varying the parameters and boundary conditions. The accumulation laws of standard errors induced by improper determination of parameters and boundary conditions are discussed in detail. Furthermore, numerical experiments are given to illustrate the estimation of fuzzy reliability. The example shows that fuzzy reliability falls between 75% and 98% when the relative standard error of the input data is under 10%.
An easy and reliable automated method to estimate oxidative stress in the clinical setting.
Vassalle, Cristina
2008-01-01
During the last few years, reliable and simple tests have been proposed to estimate oxidative stress in vivo. Many of them can be easily adapted to automated analyzers, permitting the simultaneous processing of a large number of samples in a greatly reduced time, avoiding manual sample and reagent handling, and reducing variability sources. In this chapter, description of protocols for the estimation of reactive oxygen metabolites and the antioxidant capacity (respectively the d-ROMs and OXY Adsorbent Test, Diacron, Grosseto, Italy) by using the clinical chemistry analyzer SYNCHRON, CX 9 PRO (Beckman Coulter, Brea, CA, USA) is reported as an example of such an automated procedure that can be applied in the clinical setting. Furthermore, a calculation to compute a global oxidative stress index (Oxidative-INDEX), reflecting both oxidative and antioxidant counterparts, and, therefore, a potentially more powerful parameter, is also described.
Annegret Grimm
Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to
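Several of the estimators compared above build on the Lincoln-Petersen idea. The classic bias-corrected (Chapman) two-sample form, shown here with invented capture counts, gives the flavour; the paper's multiple Lincoln-Petersen variant extends the idea to more than two sampling occasions:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed
    population size. n1: animals marked in the first sample; n2: animals
    caught in the second sample; m2: marked recaptures in the second sample."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    # Seber's approximate variance, usable for a rough confidence interval:
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / (
        (m2 + 1) ** 2 * (m2 + 2)
    )
    return n_hat, var

# Invented example: 40 geckos marked, 35 caught later, 14 of them marked.
n_hat, var = chapman_estimate(n1=40, n2=35, m2=14)
```

The estimator assumes equal catchability of all individuals, which is exactly the assumption the abstract flags as violated; heterogeneity models such as the jackknife estimator relax it.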
Bardot, Leon; McClelland, Elizabeth
2000-10-01
The mode of origin of volcaniclastic deposits can be difficult to determine from field constraints, and the palaeomagnetic technique of emplacement temperature (Te) determination provides a powerful discriminatory test for primary volcanic origin. This technique requires that the low-blocking-temperature (Tb) component of remanence in the direction of the Earth's field in inherited lithic clasts is of thermal origin and was acquired during transport and cooling in a hot pyroclastic flow; otherwise, the Te determination may be inaccurate. If the low-Tb component is not of thermal origin it may be a viscous remanent magnetization (VRM) or a chemical remanent magnetization (CRM). The acquisition of a VRM depends on the duration of exposure to an applied magnetic field, and thus the laboratory unblocking temperature (Tub) of a VRM of a certain age imposes a minimum Te that can be determined for that deposit. Palaeointensity experiments were carried out to assess the magnetic origin (pTRM, CRM, or a combination of both) of the low-Tb component in a number of samples from pyroclastic deposits from Santorini, Greece. Seven of the 24 samples used in these experiments passed the stringent tests for reliable palaeointensity determination. These values demonstrated, for six of the samples, that the low-Tb component was of thermal origin and therefore that the estimate of Te was valid. In the other 17 samples, valuable information was gained about the characteristics of the magnetic alteration that occurred during the palaeointensity experiments, allowing assessment of the reliability of Te estimates in these cases. These cases showed that if a CRM is present it has a direction parallel to the applied field, and not parallel to the direction of the parent grain. They also show that, even if a CRM is present, it does not necessarily affect the estimate of Te. Two samples used in these experiments displayed curvature between their two components of magnetization. Data from this
Estimation of Bridge Reliability Distributions
Thoft-Christensen, Palle
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
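Crude Monte Carlo estimation of a reliability profile over time can be sketched as follows. The resistance/load model and every distribution parameter below are invented for illustration; this is not the paper's bridge deterioration model:

```python
import random

def reliability_profile(years, n=50_000, seed=7):
    """Crude Monte Carlo estimate of a hypothetical deteriorating element's
    reliability over time: resistance R(t) = R0 - d*t with random initial
    resistance R0 and deterioration rate d, against a random load effect S."""
    rng = random.Random(seed)
    profile = []
    for t in years:
        safe = 0
        for _ in range(n):
            r0 = rng.gauss(10.0, 1.0)       # initial resistance (invented units)
            d = abs(rng.gauss(0.1, 0.03))   # deterioration rate per year
            s = rng.gauss(5.0, 1.0)         # load effect
            if r0 - d * t > s:              # survival: resistance exceeds load
                safe += 1
        profile.append(safe / n)
    return profile

rel = reliability_profile([0, 10, 20, 30])  # reliability decreases with age
```

The standard error of each crude estimate is sqrt(p(1-p)/n), so even this rough setup pins the profile down to about three decimal places, which matches the paper's point that exact input data are not needed to demonstrate the methodology.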
Nguyen, Thi-Hau; Ranwez, Vincent; Berry, Vincent; Scornavacca, Celine
2013-01-01
The genome content of extant species is derived from that of ancestral genomes, distorted by evolutionary events such as gene duplications, transfers and losses. Reconciliation methods aim at recovering such events and at localizing them in the species history, by comparing gene family trees to species trees. These methods play an important role in studying genome evolution as well as in inferring orthology relationships. A major issue with reconciliation methods is that the reliability of predicted evolutionary events may be questioned for various reasons: Firstly, there may be multiple equally optimal reconciliations for a given species tree–gene tree pair. Secondly, reconciliation methods can be misled by inaccurate gene or species trees. Thirdly, predicted events may fluctuate with method parameters such as the cost or rate of elementary events. For all of these reasons, confidence values for predicted evolutionary events are sorely needed. It was recently suggested that the frequency of each event in the set of all optimal reconciliations could be used as a support measure. We put this proposition to the test here and also consider a variant where the support measure is obtained by additionally accounting for suboptimal reconciliations. Experiments on simulated data show the relevance of event supports computed by both methods, while resorting to suboptimal sampling was shown to be more effective. Unfortunately, we also show that, unlike the majority-rule consensus tree for phylogenies, there is no guarantee that a single reconciliation can contain all events having above 50% support. In this paper, we detail how to rely on the reconciliation graph to efficiently identify the median reconciliation. Such median reconciliation can be found in polynomial time within the potentially exponential set of most parsimonious reconciliations. PMID:24124449
Alghali, R.; Kamaruddin, A. F.; Mokhtar, N.
2016-12-01
Introduction: The application of forensic odontology using teeth and bones has become the most commonly used method to determine the age of unknown individuals. Objective: The aim of this study was to determine the reliability of the Malay formulas of the Demirjian and Cameriere methods in estimating a dental age that closely matches the chronological age of Malay children in the Kepala Batas region. Methodology: This is a retrospective cross-sectional study. 126 good quality dental panoramic radiographs (DPT) of healthy Malay children aged 8-16 years (49 boys and 77 girls) were selected and measured. All radiographs were taken at the Dental Specialist Clinic, Advanced Medical and Dental Institute, Universiti Sains Malaysia. The measurements were carried out using the new Malay formulas of both the Demirjian and Cameriere methods by a calibrated examiner. Results: The intraclass correlation coefficient (ICC) between the chronological age and each of the Demirjian and Cameriere estimates was calculated. The Demirjian method showed a higher ICC (91.4%) than the Cameriere method (89.2%); both indicate a high association with good reliability. Comparing the two, however, the Demirjian method is the more reliable. Conclusion: The results suggest that the modified Demirjian method is more reliable than the modified Cameriere method in the Kepala Batas population.
Aghamousa, Amir
2014-01-01
The observable time delays between the multiple images of strong lensing systems with time variable sources can provide us with some valuable information to probe the expansion history of the Universe. Estimation of these time delays can be very challenging due to complexities of the observed data where there are seasonal gaps, various noises and systematics such as unknown microlensing effects. In this paper we introduce a novel approach to estimate the time delays for strong lensing systems implementing various statistical methods of data analysis including the method of smoothing and cross-correlation. The method we introduce in this paper has been recently used in TDC0 and TDC1 Strong Lens Time Delay Challenges and has shown its power in reliable and precise estimation of time delays dealing with data with different complexities.
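A basic building block of such time-delay estimation is maximising the cross-correlation between the light curves of the lensed images over candidate lags. The sketch below recovers a known shift from clean synthetic series; the paper's method additionally handles seasonal gaps, noise and microlensing, which this toy ignores:

```python
import math

def estimate_delay(a, b, max_lag):
    """Estimate the integer delay of series b relative to a by maximising
    the normalised cross-correlation over lags in [-max_lag, max_lag]."""
    def xcorr(lag):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        xs, ys = zip(*pairs)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in pairs)
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# Toy "light curves": b is a copy of a shifted by 5 samples.
a = [math.sin(0.3 * t) + 0.5 * math.sin(0.07 * t) for t in range(200)]
b = [a[t - 5] if t >= 5 else a[0] for t in range(200)]
delay = estimate_delay(a, b, max_lag=20)  # recovers 5
```

Real lensing data require sub-sample delays, smoothing across gaps and an error budget, which is what the challenge entries evaluated here contribute.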
Are there reliable methods to estimate the nuclear orientation of Seyfert galaxies?
Marin, F
2016-01-01
Orientation, together with accretion and evolution, is one of the three main drivers in the Grand Unification of Active Galactic Nuclei (AGN). Being unresolved, determining the true inclination of those powerful sources is always difficult and indirect, yet it remains a vital clue to apprehend the numerous, panchromatic and complex spectroscopic features we detect. There are only a hundred inclinations derived so far; in this context, can we be sure that we measure the true orientation of AGN? To answer this question, four methods to estimate the nuclear inclination of AGN are investigated and compared to inclination-dependent observables (hydrogen column density, Balmer linewidth, optical polarization, and flux ratios within the IR and relative to X-rays). Among these orientation indicators, the method developed by Fisher, Crenshaw, Kraemer et al., mapping and modeling the radial velocities of the [O iii] emission region in AGN, is the most successful. The [O iii]-mapping technique shows highly statistically...
Estimation of AM fungal colonization - Comparability and reliability of classical methods.
Füzy, Anna; Biró, Ibolya; Kovács, Ramóna; Takács, Tünde
2015-12-01
The characterization of mycorrhizal status in hosts can be a good indicator of symbiotic associations in inoculation experiments or in ecological research. The most common microscopic observation methods, namely (i) the gridline intersect method, (ii) the magnified intersections method and (iii) the five-class system of Trouvelot, were tested to find the most simple, easily executable, effective and objective ones, and their appropriate parameters, for characterization of mycorrhizal status. In a pot experiment, white clover (Trifolium repens L.) host plants were inoculated with 6 (BEG144; syn. Rhizophagus intraradices) in pumice substrate to monitor AMF colonization properties during host growth. Eleven (seven classical and four new) colonization parameters were estimated by three researchers at twelve sampling times during plant growth. Variations among methods, observers, parallels, or individual plants were determined and analysed to select the most appropriate parameters and sampling times for monitoring. The comparability of the parameters of the three methods was also tested. As a result of the experiment, classical parameters were selected for hyphal colonization: colonization frequency in the first stage or colonization density in the later period, and arbuscular richness of roots. A new parameter was recommended to determine the vesicle and spore content of colonized roots at later stages of symbiosis.
Zeeshan Ali Siddiqui
2016-01-01
Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. Just as hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it anew. It is the reliability of the glue code and of the individual components that contributes to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; some components are of critical usage, characterized by the usage frequency of the component. The usage frequency decides the weight of each component, and according to their weights, components contribute to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps us find the most suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and dimensionless measurement realizes the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
Wouter D Weeda
The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
Bayesian Regression and Neuro-Fuzzy Methods Reliability Assessment for Estimating Streamflow
Yaseen A. Hamaamin
2016-07-01
Accurate and efficient estimation of streamflow in a watershed's tributaries is a prerequisite for viable water resources management. This study couples process-driven and data-driven methods of streamflow forecasting as a more efficient and cost-effective approach to water resources planning and management. Two data-driven methods, Bayesian regression and the adaptive neuro-fuzzy inference system (ANFIS), were tested separately as faster alternatives to a calibrated and validated Soil and Water Assessment Tool (SWAT) model to predict streamflow in the Saginaw River Watershed of Michigan. For the data-driven modeling process, four structures were assumed and tested: general, temporal, spatial, and spatiotemporal. Results showed that both Bayesian regression and ANFIS can replicate global (watershed) and local (subbasin) results similar to a calibrated SWAT model. At the global level, Bayesian regression and ANFIS model performance were satisfactory based on Nash-Sutcliffe efficiencies of 0.99 and 0.97, respectively. At the subbasin level, Bayesian regression and ANFIS models were satisfactory for 155 and 151 subbasins out of 155 subbasins, respectively. Overall, the most accurate method was a spatiotemporal Bayesian regression model that outperformed other models at global and local scales. However, all ANFIS models performed satisfactorily at both scales.
Lane, Ginny G.; White, Amy E.; Henson, Robin K.
2002-01-01
Conducted a reliability generalizability study on the Coopersmith Self-Esteem Inventory (CSEI; S. Coopersmith, 1967) to examine the variability of reliability estimates across studies and to identify study characteristics that may predict this variability. Results show that reliability for CSEI scores can vary considerably, especially at the…
Mohammed, Rezwana Begum; Kalyan, V Siva; Tircouveluri, Saritha; Vegesna, Goutham Chakravarthy; Chirla, Anil; Varma, D Maruthi
2014-07-01
Determining the age of a person in the absence of documentary evidence of birth is essential for legal and medico-legal purposes. The Fishman method of skeletal maturation is widely used for this purpose; however, the reliability of this method across geographic locations is not well established. In this study, we assessed various stages of carpal and metacarpal bone maturation and tested the reliability of the Fishman method of skeletal maturation for age estimation in a South Indian population. We also evaluated the correlation between the chronological age (CA) and the age predicted by the Fishman method. Digital right hand-wrist radiographs of 330 individuals aged 9-20 years were obtained, and the skeletal maturity stage for each subject was determined using the Fishman method. The skeletal maturation indicator scores were obtained and analyzed with reference to CA and sex. Data were analyzed using the SPSS software package (version 12, SPSS Inc., Chicago, IL, USA). The study subjects had a tendency toward late maturation, the mean estimated skeletal age (SA) being significantly lower than CA across maturity stages. Nevertheless, significant correlation was observed in this study between SA and CA for males (r = 0.82) and females (r = 0.85). Interestingly, female subjects were observed to be advanced in SA compared with males. The Fishman method of skeletal maturation can be used as an alternative tool for the assessment of the mean age of an individual of unknown CA in South Indian children.
Structural Reliability Methods
Ditlevsen, Ove Dalager; Madsen, H. O.
The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature…
Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).
Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre
2007-05-01
Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.
Mission Reliability Estimation for Repairable Robot Teams
Stephen B. Stancliff
2008-11-01
Many of the most promising applications for mobile robots require very high reliability. The current generation of mobile robots is, for the most part, highly unreliable. The few mobile robots that currently demonstrate high reliability achieve this reliability at a high financial cost. In order for mobile robots to be more widely used, it will be necessary to find ways to provide high mission reliability at lower cost. Comparing alternative design paradigms in a principled way requires methods for comparing the reliability of different robot and robot team configurations. In this paper, we present the first principled quantitative method for performing mission reliability estimation for mobile robot teams. We also apply this method to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Using conservative estimates of the cost-reliability relationship, our results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares.
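The spares-versus-quality tradeoff described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: it assumes independent, non-repairable subsystems in series, each backed by cold spares that succeed with the same unit reliability.

```python
# Sketch (illustrative assumptions, not the paper's model): mission
# reliability of a robot whose subsystems may each carry spares.

def subsystem_reliability(r: float, spares: int) -> float:
    """Probability that at least one of (1 + spares) identical units works."""
    return 1.0 - (1.0 - r) ** (spares + 1)

def mission_reliability(subsystems) -> float:
    """subsystems: list of (unit_reliability, spares) pairs; series logic."""
    prob = 1.0
    for r, spares in subsystems:
        prob *= subsystem_reliability(r, spares)
    return prob

# Three cheap subsystems (r = 0.90) with one spare each reach the same
# mission reliability as three expensive subsystems (r = 0.99) with none:
cheap = mission_reliability([(0.90, 1)] * 3)
pricey = mission_reliability([(0.99, 0)] * 3)
```

Under these assumptions the cheaper, spared configuration matches the high-reliability one, which is the qualitative effect the abstract reports.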
Levillain, Joseph; Thongo M'Bou, Armel; Deleporte, Philippe; Saint-André, Laurent; Jourdan, Christophe
2011-07-01
Despite their importance for plant production, estimates of below-ground biomass and its distribution in the soil are still difficult and time-consuming to obtain, and no single reliable methodology is available for different root types. To identify the best method for root biomass estimation, four methods with different labour requirements were tested at the same location. The four methods, applied in a 6-year-old Eucalyptus plantation in Congo, were based on different soil sampling volumes: auger (8 cm in diameter), monolith (25 × 25 cm quadrat), half Voronoi trench (1·5 m³) and a full Voronoi trench (3 m³), chosen as the reference method. With the reference method (0-1 m deep), fine-root biomass (FRB, diameter < 2 mm) was estimated, medium-root biomass (MRB, diameter 2-10 mm) at 2·0 t ha⁻¹, coarse-root biomass (CRB, diameter > 10 mm) at 5·6 t ha⁻¹ and stump biomass at 6·8 t ha⁻¹. Total below-ground biomass was estimated at 16·2 t ha⁻¹ (root:shoot ratio of 0·23) for this 800 tree ha⁻¹ eucalypt plantation density. The density of FRB was very high (0·56 t ha⁻¹) in the top soil horizon (0-3 cm layer) and decreased greatly (0·3 t ha⁻¹) with depth (50-100 cm). Without considering labour requirements, no significant differences were found between the four methods for FRB and MRB; however, CRB was better estimated by the half and full Voronoi trenches. When labour requirements were considered, the most effective method was auger coring for FRB, whereas the half and full Voronoi trenches were the most appropriate methods for MRB and CRB, respectively. As CRB combined with stumps amounted to 78% of total below-ground biomass, a full Voronoi trench is strongly recommended when estimating total standing root biomass. Conversely, for FRB estimation, auger coring is recommended with a design pattern accounting for the spatial variability of fine-root distribution.
Sathyachandran, S. K.; Roy, D. P.; Boschetti, L.
2014-12-01
The Fire Radiative Power (FRP) [MW] is a measure of the rate of biomass combustion and can be retrieved from ground-based and satellite observations using middle infrared measurements. The temporal integral of FRP is the Fire Radiative Energy (FRE) [MJ], which is related linearly to the total biomass consumed and hence to pyrogenic emissions. Satellite-derived biomass consumption and emissions estimates have conventionally been derived by computing the summed total FRP, or the average FRP (arithmetic average of FRP retrievals), over spatial geographic grids for fixed time periods. These two methods are prone to estimation bias, especially under irregular sampling conditions such as those provided by polar-orbiting satellites, because FRP can vary rapidly in space and time as a function of fire behavior. Linear temporal integration of FRP, taking into account when the FRP values were observed and using the trapezoidal rule for numerical integration, has been suggested as an alternative FRE estimation method. In this study, FRP data measured rapidly with a dual-band radiometer over eight prescribed fires are used to compute eight FRE values using the sum, mean and trapezoidal estimation approaches under a variety of simulated irregular sampling conditions. The estimated values are compared to biomass consumption measurements for each of the eight fires to provide insights into which method provides more accurate and precise biomass consumption estimates. The three methods are also applied to continental MODIS FRP data to study their differences using polar-orbiting satellite data. The research findings indicate that trapezoidal FRP numerical integration provides the most reliable estimator.
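The three FRE conventions compared in the abstract can be sketched directly; the sample times and FRP values below are illustrative, not from the study.

```python
# Three FRE estimators for an irregularly sampled FRP series
# (times in s, FRP in MW, so FRE in MJ).

def fre_sum(frp, nominal_dt):
    """'Summed FRP' convention: sum of retrievals times a fixed interval."""
    return sum(frp) * nominal_dt

def fre_mean(times, frp):
    """'Mean FRP' convention: arithmetic mean times the total duration."""
    return (sum(frp) / len(frp)) * (times[-1] - times[0])

def fre_trapezoid(times, frp):
    """Temporal integration with the trapezoidal rule, honoring sample times."""
    return sum(0.5 * (frp[i] + frp[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

times = [0, 60, 300, 360]        # a sampling gap between 60 s and 300 s
frp = [10.0, 50.0, 40.0, 5.0]    # MW
```

With the gap in sampling, the sum (6300 MJ at a 60 s nominal interval), mean (9450 MJ) and trapezoidal (13950 MJ) estimates diverge sharply, which is the sensitivity to irregular sampling the study quantifies.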
DISTRIBUTED MONITORING SYSTEM RELIABILITY ESTIMATION WITH CONSIDERATION OF STATISTICAL UNCERTAINTY
Yi Pengxing; Yang Shuzi; Du Runsheng; Wu Bo; Liu Shiyuan
2005-01-01
Taking into account the whole system structure and the component reliability estimation uncertainty, a system reliability estimation method based on probability and statistical theory for distributed monitoring systems is presented. The variance and confidence intervals of the system reliability estimation are obtained by expressing system reliability as a linear sum of products of higher order moments of component reliability estimates when the number of component or system survivals obeys binomial distribution. The eigenfunction of binomial distribution is used to determine the moments of component reliability estimates, and a symbolic matrix which can facilitate the search of explicit system reliability estimates is proposed. Furthermore, a case of application is used to illustrate the procedure, and with the help of this example, various issues such as the applicability of this estimation model, and measures to improve system reliability of monitoring systems are discussed.
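A minimal numeric sketch of the moment-based idea above, assuming a series system and plug-in binomial moments (the paper's symbolic-matrix machinery and eigenfunction derivation are not reproduced here):

```python
import math

# Point estimate, variance, and a normal-approximation 95% interval for
# series-system reliability from per-component binomial survival counts.
# For independent estimators, Var(prod Ri) = prod E[Ri^2] - (prod E[Ri])^2.

def system_reliability(counts):
    """counts: list of (survivals, trials) per component; series logic."""
    est, second_moment = 1.0, 1.0
    for s, n in counts:
        r = s / n
        var = r * (1.0 - r) / n          # plug-in binomial variance
        est *= r
        second_moment *= var + r * r     # plug-in second moment E[Ri^2]
    variance = second_moment - est * est
    half = 1.96 * math.sqrt(variance)
    return est, variance, (max(0.0, est - half), min(1.0, est + half))
```

For components with 9/10 and 8/10 survivals this gives a system estimate of 0.72 with variance about 0.0189, illustrating how component estimation uncertainty propagates into the system-level confidence interval.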
Bayesian Missile System Reliability from Point Estimates
2014-10-28
This report uses the Maximum Entropy Principle (MEP) to convert point estimates to probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrates this approach by applying the priors to a Bayesian reliability model of a missile system. Subject terms: priors, Bayesian, missile.
Reliabilities of genomic estimated breeding values in Danish Jersey
Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng;
2012-01-01
In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods ... of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for single-nucleotide polymorphism (SNP) markers using the Illumina 54K chip. A Bayesian method was used ... index pre-selection only. Averaged across traits, the estimates of reliability of DGVs ranged from 0.20 for validation on the most recent 3 years of bulls up to 0.42 for expected reliabilities. Reliabilities from the cross-validation were on average 0.24. For the individual traits, the reliability...
Perez Sanchez-Canete, Enrique; Scott, Russell L.; Barron-Gafford, Greg; van Haren, Joost
2016-04-01
Soil CO2 fluxes represent a major source of CO2 emissions, where small changes in their estimation provoke large changes in the quantification of the global carbon cycle. Recently, the gradient method, which employs soil CO2 probes at multiple depths, has been offered as a way to inexpensively and continuously measure soil CO2 flux. However, the gradient method can yield inappropriate flux estimates, mainly because of uncertainties associated with determination of the soil diffusion coefficient. Therefore, in-situ methods to determine the diffusion coefficient are necessary to obtain accurate CO2 fluxes. Here, data obtained during one year with two automatic soil CO2 chambers, along with CO2 molar fraction data from four probes at 10 cm depth, were used to determine a model of the soil diffusion coefficient (Ds), which was then applied to obtain soil CO2 fluxes by the gradient method. Another Ds model was obtained by injection and sampling of SF6 during several campaigns with different soil water content levels. Both Ds models obtained in situ were compared with 13 published Ds models. We addressed three questions: 1) Can we use a previously published model, or do we need to determine Ds in situ? 2) How accurate are the CO2 flux estimates obtained by the gradient method for different Ds models, compared with chamber-measured CO2 fluxes? 3) Can we take a limited number of chamber measurements to obtain a good Ds model, or do we need longer calibration periods? Comparing the cumulative soil respiration for the different diffusion models, we found that the model with empirical calibration to the soil chambers had the best agreement with the chamber fluxes (the SF6 model underestimated the chamber fluxes by 23%, and the published models ranged from an underestimate of 78% to an overestimate of 14%). Most importantly, we found that a few days of measurements with a soil respiration chamber (with widely varying soil water content) are enough to build
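The gradient method itself reduces to Fick's first law with a finite-difference CO2 gradient between probe depths. A minimal sketch follows; the Millington-Quirk form is one commonly published diffusivity model, and all parameter values here are illustrative assumptions, not the study's calibrated model.

```python
# Flux-gradient sketch: F = Ds * dC/dz, with Ds from a tortuosity model.

def diffusivity(d0, air_porosity, total_porosity):
    """Millington-Quirk-style model: Ds = D0 * eps_a^(10/3) / phi^2.
    d0: free-air CO2 diffusivity (m2 s-1); porosities are fractions."""
    return d0 * air_porosity ** (10.0 / 3.0) / total_porosity ** 2

def gradient_flux(ds, c_shallow, c_deep, z_shallow, z_deep):
    """Fick's first law between two probe depths.
    C in mol m-3, z in m (positive downward); returns mol m-2 s-1.
    Positive flux = upward emission when CO2 increases with depth."""
    return ds * (c_deep - c_shallow) / (z_deep - z_shallow)
```

Because the flux scales linearly with Ds, any error in the diffusivity model maps directly into the flux estimate, which is why the abstract stresses in-situ calibration of Ds against chambers.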
Reliability Estimates for Power Supplies
Lee C. Cadwallader; Peter I. Petersen
2005-09-01
Failure rates for large power supplies at a fusion facility are critical knowledge needed to estimate availability of the facility or to set priorities for repairs and spare components. A study of the "failure to operate on demand" and "failure to continue to operate" failure rates has been performed for the large power supplies at DIII-D, which provide power to the magnet coils, the neutral beam injectors, the electron cyclotron heating systems, and the fast wave systems. When one of the power supplies fails to operate, the research program has to be either temporarily changed or halted. If one of the power supplies for the toroidal or ohmic heating coils fails, operations have to be suspended or the research is continued at de-rated parameters until a repair is completed. If one of the power supplies used in the auxiliary plasma heating systems fails, the research is often temporarily changed until a repair is completed. The power supplies are operated remotely and repairs are only performed when the power supplies are off line, so that failure of a power supply does not pose any risk to personnel. The DIII-D Trouble Report database was used to determine the number of power supply faults (over 1,700 reports), and tokamak annual operations data supplied the number of shots, operating times, and power supply usage for the DIII-D operating campaigns between mid-1987 and 2004. Where possible, these power supply failure rates from DIII-D will be compared to similar work that has been performed for Joint European Torus equipment. These independent data sets support validation of the fusion-specific failure rate values.
A new simulation estimator of system reliability
Sheldon M. Ross
1994-01-01
A basic identity is proven and applied to obtain new simulation estimators concerning (a) system reliability and (b) a multi-valued system. We show that the variance of this new estimator is often of the order α² when the usual raw estimator has variance of the order α and α is small. We also indicate how this estimator can be combined with the standard variance reduction techniques of antithetic variables, stratified sampling and importance sampling.
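One of the variance reduction techniques mentioned, antithetic variables, can be sketched for a small coherent system. The system structure and parameters below are illustrative assumptions, not taken from the paper.

```python
import random

# Antithetic-variable Monte Carlo estimate of system reliability.
# System: two parallel pairs in series; component i works if u[i] < p.

def works(u, p):
    """Structure function: (c0 or c1) and (c2 or c3)."""
    x = [ui < p for ui in u]
    return (x[0] or x[1]) and (x[2] or x[3])

def antithetic_estimate(p, n, seed=1):
    """Averages each draw u with its antithetic partner 1 - u."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = [rng.random() for _ in range(4)]
        anti = [1.0 - ui for ui in u]      # negatively correlated replicate
        total += 0.5 * (works(u, p) + works(anti, p))
    return total / n

# True reliability is (1 - (1-p)^2)^2; for p = 0.9 that is 0.9801.
```

Pairing each sample with its antithetic partner induces negative correlation between the two evaluations, reducing the variance of the averaged estimator relative to raw Monte Carlo at the same sample budget.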
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-06-01
We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.
Reliability estimation in a multilevel confirmatory factor analysis framework.
Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J
2014-03-01
Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
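As a single-level baseline for the coefficients discussed above, Cronbach's alpha can be computed from raw item scores. This is a minimal sketch of the conventional single-level estimator only; the paper's multilevel (Mplus-based) estimators are not reproduced here.

```python
from statistics import variance

# Single-level Cronbach's alpha from item-score lists.

def cronbach_alpha(items):
    """items: list of k item-score lists, one score per person in each."""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]  # total score per person
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1.0 - item_var / variance(totals))
```

For perfectly parallel (identical) items alpha is exactly 1; as item-specific noise grows, alpha drops, and the paper's point is that this single number can mislead when true-score variance sits at both the individual and group levels.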
A Latent Class Approach to Estimating Test-Score Reliability
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas
2011-01-01
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
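Guttman's lambda-2, one of the single-administration coefficients covered by this framework, can be sketched from its covariance formula. This is an independent illustration of the classical coefficient, not the latent class estimator the study derives.

```python
from statistics import variance

# Guttman's lambda-2:
#   lambda2 = (sum_{i!=j} s_ij + sqrt(k/(k-1) * sum_{i!=j} s_ij^2)) / s_X^2
# It is always at least as large as Cronbach's alpha.

def _cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def guttman_lambda2(items):
    k = len(items)
    total_var = variance([sum(p) for p in zip(*items)])
    off = [_cov(items[i], items[j])
           for i in range(k) for j in range(k) if i != j]
    return (sum(off) + (k / (k - 1) * sum(c * c for c in off)) ** 0.5) / total_var
```

On the same data, lambda-2 is never smaller than alpha, which is why it is a popular alternative lower bound to test-score reliability.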
Estabrook, Ryne; Neale, Michael
2013-01-01
Factor score estimation is a controversial topic in psychometrics, and the estimation of factor scores from exploratory factor models has historically received a great deal of attention. However, both confirmatory factor models and the existence of missing data have generally been ignored in this debate. This article presents a simulation study…
The reliability of DSM impact estimates
Vine, E.L. [Lawrence Berkeley Lab., CA (United States); Kushler, M.G. [Michigan Public Service Commission, Lansing, MI (United States)
1995-05-01
Demand-side management (DSM) critics continue to question the reliability of DSM program savings and, therefore, the need for funding such programs. In this paper, the authors examine the issues underlying the discussion of the reliability of DSM program savings (e.g., bias and precision) and compare the levels of precision of DSM impact estimates for three utilities. Overall, the precision results from all three companies appear quite similar and, for the most part, demonstrate reasonably good precision levels around DSM savings estimates. They conclude by recommending activities for program managers and evaluators to increase understanding of the factors leading to DSM uncertainty and to reduce the level of DSM uncertainty.
Jasbir Arora
2016-06-01
The indestructible nature of teeth against most environmental abuses makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson's 0-3 scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Out of 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici's method. The absolute mean error of age was calculated for both methodologies. Results clearly showed that in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect when estimating age by Gustafson's method. No significant gender differences were noted in the absolute mean error of age by either method, which suggests that there is no need to separate data on the basis of gender.
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates the information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of both errors, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...
Adaptive Response Surface Techniques in Reliability Estimation
Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard
1993-01-01
Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces deter...
Donk, Roland D; Fehlings, Michael G; Verhagen, Wim I M; Arnts, Hisse; Groenewoud, Hans; Verbeek, André L M; Bartels, Ronald H M A
2017-05-01
OBJECTIVE Although there is increasing recognition of the importance of cervical spinal sagittal balance, there is a lack of consensus as to the optimal method to accurately assess cervical sagittal alignment. Cervical alignment is important for surgical decision making. Sagittal balance of the cervical spine is generally assessed using one of two methods: measuring the angle between C-2 and C-7, or drawing a line between C-2 and C-7. Here, the best method to assess sagittal alignment of the cervical spine is investigated. METHODS Data from 138 patients enrolled in a randomized controlled trial (Procon) were analyzed. Two investigators independently measured the angle between C-2 and C-7 using Harrison's posterior tangent method and also estimated the shape of the sagittal curve using a modified Toyama method. The mean angles of each quantitative assessment of the sagittal alignment were calculated and the results were compared. The interrater reliability of both methods was estimated using Cronbach's alpha. RESULTS For both methods the interrater reliability was high: for the posterior tangent method it was 0.907 and for the modified Toyama technique it was 0.984. For a lordotic cervical spine, defined by the modified Toyama method, the mean angle (defined by Harrison's posterior tangent method) was 23.4° ± 9.9° (range 0.4°-52.4°); for a kyphotic cervical spine it was -2.2° ± 9.2° (range -16.1° to 16.9°); and for a straight cervical spine it was 10.5° ± 8.2° (range -11° to 36°). CONCLUSIONS An absolute measurement of the angle between C-2 and C-7 does not unequivocally define the sagittal cervical alignment. As can be seen from the minimum and maximum values, even a positive angle between C-2 and C-7 could be present in a kyphotic spine. For this purpose, the modified Toyama method (drawing a line from the posterior inferior part of the vertebral body of C-2 to the posterior upper part of the vertebral body of C-7 without any
Reliability estimates for flawed mortar projectile bodies
Cordes, J.A. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)], E-mail: jennifer.cordes@us.army.mil; Thomas, J.; Wong, R.S.; Carlucci, D. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)
2009-12-15
The Army routinely screens mortar projectiles for defects in safety-critical parts. In 2003, several lots of mortar projectiles had a relatively high defect rate, 0.24%. Before releasing the projectiles, the Army reevaluated the chance of a safety-critical failure. Limit state functions and Monte Carlo simulations were used to estimate reliability. Measured distributions of wall thickness, defect rate, material strength, and applied loads were used with calculated stresses to estimate the probability of failure. The results predicted less than one failure in one million firings. As of 2008, the mortar projectiles have been used without any safety-critical incident.
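The limit-state / Monte Carlo approach described above can be sketched with a simple strength-versus-stress model. All distributions and parameter values below are illustrative stand-ins, not the measured wall thickness, defect, or load data from the study.

```python
import math
import random

# Monte Carlo failure probability for the limit state g = strength - stress,
# with failure when g < 0 (independent normal strength R and stress S).

def mc_failure_prob(mu_r, sd_r, mu_s, sd_s, n, seed=7):
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0
                for _ in range(n))
    return fails / n

def exact_failure_prob(mu_r, sd_r, mu_s, sd_s):
    """Closed form for independent normals: Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))
```

For normal inputs the simulation can be checked against the closed-form reliability index beta; in the real application, where the distributions are empirical, only the simulation route is available, which is why the study relies on Monte Carlo.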
MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.
Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne
2014-01-01
When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
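The shrinkage-toward-the-group-mean idea discussed above can be sketched with a simple empirical-Bayes analogue. The variance components here are labeled assumptions; a real application would estimate them from the data, as the case study's Bayesian methods do.

```python
# Reliability-adjusted performance via shrinkage toward the group mean.
# var_within: sampling variance of one observation; var_between: true
# between-physician variance (both assumed known for this sketch).

def reliability(n, var_within, var_between):
    """Share of observed variation attributable to true differences."""
    return var_between / (var_between + var_within / n)

def adjusted_rate(observed, n, group_mean, var_within, var_between):
    """Shrink low-volume physicians' rates toward the group mean."""
    w = reliability(n, var_within, var_between)
    return w * observed + (1.0 - w) * group_mean
```

With small case volumes the weight w is small, so an extreme observed rate is pulled strongly toward the group mean, which is exactly how reliability adjustment suppresses random-error "signals".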
Sequential Bayesian technique: An alternative approach for software reliability estimation
S Chatterjee; S S Alam; R B Misra
2009-04-01
This paper proposes a sequential Bayesian approach, similar to a Kalman filter, for estimating reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time as new failure data become available. The usefulness of the method is demonstrated with some real-life data.
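The flavor of sequential Bayesian updating can be sketched with a conjugate Beta-Binomial model, where each batch's posterior becomes the next batch's prior. This is a simple stand-in for illustration, not the authors' Kalman-filter-like formulation.

```python
# Sequential Beta-Binomial updating of a success probability: after each
# test batch the posterior Beta(a, b) becomes the prior for the next batch,
# so the running estimate tracks reliability growth or decay over time.

def sequential_update(a, b, batches):
    """batches: iterable of (successes, failures); returns posterior means."""
    means = []
    for s, f in batches:
        a, b = a + s, b + f
        means.append(a / (a + b))
    return means
```

Starting from a uniform Beta(1, 1) prior, the posterior mean after all batches equals (1 + total successes) / (2 + total trials), and the intermediate means show how the estimate moves as each batch of failure data arrives.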
Advanced reliability methods - A review
Forsyth, David S.
2016-02-01
There are a number of challenges to the current practices for Probability of Detection (POD) assessment. Some Nondestructive Testing (NDT) methods, especially those that are image-based, may not provide a simple relationship between a scalar NDT response and a damage size. Some damage types are not easily characterized by a single scalar metric. Other sensing paradigms, such as structural health monitoring, could theoretically replace NDT but require a POD estimate. And the cost of performing large empirical studies to estimate POD can be prohibitive. The response of the research community has been to develop new methods that can be used to generate the same information, POD, in a form that can be used by engineering designers. This paper will highlight approaches to image-based data and complex defects, Model Assisted POD estimation, and Bayesian methods for combining information. This paper will also review the relationship of the POD estimate, confidence bounds, tolerance bounds, and risk assessment.
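One standard POD construction of the kind this literature builds on is the signal-response ("a-hat versus a") model: regress log signal on log flaw size and read POD off the residual distribution against a decision threshold. The sketch below uses synthetic data and an assumed lognormal noise model; it is a generic illustration, not a method from this paper.

```python
import math

# "a-hat vs a" POD sketch: ln(signal) = b0 + b1*ln(a) + e, e ~ N(0, sigma^2),
# POD(a) = Phi((b0 + b1*ln a - ln a_dec) / sigma) for decision level a_dec.

def fit_pod(sizes, signals, decision_threshold):
    x = [math.log(a) for a in sizes]
    y = [math.log(s) for s in signals]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))

    def pod(a):
        z = (b0 + b1 * math.log(a) - math.log(decision_threshold)) / sigma
        return 0.5 * math.erfc(-z / math.sqrt(2.0))  # standard normal CDF
    return pod

# Synthetic data: signal roughly proportional to flaw size.
pod = fit_pod([1, 2, 3, 4, 5, 6],
              [2.1, 3.9, 6.2, 7.8, 10.3, 11.7],
              decision_threshold=6.0)
```

The resulting POD curve rises through 0.5 near the flaw size whose expected signal equals the decision threshold, which is the basic quantity Model-Assisted POD methods then refine with physics-based models and Bayesian information fusion.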
Estimating a municipal water supply reliability
O.G. Okeola
2015-12-01
The availability and adequacy of water in a river basin determine the design of water resources projects such as water supply. There is a further need to regularly appraise the availability of such resources for a municipality at a distant future date, to help in articulating a contingency plan to handle its vulnerability. This paper attempts to empirically determine the reliability of water resources for a municipal water supply. An approach was first developed to estimate municipal water demand in the absence of socioeconometric data, using a purpose-specific model. A hydrological assessment of the River Oyun basin was carried out using a Markov model and sequent peak analysis to determine the extent of reliability for future demand needs. The two models were then applied to the Offa municipality in Kwara State, Nigeria. The findings revealed the reliability and adequacy of the resource up to the year 2020. The need to start exploring a well-coordinated conjunctive use of resources is recommended. The study can serve as an organized baseline for future work that will consider the physiographic characteristics of the basin and climatic dynamics. The findings can be a vital input into the demand management process for long-term sustainable water supply of the town, and by extension for urban townships with similar characteristics.
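The sequent peak analysis mentioned above has a compact algorithmic core: accumulate the running storage deficit over the inflow record and take its maximum as the storage needed to meet demand. A minimal sketch, with made-up inflow and demand series:

```python
# Sequent-peak algorithm: K_t = max(0, K_{t-1} + demand_t - inflow_t);
# the maximum K_t over the record is the required reservoir storage.
# A demand series the basin can meet without storage gives zero.

def sequent_peak(inflows, demands):
    deficit, required = 0.0, 0.0
    for q, d in zip(inflows, demands):
        deficit = max(0.0, deficit + d - q)
        required = max(required, deficit)
    return required
```

Run against a synthetic inflow sequence generated by a Markov streamflow model, the maximum deficit indicates whether (and with how much storage) a target demand can be met reliably, which is how the two models combine in the paper.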
SA BASED SOFTWARE DEPLOYMENT RELIABILITY ESTIMATION CONSIDERING COMPONENT DEPENDENCE
Su Xihong; Liu Hongwei; Wu Zhibo; Yang Xiaozong; Zuo Decheng
2011-01-01
Reliability is one of the most critical properties of a software system. System deployment architecture is the allocation of system software components to host nodes. Software Architecture (SA) based software deployment models help to analyze the reliability of different deployments. Though many approaches for architecture-based reliability estimation exist, little work has incorporated the influence of system deployment and hardware resources into reliability estimation. There are many factors influencing system deployment. By translating the multi-dimensional factors into a degree matrix of component dependence, we provide a definition of component dependence and propose a method for calculating the system reliability of deployments. Additionally, the parameters that influence the optimal deployment may change during system execution. The existing software deployment architecture may be ill-suited for the given environment, and the system needs to be redeployed to improve reliability. An approximate algorithm, A*_D, to increase system reliability is presented. When the number of components and host nodes is relatively large, experimental results show that this algorithm can obtain better deployments than stochastic and greedy algorithms.
Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine
Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon
2013-01-01
as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment and in the conduct of clinical studies. Several reliability studies are conducted in western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine...... to reliability estimates and different study designs and statistical analysis is given for future studies in Ayurveda....
Lower bounds to the reliabilities of factor score estimators
Hessen, D.J.
2017-01-01
Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone’s factor score estimators, Bartlett’s factor score
Analytic Estimation of Standard Error and Confidence Interval for Scale Reliability.
Raykov, Tenko
2002-01-01
Proposes an analytic approach to standard error and confidence interval estimation of scale reliability with fixed congeneric measures. The method is based on a generally applicable estimator stability evaluation procedure, the delta method. The approach, which combines widespread point estimation of composite reliability in behavioral scale…
Estimation of the Reliability of Distributed Applications
Marian Pompiliu CRISTESCU; Laurentiu CIOVICA
2010-01-01
In this paper, reliability is presented as an important feature for use in mission-critical distributed applications. Certain aspects of distributed systems make the required level of reliability more difficult to achieve. An obvious benefit of distributed systems is that they serve the global business and social environment in which we live and work. Another benefit is that they can improve the quality of services, in terms of reliability, availability and performance, for complex systems. The ...
Adriano Andrejew Ferreira
2009-01-01
In this paper, two methods for assessing the degree of melanization of pupal exuviae from the butterfly Heliconius erato phyllis, Fabricius 1775 (Lepidoptera, Nymphalidae, Heliconiini) are compared. In the first method, which was qualitative, the exuviae were classified by scoring the degree of melanization, whereas in the second method, which was quantitative, the exuviae were classified by optical density followed by analysis with appropriate software. The heritability (h²) of the degree of melanization was estimated by regression and analysis of variance. The estimates of h² were similar with both methods, indicating that the qualitative method could be particularly suitable for field work. The low estimates obtained for heritability may have resulted from the small sample size (n = 7-18 broods, including the parents) or from the allocation-priority hypothesis, in which pupal color would be a lower-priority trait compared to morphological traits and adequate larval development.
A Note on Structural Equation Modeling Estimates of Reliability
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
Dependent systems reliability estimation by structural reliability approach
Kostandyan, Erik; Sørensen, John Dalsgaard
2014-01-01
) and the component lifetimes follow some continuous and non-negative cumulative distribution functions. An illustrative example utilizing the proposed method is provided, where damage is modeled by a fracture mechanics approach with correlated components and a failure assessment diagram is applied for failure identification. Application of the proposed method can be found in many real-world systems…
Hardware and software reliability estimation using simulations
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
Simulator for Software Project Reliability Estimation
Sanjana,
2011-07-01
Several models exist for software development processes, each describing approaches to a variety of tasks or activities that take place during the process. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. IEEE defines reliability as "the ability of a system to perform its required function under stated conditions for a specified period of time." To most software project managers, reliability is equated to correctness, that is, the number of bugs found and fixed. The purpose is to develop a simulator for estimating the reliability of the software project using the PERT approach, keeping in view the criticality index of each task.
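The PERT three-point estimate the abstract alludes to can be sketched in a few lines. The task names and durations below are hypothetical, and the criticality-index bookkeeping of the actual simulator is omitted:

```python
# Sketch of the PERT three-point estimate (task data is invented).
def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected duration and standard deviation under the PERT beta assumption."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

tasks = {"design": (2, 4, 8), "coding": (5, 7, 15), "testing": (3, 5, 9)}
for name, (o, m, p) in tasks.items():
    e, s = pert_estimate(o, m, p)
    print(f"{name}: expected={e:.2f}, sigma={s:.2f}")
```

Summing the per-task variances along the critical path then gives a distribution for the project duration, which is where a reliability number can be read off.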
叶宝娟; 温忠麟
2012-01-01
Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error from a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly report the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
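As a rough illustration of the quantities discussed above, the sketch below computes composite reliability from (assumed known) standardized loadings and error variances, and shows a generic percentile-bootstrap confidence interval. The loadings are invented, and the Delta-method standard error derived in the paper is not reproduced:

```python
import random

def composite_reliability(loadings, error_vars):
    """Composite reliability (McDonald's omega):
    (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_vars))

def percentile_bootstrap_ci(sample, statistic, n_boot=2000, alpha=0.05, seed=7):
    """Generic percentile-bootstrap CI for any statistic of a sample."""
    rng = random.Random(seed)
    stats = sorted(
        statistic([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]

# Hypothetical unidimensional test: three items, standardized loadings 0.7
print(composite_reliability([0.7, 0.7, 0.7], [0.51, 0.51, 0.51]))  # ~0.742
```

In practice the bootstrap would refit the factor model on each resample; the helper above only shows the percentile-interval mechanics.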
Reliability Estimation for Double Containment Piping
L. Cadwallader; T. Pinna
2012-08-01
Double walled or double containment piping is considered for use in the ITER international project and other next-generation fusion device designs to provide an extra barrier for tritium gas and other radioactive materials. The extra barrier improves confinement of these materials and enhances safety of the facility. This paper describes some of the design challenges in designing double containment piping systems. There is also a brief review of a few operating experiences of double walled piping used with hazardous chemicals in different industries. This paper recommends approaches for the reliability analyst to use to quantify leakage from a double containment piping system in conceptual and more advanced designs. The paper also cites quantitative data that can be used to support such reliability analyses.
D. Sümeyra Demirkıran
2014-03-01
The concept of age estimation plays an important role in both civil law and the regulation of criminal behavior. In forensic medicine, age estimation is practiced for individual requests as well as at the request of the court. This study aims to compile the methods of age estimation and to make recommendations for solving the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. In order to estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the "Adli Tıpta Yaş Tayini (ATYT)" books are used. Bone age is found to be on average two years older than chronological age, especially in puberty, according to the forensic age estimations described in the ATYT book. For age estimation with teeth, the Demirjian method is used. Over time, different methods have been developed by modifying the Demirjian method; however, no fully accurate method has been found. Histopathological studies have been done on bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronological age. Current age estimation methods raise important ethical and legal issues, especially in the teenage period. It is therefore necessary to prepare atlases of bone age suited to our society by collecting the findings of studies in Turkey. Another recommendation is to pay attention to court cases on age raising for teenage women, with special emphasis on birth and population records.
Generalized Agile Estimation Method
Shilpa Bahlerao
2011-01-01
The agile cost estimation process always offers research prospects due to the lack of algorithmic approaches for estimating cost, size and duration. The existing algorithmic approach, the Constructive Agile Estimation Algorithm (CAEA), is an iterative estimation method that incorporates various vital factors affecting the estimates of a project. This method has many advantages but at the same time some limitations, which may be due to factors such as the number of vital factors and the uncertainty involved in agile projects. A generalized agile estimation, however, may generate realistic estimates and eliminate the need for experts. In this paper, we propose the iterative Generalized Estimation Method (GEM) and present an algorithm based on it for agile projects, with case studies. The GEM-based algorithm incorporates various project domain classes and vital factors with prioritization levels. Further, it incorporates an uncertainty factor to quantify project risk when estimating cost, size and duration. It also provides flexibility to project managers in deciding on the number of vital factors, the uncertainty level and project domains, thereby maintaining agility.
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Reliability design method for steam turbine blades
Jinyuan SHI
2008-01-01
Based on theories of probability and statistics, and taking static stresses, dynamic stresses, endurance strength, safety ratios, vibration frequencies and exciting force frequencies of blades as random variables, a reliability design method for steam turbine blades is presented. The purport and calculation method for blade reliability are expounded. The distribution parameters of random variables are determined after analysis and numerical calculation of test data. The fatigue strength and the vibration design reliability of turbine blades are determined with the aid of a probabilistic design method and by interference models for stress distribution and strength distribution. Some blade reliability design calculation formulas for a dynamic stress design method, a safety ratio design method for fatigue strength, and a vibration reliability design method for the first and second types of tuned blades and a packet of blades on a disk connected closely, are given together with some practical examples. With these methods, the design reliability of steam turbine blades can be guaranteed in the design stage. This research may provide some scientific basis for reliability design of steam turbine blades.
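The stress-strength interference model mentioned above has a closed form when stress and strength are independent and normally distributed; the blade values below are hypothetical, not taken from the paper:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """R = P(strength > stress) for independent normal strength and stress."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return normal_cdf(z)

# Hypothetical blade: strength ~ N(500, 40) MPa, stress ~ N(300, 30) MPa
print(interference_reliability(500.0, 40.0, 300.0, 30.0))
```

The same interference idea extends to vibration reliability by treating exciting-force and natural frequencies as the two interfering random variables.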
Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
2010-01-15
In this study, an effective method for reliability-based design optimization (RBDO) is proposed enhancing sequential optimization and reliability assessment (SORA) method by convex approximations. In SORA, reliability estimation and deterministic optimization are performed sequentially. The sensitivity and function value of probabilistic constraint at the most probable point (MPP) are obtained in the reliability analysis loop. In this study, the convex approximations for probabilistic constraint are constructed by utilizing the sensitivity and function value of the probabilistic constraint at the MPP. Hence, the proposed method requires much less function evaluations of probabilistic constraints in the deterministic optimization than the original SORA method. The efficiency and accuracy of the proposed method were verified through numerical examples
IRT-Estimated Reliability for Tests Containing Mixed Item Formats
Shu, Lianghua; Schwarz, Richard D.
2014-01-01
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Reliability Methods for Shield Design Process
Tripathi, R. K.; Wilson, J. W.
2002-01-01
Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phase of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of two most commonly identified uncertainties in radiation shield design, the shielding properties of materials used and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response to especially high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.
Steven E. Stemler
2004-03-01
This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, the different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: 1) consensus estimates, 2) consistency estimates, and 3) measurement estimates. The assumptions, interpretation, advantages, and disadvantages of estimates from each of these three categories are discussed, along with several popular methods of computing interrater reliability coefficients that fall under the umbrella of consensus, consistency, and measurement estimates. Researchers and practitioners should be aware that different approaches to estimating interrater reliability carry with them different implications for how ratings across multiple judges should be summarized, which may impact the validity of subsequent study results.
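The distinction between consensus and consistency estimates can be made concrete with two small functions. The ratings below are invented to show how a rater with a systematic one-point offset yields low consensus but high consistency:

```python
def percent_agreement(r1, r2):
    """Consensus estimate: share of cases where two raters give the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def pearson(r1, r2):
    """Consistency estimate: Pearson correlation of the two raters' scores."""
    n = len(r1)
    m1, m2 = sum(r1) / n, sum(r2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(r1, r2))
    v1 = sum((a - m1) ** 2 for a in r1)
    v2 = sum((b - m2) ** 2 for b in r2)
    return cov / (v1 * v2) ** 0.5

rater_a = [3, 4, 5, 2, 4, 3]
rater_b = [4, 5, 5, 3, 5, 4]  # mostly one point higher than rater_a
print(percent_agreement(rater_a, rater_b))  # low consensus (1/6)
print(pearson(rater_a, rater_b))            # yet high consistency
```

Measurement estimates (e.g., from generalizability or IRT models) would require a fitted model and are not sketched here.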
Reliability of the cervical vertebrae maturation (CVM) method.
Predko-Engel, A.; Kaminek, M.; Langova, K.; Kowalski, P.; Fudalej, P.S.
2015-01-01
OBJECTIVE: To assess the reliability of the cervical vertebrae maturation method (CVM). BACKGROUND: Skeletal maturity estimation can influence the manner and time of orthodontic treatment. The CVM method evaluates skeletal growth on the basis of the changes in the morphology of cervical vertebrae C2
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows one to select an estimator from a collection of models. Then, we present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
Reliability-based concurrent subspace optimization method
FAN Hui; LI Wei-ji
2008-01-01
To avoid the high computational cost and extensive modification involved in applying the traditional reliability-based design optimization method, a new reliability-based concurrent subspace optimization approach is proposed based on a comparison and analysis of the existing multidisciplinary optimization techniques and reliability assessment methods. It is shown through a canard configuration optimization for a three-surface transport that the proposed method is computationally efficient and practical, with the least modification to the current deterministic optimization process.
Software Reliability Estimation of the Reactor Protection System for Lungmen Nuclear Power Station
Wang, Jung Ya; Chou, Hwai Pwu [Tsing Hua National University, Hsinchu (China)
2014-08-15
In this paper, a software reliability estimation method is applied to estimate the software reliability of the reactor protection system (RPS) for Lungmen ABWR. In order to estimate the software failure probability, a flow network model of software is constructed. The total number of executions and the execution time of each software statement are obtained, and the reliability of each statement is obtained. During the test, the one-time test scenario follows a Bernoulli distribution and the multiple-test scenarios follow a binomial distribution. The software reliability of the digital trip module (DTM) and the trip logic unit (TLU) of the RPS of Lungmen ABWR can then be estimated. The results show that the RPS software has a good reliability.
Reliability estimation for single dichotomous items based on Mokken's IRT model
Meijer, Rob R.; Sijtsma, Klaas; Molenaar, Ivo W.
1995-01-01
Item reliability is of special interest for Mokken’s nonparametric item response theory, and is useful for the evaluation of item quality in nonparametric test construction research. It is also of interest for nonparametric person-fit analysis. Three methods for the estimation of the reliability of
Reliability methods in OpenEarthTools
Den Heijer, C.
2012-01-01
OpenEarthTools contains, apart from many other tools in various programming languages, the probabilistic reliability methods FORM and Monte Carlo. This document aims at describing and providing background information and examples on the FORM and Monte Carlo implementation available in OpenEarthTools.
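As background for the two methods named here (this is not the OpenEarthTools code itself), the sketch below applies FORM and crude Monte Carlo to a linear limit state with independent normal variables, a case where FORM is exact and the two answers should agree:

```python
import random
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Limit state g = R - S with independent normal resistance R and load S
# (parameter values are invented for illustration).
mu_r, sd_r, mu_s, sd_s = 8.0, 1.0, 5.0, 1.0

# FORM: reliability index beta and failure probability, exact for linear/normal g
beta = (mu_r - mu_s) / sqrt(sd_r**2 + sd_s**2)
pf_form = phi(-beta)

# Crude Monte Carlo on the same limit state
rng = random.Random(42)
n = 200_000
fails = sum(rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0 for _ in range(n))
pf_mc = fails / n
print(beta, pf_form, pf_mc)
```

For non-linear limit states FORM linearizes at the most probable point and is only approximate, which is exactly when the Monte Carlo cross-check earns its keep.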
Jensen, Jørgen Juncher
2015-01-01
For non-linear systems the estimation of fatigue damage under stochastic loadings can be rather time-consuming. Usually Monte Carlo simulation (MCS) is applied, but the coefficient-of-variation (COV) can be large if only a small set of simulations can be done due to otherwise excessive CPU time. … the COV. For a specific example dealing with stresses in a tendon in a tension leg platform the COV is thereby reduced by a factor of three…
Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis
Zhou, Z.-G.; Tang, P.; Zhou, M.
2016-06-01
Normally, the status of land cover is inherently dynamic, changing continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances are of importance for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results with "Change/No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
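A minimal version of step (3), turning a forecast error into a confidence level under an assumed normal error model (this is a stand-in, not the paper's BFAST pipeline), could look like:

```python
from math import erf, sqrt

def disturbance_confidence(observed, forecast, sigma):
    """Two-sided confidence level that `observed` is a real departure from
    `forecast`, assuming normally distributed forecast errors with std sigma."""
    z = abs(observed - forecast) / sigma
    return erf(z / sqrt(2.0))  # equals 2*Phi(z) - 1

# Hypothetical NDVI-like value: flag a disturbance when confidence exceeds 0.95
print(disturbance_confidence(0.62, 0.50, 0.04))  # z = 3 -> ~0.997
```

The residual standard deviation sigma would come from the fitted historical model; thresholding the returned confidence level gives the "Change/No change" label plus its reliability.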
Maximized Reliability Estimates for Some Research Scales of the MMPI.
Wagner, Edwin E.; And Others
1990-01-01
This study, using data for 200 psychiatric/chemical dependency patients, attempted to justify subscales of the Minnesota Multiphasic Personality Inventory (MMPI). Distributions of all possible split-half correlations for certain research scales of the MMPI revealed negative skewness resulting in spuriously lowered reliability estimates. The scales…
Estimating the Reliability of a Test Containing Multiple Item Formats.
Qualls, Audrey L.
1995-01-01
Classically parallel, tau-equivalently parallel, and congenerically parallel models representing various degrees of part-test parallelism and their appropriateness for tests composed of multiple item formats are discussed. An appropriate reliability estimate for a test with multiple item formats is presented and illustrated. (SLD)
万东; 张忠会
2015-01-01
Based on reliability, this paper presents an estimation method integrating a customer damage function. The outage loss of each type of customer at the time of peak load is acquired with a customer survey method. Taking into account factors such as the energy consumption ratio and load rating of the various users, a composite function of customer outage loss is set up. The estimation method is based on the effects on user reliability of elements at different positions on the line. This method can provide a reference for engineers making economic decisions about network planning and construction. Finally, the IEEE RBTS test system is used to illustrate the feasibility and effectiveness of this method.
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2014-03-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3-722 K).
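The paper's own correlation is not reproduced in the abstract; as a stand-in with the same inputs (normal boiling temperature, critical temperature, critical pressure), the classic Riedel correlation illustrates the form such an equation takes:

```python
from math import log

R = 8.314  # gas constant, J/(mol*K)

def riedel_hvap(tb, tc, pc_bar):
    """Enthalpy of vaporization at the normal boiling point (J/mol),
    Riedel correlation. This is NOT the paper's new equation, only a
    well-known correlation with the same three inputs."""
    tbr = tb / tc  # reduced boiling temperature
    return 1.093 * R * tc * tbr * (log(pc_bar) - 1.013) / (0.930 - tbr)

# Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.64 bar
print(riedel_hvap(373.15, 647.1, 220.64) / 1000)  # ~42 kJ/mol vs ~40.7 measured
```

Correlations of this family are typically accurate to a few percent over wide boiling ranges, which is the benchmark the proposed equation is claimed to improve upon.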
Parameter estimation and reliable fault detection of electric motors
Dusan PROGOVAC; Le Yi WANG; George YIN
2014-01-01
Accurate model identification and fault detection are necessary for reliable motor control. Motor-characterizing parameters experience substantial changes due to aging, motor operating conditions, and faults. Consequently, motor parameters must be estimated accurately and reliably during operation. Based on enhanced model structures of electric motors that accommodate both normal and faulty modes, this paper introduces bias-corrected least-squares (LS) estimation algorithms that incorporate functions for correcting estimation bias, forgetting factors for capturing sudden faults, and recursive structures for efficient real-time implementation. Permanent magnet motors are used as a benchmark type for concrete algorithm development and evaluation. Algorithms are presented, their properties are established, and their accuracy and robustness are evaluated by simulation case studies under both normal operations and inter-turn winding faults. Implementation issues from different motor control schemes are also discussed.
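A plain recursive least-squares update with a forgetting factor shows the recursive structure described above; the paper's bias-correction terms are omitted, and the linear two-parameter "motor model" below is hypothetical:

```python
def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with forgetting factor lam.
    theta: parameter estimates [a, b]; P: 2x2 covariance; phi: regressor [x, 1]."""
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]          # gain
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # P <- (P - K * phi^T P) / lam; forgetting keeps the filter alert to faults
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P

theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for t in range(200):
    x = (t % 10) / 10.0
    y = 2.5 * x + 1.0  # hypothetical motor parameters a=2.5, b=1.0, noiseless
    theta, P = rls_step(theta, P, [x, 1.0], y)
print(theta)  # approaches [2.5, 1.0]
```

A sudden parameter jump (a fault) changes y; with lam < 1, old data is discounted and the estimates track the new values, which is the mechanism the abstract relies on for fault detection.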
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
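The accuracy-in-parameter-estimation idea, finding the smallest n whose confidence interval is narrower than a target width W, can be illustrated with a simplified stand-in that uses the Fisher-z interval for a correlation-like reliability coefficient (not the authors' composite-reliability formulas):

```python
from math import atanh, tanh, sqrt

def n_for_ci_width(rho, target_width, conf_z=1.96):
    """Smallest n such that the Fisher-z confidence interval for a
    correlation-like coefficient rho is narrower than target_width.
    A simplified stand-in for the paper's sample-size planning method."""
    n = 4
    while True:
        half = conf_z / sqrt(n - 3)          # half-width on the z scale
        z = atanh(rho)
        width = tanh(z + half) - tanh(z - half)  # back-transformed width
        if width < target_width:
            return n
        n += 1

print(n_for_ci_width(0.8, 0.2))  # modest n for a wide interval
print(n_for_ci_width(0.8, 0.1))  # much larger n for a narrow one
```

Halving the target width roughly quadruples the required sample size, which is the qualitative message of the planning approach.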
Donk, R.D.; Fehlings, M.G.; Verhagen, W.I.; Arnts, H.; Groenewoud, H.; Verbeek, A.L.M.; Bartels, R.H.M.A.
2017-01-01
OBJECTIVE Although there is increasing recognition of the importance of cervical spinal sagittal balance, there is a lack of consensus as to the optimal method to accurately assess the cervical sagittal alignment. Cervical alignment is important for surgical decision making. Sagittal balance of the
Reliability estimation for single-unit ceramic crown restorations.
Lekesiz, H
2014-09-01
The objective of this study was to evaluate the potential of a survival prediction method for the assessment of ceramic dental restorations. For this purpose, fast-fracture and fatigue reliabilities for 2 bilayer (metal ceramic alloy core veneered with fluorapatite leucite glass-ceramic, d.Sign/d.Sign-67, by Ivoclar; glass-infiltrated alumina core veneered with feldspathic porcelain, VM7/In-Ceram Alumina, by Vita) and 3 monolithic (leucite-reinforced glass-ceramic, Empress, and ProCAD, by Ivoclar; lithium-disilicate glass-ceramic, Empress 2, by Ivoclar) single posterior crown restorations were predicted, and fatigue predictions were compared with the long-term clinical data presented in the literature. Both perfectly bonded and completely debonded cases were analyzed for evaluation of the influence of the adhesive/restoration bonding quality on estimations. Material constants and stress distributions required for predictions were calculated from biaxial tests and finite element analysis, respectively. Based on the predictions, In-Ceram Alumina presents the best fast-fracture resistance, and ProCAD presents a comparable resistance for perfect bonding; however, ProCAD shows a significant reduction of resistance in case of complete debonding. Nevertheless, it is still better than Empress and comparable with Empress 2. In-Ceram Alumina and d.Sign have the highest long-term reliability, with almost 100% survivability even after 10 years. When compared with clinical failure rates reported in the literature, predictions show a promising match with clinical data, and this indicates the soundness of the settings used in the proposed predictions. © International & American Associations for Dental Research.
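Fast-fracture reliability for brittle dental ceramics is commonly summarized with a two-parameter Weibull model; the characteristic strength and Weibull modulus below are hypothetical, not the fitted constants of the study:

```python
from math import exp

def weibull_failure_prob(stress, sigma0, m):
    """Two-parameter Weibull fracture probability:
    Pf = 1 - exp(-(stress/sigma0)^m). Constants here are illustrative only."""
    return 1.0 - exp(-(stress / sigma0) ** m)

# Hypothetical ceramic: characteristic strength 800 MPa, Weibull modulus 10
for s in (300, 500, 700):
    print(s, weibull_failure_prob(s, 800.0, 10.0))
```

Survival after N fatigue cycles is then obtained by replacing the fast-fracture strength with a time- or cycle-degraded strength, which is where the finite-element stress fields enter the actual predictions.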
Objectivity, Reliability, and Validity of Search Engine Count Estimates
Dietmar Janetzko
2008-01-01
Count estimates ("hits") provided by Web search engines have received much attention as a yardstick to measure a variety of phenomena of interest as diverse as, e.g., language statistics, popularity of authors, or similarity between words. Common to these activities is the intention to use Web search engines not only for search but for ad hoc measurement. Using search engine count estimates (SECEs) in this way means that a phenomenon of interest, e.g., the popularity of an author, is conceived of as a measurand, and SECEs are taken to be its quantitative measures. However, the data quality of SECEs has not yet been studied systematically, and concerns have been raised against the use of this kind of data. This article examines the data quality of SECEs focusing on classical goodness criteria, i.e., objectivity, reliability, and validity. The results of a series of studies indicate that, with the exception of Boolean queries that use disjunction or negation, the objectivity as well as the test-retest reliability and parallel-test reliability of SECEs is good for most types of browsers and search engines examined. Estimation of validity required model development (all-subsets regression), revealing satisfying results from an explorative approach to feature selection. The findings are discussed in the light of previous objections, and perspectives for using Web search count estimates are delineated.
Improving Sample Estimate Reliability and Validity with Linked Ego Networks
Lu, Xin
2012-01-01
Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimate error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we have shown that the precision of estimates on the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows superior advantage over traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...
Causal Effect Estimation Methods
2014-01-01
The relationship between two popular modeling frameworks of causal inference from observational data, namely, the causal graphical model and the potential outcome causal model, is discussed. It is shown how some popular causal effect estimators found in applications of the potential outcome causal model, such as the inverse probability of treatment weighted estimator and the doubly robust estimator, can be obtained by using the causal graphical model. We confine ourselves to the simple case of binary outcome and treatment vari...
Optimized Vertex Method and Hybrid Reliability
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
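The vertex method underlying the OVM can be illustrated with a short sketch: at each α-cut, the response interval of a monotone function is obtained by evaluating it at every combination of input-interval endpoints (the OVM itself prunes these evaluations; the function and numbers below are invented for illustration, not taken from the paper):

```python
import itertools

def vertex_method(f, alpha_cuts):
    """Propagate fuzzy inputs through f: at each alpha-cut level,
    evaluate f at every vertex (combination of interval endpoints)
    and take the min/max as the response interval."""
    response = []
    for intervals in alpha_cuts:
        values = [f(*v) for v in itertools.product(*intervals)]
        response.append((min(values), max(values)))
    return response

# Invented monotone response, e.g. an energy-like quantity G = P^2 / (2k)
g = lambda P, k: P ** 2 / (2.0 * k)
cuts = [
    [(8.0, 12.0), (1.5, 2.5)],   # alpha = 0: widest input intervals
    [(10.0, 10.0), (2.0, 2.0)],  # alpha = 1: the crisp core
]
print(vertex_method(g, cuts))
```

For monotone responses the vertex evaluations are exact; the cost grows as 2^n in the number of fuzzy inputs, which is precisely the expense the OVM is designed to reduce.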
Fault Diagnosis and Reliability Analysis Using Fuzzy Logic Method
Miao Zhinong; Xu Yang; Zhao Xiangyu
2006-01-01
A new fuzzy logic fault diagnosis method is proposed. In this method, fuzzy equations are employed to estimate the component state of a system based on the measured system performance and the relationship between component state and system performance, which is called the "performance-parameter" knowledge base and is constructed by experts. Compared with traditional fault diagnosis methods, this fuzzy logic method can use human intuitive knowledge and does not need a precise mapping between system performance and component state. Simulation proves its effectiveness in fault diagnosis. The reliability analysis is then performed based on the fuzzy logic method.
Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.
Proppe, Jonny; Reiher, Markus
2017-07-11
One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general unfeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the ⁵⁷Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s⁻¹ and 0.04-0.05 mm s⁻¹, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s⁻¹. Furthermore, we show that both the model parameters and the prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r², or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
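The bootstrap calibration idea can be sketched as follows: refit the linear property model on resampled reference data and take the spread of the refitted predictions as the prediction uncertainty. This is a minimal pair-bootstrap sketch with invented data, not the authors' calibration pipeline:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def bootstrap_prediction(xs, ys, x_new, n_boot=2000, seed=1):
    """Pair bootstrap: refit the linear property model on resampled
    reference data and summarize the refitted predictions at x_new by
    their mean and standard deviation (the prediction uncertainty)."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xb = [xs[i] for i in idx]
        if len(set(xb)) < 2:
            continue  # degenerate resample: slope undefined
        a, b = fit_line(xb, [ys[i] for i in idx])
        preds.append(a + b * x_new)
    m = sum(preds) / len(preds)
    sd = (sum((p - m) ** 2 for p in preds) / (len(preds) - 1)) ** 0.5
    return m, sd

# Invented reference data (a density -> isomer-shift-like linear relation):
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [0.3 * x + 0.1 for x in xs]
print(bootstrap_prediction(xs, ys, 10.0))
```

The paper's point that uncertainties depend on the composition of the reference set falls out of this scheme directly: change the reference points and the bootstrap distribution of predictions changes with them.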
Software Development Cost Estimation Methods
Bogdan Stepien
2003-01-01
Early estimation of project size and completion time is essential for successful project planning and tracking. Multiple methods have been proposed to estimate software size and cost parameters. The suitability of an estimation method depends on many factors, such as the software application domain, product complexity, availability of historical data, and team expertise. The most common and widely used estimation techniques are described and analyzed. Current research trends in software cost estimation are also presented.
Review of Quantitative Software Reliability Methods
Chu, T.L.; Yue, M.; Martinez-Guridi, M.; Lehner, J.
2010-09-17
The current U.S. Nuclear Regulatory Commission (NRC) licensing process for digital systems rests on deterministic engineering criteria. In its 1995 probabilistic risk assessment (PRA) policy statement, the Commission encouraged the use of PRA technology in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. Although many activities have been completed in the area of risk-informed regulation, the risk-informed analysis process for digital systems has not yet been satisfactorily developed. Since digital instrumentation and control (I&C) systems are expected to play an increasingly important role in nuclear power plant (NPP) safety, the NRC established a digital system research plan that defines a coherent set of research programs to support its regulatory needs. One of the research programs included in the NRC's digital system research plan addresses risk assessment methods and data for digital systems. Digital I&C systems have some unique characteristics, such as using software, and may have different failure causes and/or modes than analog I&C systems; hence, their incorporation into NPP PRAs entails special challenges. The objective of the NRC's digital system risk research is to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems into NPP PRAs, and (2) using information on the risks of digital systems to support the NRC's risk-informed licensing and oversight activities. For several years, Brookhaven National Laboratory (BNL) has worked on NRC projects to investigate methods and tools for the probabilistic modeling of digital systems, as documented mainly in NUREG/CR-6962 and NUREG/CR-6997. However, the scope of this research principally focused on hardware failures, with limited reviews of software failure experience and software reliability methods. NRC also sponsored research at the Ohio State University investigating the modeling of
Probabilistic confidence for decisions based on uncertain reliability estimates
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
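A minimal sketch of the probabilistic-confidence idea, under the common assumption (not necessarily the paper's exact formulation) that the estimated failure probability is lognormally distributed about the true value:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def probabilistic_confidence(pf_median, log_sd, pf_target):
    """Confidence that the true failure probability does not exceed
    pf_target, assuming the estimate is lognormal with the given
    median and logarithmic standard deviation (an illustrative error
    model, not the paper's)."""
    z = (math.log(pf_target) - math.log(pf_median)) / log_sd
    return phi(z)

# A failure probability estimated at 1e-4, uncertain by a factor of ~3:
conf = probabilistic_confidence(1e-4, math.log(3.0), 1e-3)
print(conf)
```

A decision rule in this spirit would accept the design only when such confidence exceeds a preset level, which is how estimation uncertainty (e.g., from design errors) gets limited rather than ignored.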
Reliability estimation for 18Ni steel under low cycle fatigue using probabilistic technique
Lee, Ouk Sub; Choi, Hye Bin; Kim, Dong Hyeok; Kim, Hong Min [Inha Univ., Incheon (Korea, Republic of)
2008-07-01
In this study, the fatigue life of 18Ni maraging steel under both low and high cyclic conditions is estimated by using FORM (First Order Reliability Method). Fatigue models based on the strain approach, such as the Coffin-Manson fatigue theory and the Morrow mean stress method, are utilized. A limit state function including these two models was established. A case study for a material with given material properties was carried out to show the application of the proposed reliability estimation process. The effect of the mean stress of the varying fatigue loading on the failure probability has also been investigated.
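For a linear limit state with independent normal variables, FORM reduces to a closed-form reliability index. The sketch below uses invented capacity/demand numbers and the generic limit state g = R − S rather than the Coffin-Manson or Morrow limit states of the study:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_linear(mu_r, sd_r, mu_s, sd_s):
    """FORM for the linear limit state g = R - S with independent normal
    capacity R and demand S; here the Hasofer-Lind index is exact:
    beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2), Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    return beta, phi(-beta)

# Invented capacity/demand numbers for illustration:
beta, pf = form_linear(mu_r=600.0, sd_r=40.0, mu_s=450.0, sd_s=30.0)
print(beta, pf)
```

Nonlinear limit states like the strain-life equations require the iterative search for the most probable failure point that full FORM implementations perform; the linear case above only shows the β-to-Pf mechanics.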
Estimation of Reliability and Cost Relationship for Architecture-based Software
Hui Guan; Wei-Ru Chen; Ning Huang; Hong-Ji Yang
2010-01-01
In this paper, we propose a new method to estimate the relationship between software reliability and software development cost, taking into account the complexity of developing the software system and the size of the software to be developed during the implementation phase of the software development life cycle. On the basis of the estimated relationship, a set of empirical data has been used to validate the correctness of the proposed model by comparing its results with those of other existing models. The outcome of this work shows that the proposed method is a relatively straightforward one for formulating the relationship between reliability and cost during the implementation phase.
ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS
Z.-G. Zhou
2016-06-01
Normally, the status of land cover is inherently dynamic, changing continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused for example by forest fire, flood, deforestation, or plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances are important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods label detection results only with "change/no change", while few focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrate that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
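Step (3) can be sketched with a toy forecast model: treat the history as roughly normal and report the confidence level at which a new observation falls outside the forecast interval. This is a deliberate simplification of the BFAST-based procedure, with invented numbers:

```python
import math
import statistics

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def disturbance_reliability(history, new_value):
    """Confidence level at which new_value falls outside the forecast
    interval built from the history (assumed roughly normal); larger
    values mean a more reliable disturbance flag."""
    mu = statistics.fmean(history)
    sd = statistics.stdev(history)
    z = abs(new_value - mu) / sd
    return 2.0 * phi(z) - 1.0

history = [10, 11, 9, 10, 11, 9]            # invented stable observations
print(disturbance_reliability(history, 14))  # a clear departure
```

In the paper's setting the forecast comes from the seasonal-plus-trend BFAST model rather than a plain mean, but the CI/CL logic of converting a deviation into a reliability level is the same.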
Validation of Land Cover Products Using Reliability Evaluation Methods
Wenzhong Shi
2015-06-01
Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation schema to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.
A Data-Driven Reliability Estimation Approach for Phased-Mission Systems
Hua-Feng He
2014-01-01
We address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach to reliability estimation for PMS using condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from existing methods, which consider only the static scenario without using real-time information and aim to estimate the reliability for a population rather than for an individual. In the presented approach, to establish a linkage between the historical data and real-time information of the individual PMS, we adopt a stochastic filtering model for the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.
Availability and Reliability of FSO Links Estimated from Visibility
M. Tatarko
2012-06-01
This paper focuses on estimating the availability and reliability of FSO (Free Space Optics) systems. FSO is a system that allows optical transmission between two fixed points and can be described as a last-mile communication system. It is an optical communication system in which the propagation medium is air. This last-mile solution does not require expensive optical fiber, and establishing a connection is very simple. There are, however, drawbacks that adversely affect the quality of service and the availability of the link. A number of atmospheric phenomena, such as scattering, absorption, and turbulence, cause large variations in received optical power and laser beam attenuation. The influence of absorption and turbulence can be significantly reduced by an appropriate FSO link design, but visibility has the main influence on the quality of the optical transmission channel. Thus, in typical continental areas where rain, snow, or fog occurs, it is important to know their values. This article describes a device for measuring weather conditions and presents estimates of the availability and reliability of FSO links in Slovakia.
Reliability estimation for multiunit nuclear and fossil-fired industrial energy systems
Sullivan, W. G.; Wilson, J. V.; Klepper, O. H.
1977-06-29
As petroleum-based fuels grow increasingly scarce and costly, nuclear energy may become an important alternative source of industrial energy. Initial applications would most likely include a mix of fossil-fired and nuclear sources of process energy. A means for determining the overall reliability of these mixed systems is a fundamental aspect of demonstrating their feasibility to potential industrial users. Reliability data from nuclear and fossil-fired plants are presented, and several methods of applying these data for calculating the reliability of reasonably complex industrial energy supply systems are given. Reliability estimates made under a number of simplifying assumptions indicate that multiple nuclear units or a combination of nuclear and fossil-fired plants could provide adequate reliability to meet industrial requirements for continuity of service.
Methods of statistical model estimation
Hilbe, Joseph
2013-01-01
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C
2015-02-01
The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…
Reliable dual tensor model estimation in single and crossing fibers based on Jeffreys prior
J. Yang (Jianfei); D.H.J. Poot; M.W.A. Caan (Matthan); Su, T. (Tanja); C.B. Majoie (Charles); L.J. van Vliet (Lucas); F. Vos (Frans)
2016-01-01
Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition-adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD).
Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens;
We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python...... as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8)....
Reliability of fish size estimates obtained from multibeam imaging sonar
Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.
2013-01-01
Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.
Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David
2015-08-01
Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trial estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to the DSM-5 field trial estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved.
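The M κ values reported above are averages of Cohen's kappa, the chance-corrected agreement statistic. A minimal sketch for two raters' categorical codes (illustrative data, not from the study):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical codes (undefined when expected agreement equals 1)."""
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    p_exp = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Perfect agreement vs. chance-level agreement on invented codes:
print(cohens_kappa(["y", "y", "n", "n"], ["y", "y", "n", "n"]))
print(cohens_kappa(["y", "n", "y", "n"], ["y", "y", "n", "n"]))
```

The abstract's point is that the same statistic can look very different depending on whether the two ratings come from one recorded interview or from two independent interviews; kappa itself does not distinguish the two designs.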
Estimation of Small s-t Reliabilities in Acyclic Networks
Laumanns, Marco
2007-01-01
In the classical s-t network reliability problem a fixed network G is given including two designated vertices s and t (called terminals). The edges are subject to independent random failure, and the task is to compute the probability that s and t are connected in the resulting network, which is known to be #P-complete. In this paper we are interested in approximating the s-t reliability in case of a directed acyclic original network G. We introduce and analyze a specialized version of the Monte-Carlo algorithm given by Karp and Luby. For the case of uniform edge failure probabilities, we give a worst-case bound on the number of samples that have to be drawn to obtain an epsilon-delta approximation, being sharper than the original upper bound. We also derive a variance reduction of the estimator which reduces the expected number of iterations to perform to achieve the desired accuracy when applied in conjunction with different stopping rules. Initial computational results on two types of random networks (direc...
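The baseline crude Monte-Carlo estimator that Karp-Luby-style sampling improves upon can be sketched as follows (a toy four-edge network with uniform edge reliability; the paper's specialized, variance-reduced estimator is not reproduced here):

```python
import random

def mc_st_reliability(edges, s, t, p_up, n_samples=20000, seed=7):
    """Crude Monte-Carlo s-t reliability: each directed edge survives
    independently with probability p_up; return the fraction of sampled
    networks in which t is reachable from s."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        adj = {}
        for u, v in edges:
            if rng.random() < p_up:
                adj.setdefault(u, []).append(v)
        seen, stack = {s}, [s]  # DFS reachability on surviving edges
        while stack:
            u = stack.pop()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / n_samples

# Two parallel two-edge s->t paths; exact reliability is
# 1 - (1 - p^2)^2 = 0.9639 for p = 0.9.
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
est = mc_st_reliability(edges, "s", "t", 0.9)
print(est)
```

The crude estimator's weakness, which motivates the Karp-Luby approach, is that when the true reliability is extremely close to 0 or 1 the number of samples needed for a relative-error guarantee blows up.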
Structural Reliability Methods for Wind Power Converter System Component Reliability Assessment
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
is defined by the threshold model. The attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Structural Reliability approaches are used to incorporate model, physical and statistical uncertainties. Reliability estimation by means of structural...
Estimated Value of Service Reliability for Electric Utility Customers in the United States
Sullivan, M.J.; Mercurio, Matthew; Schellenberg, Josh
2009-06-01
Information on the value of reliable electricity service can be used to assess the economic efficiency of investments in generation, transmission and distribution systems, to strategically target investments to customer segments that receive the most benefit from system improvements, and to numerically quantify the risk associated with different operating, planning and investment strategies. This paper summarizes research designed to provide estimates of the value of service reliability for electricity customers in the US. These estimates were obtained by analyzing the results from 28 customer value of service reliability studies conducted by 10 major US electric utilities over the 16 year period from 1989 to 2005. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods it was possible to integrate their results into a single meta-database describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the US for industrial, commercial, and residential customers. Estimated interruption costs for different types of customers and of different duration are provided. Finally, additional research and development designed to expand the usefulness of this powerful database and analysis are suggested.
Is biomass a reliable estimate of plant fitness?
Younginger, Brett S.; Sirová, Dagmara; Cruzan, Mitchell B.; Ballhorn, Daniel J.
2017-01-01
The measurement of fitness is critical to biological research. Although the determination of fitness for some organisms may be relatively straightforward under controlled conditions, it is often a difficult or nearly impossible task in nature. Plants are no exception. The potential for long-distance pollen dispersal, likelihood of multiple reproductive events per inflorescence, varying degrees of reproductive growth in perennials, and asexual reproduction all confound accurate fitness measurements. For these reasons, biomass is frequently used as a proxy for plant fitness. However, the suitability of indirect fitness measurements such as plant size is rarely evaluated. This review outlines the important associations between plant performance, fecundity, and fitness. We make a case for the reliability of biomass as an estimate of fitness when comparing conspecifics of the same age class. We reviewed 170 studies on plant fitness and discuss the metrics commonly employed for fitness estimations. We find that biomass or growth rate are frequently used and often positively associated with fecundity, which in turn suggests greater overall fitness. Our results support the utility of biomass as an appropriate surrogate for fitness under many circumstances, and suggest that additional fitness measures should be reported along with biomass or growth rate whenever possible. PMID:28224055
Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine
Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon
2013-01-01
as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment and in the conduct of clinical studies. Several reliability studies are conducted in western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine...... diagnoses is in the formative stage. However, reliability studies in Ayurveda are in the preliminary stage. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implication in practice, education, and training. An introduction...
Methods for estimating the semivariogram
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
. In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...... is insensitive to the choice of estimation method, but also that the uncertainties of predictions were reduced when applying maximum likelihood....
2014-01-01
The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is assessed based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator for the fuzzy confidence interval is one whose obtained coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cut for the narrowest lower expected length and the narrowest upper expected length is considered. PMID:24987728
Estimating uncertainty and reliability of social network data using Bayesian inference.
Farine, Damien R; Strandburg-Peshkin, Ariana
2015-09-01
Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
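The Bayesian idea described above can be sketched with a minimal example. This is a hedged illustration, not the authors' implementation: it treats an edge weight as an association probability with a uniform Beta(1, 1) prior and derives a credible interval from x joint sightings in n sampling periods (all numbers are hypothetical).

```python
import random

def edge_posterior_interval(x, n, alpha=0.05, draws=20000, seed=0):
    """Posterior credible interval for an association rate, using a
    uniform Beta(1, 1) prior and x joint sightings in n sampling periods.
    The posterior is Beta(1 + x, 1 + n - x); we summarize it by sampling."""
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(1 + x, 1 + n - x) for _ in range(draws))
    lo = samples[int(alpha / 2 * draws)]
    hi = samples[int((1 - alpha / 2) * draws)]
    return lo, hi

# Sparse sampling: 3 joint observations in 10 periods.
lo, hi = edge_posterior_interval(x=3, n=10)
```

With sparse sampling (small n) the interval stays wide, which is exactly the uncertainty the abstract argues should be carried into downstream network metrics.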
Empirical Study of Travel Time Estimation and Reliability
Ruimin Li; Huajun Chai; Jin Tang
2013-01-01
This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...
Lirong Sha; Tongyu Wang
2016-01-01
In order to evaluate the failure probability of a complicated structure, the structural responses usually need to be estimated by numerical analysis methods such as the finite element method (FEM). The response surface method (RSM) can be used to reduce the computational effort required for reliability analysis when the performance functions are implicit. However, the conventional RSM is time-consuming or cumbersome when the number of random variables is large. This paper proposes a Legendre orthogonal neural network (LONN)-based RSM to estimate structural reliability. In this method, the relationship between the random variables and structural responses is established by a LONN model. The LONN model is then coupled to a reliability analysis method, i.e. the first-order reliability method (FORM), to calculate the failure probability of the structure. Numerical examples show that the proposed approach is applicable to structural reliability analysis, including structures with implicit performance functions.
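For a self-contained sense of the FORM step the surrogate feeds into, here is a minimal sketch (an illustration under stated assumptions, not the paper's LONN-based method): when the limit state is linear in independent standard normal variables, the Hasofer-Lind reliability index and failure probability have closed forms.

```python
import math

def form_beta_linear(a0, a):
    """Hasofer-Lind reliability index for a linear limit state
    g(u) = a0 + sum(a_i * u_i) in independent standard normal space:
    beta = a0 / ||a||, and Pf = Phi(-beta)."""
    norm = math.sqrt(sum(ai * ai for ai in a))
    beta = a0 / norm
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # standard normal CDF at -beta
    return beta, pf

beta, pf = form_beta_linear(3.0, [1.0, 1.0])  # beta = 3 / sqrt(2)
```

For nonlinear implicit limit states, which is the case the paper targets, β must instead be found by iterative search (e.g. the HL-RF algorithm) on the surrogate model.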
Reliability Estimations of Control Systems Effected by Several Interference Sources
Deng Bei-xing; Jiang Ming-hu; Li Xing
2003-01-01
In order to establish the sufficient and necessary condition under which arbitrarily reliable systems cannot be constructed from function elements subject to interference sources, it is very important to expand the set of interference sources with the above property. In this paper, models of two types of interference sources are proposed: an interference source possessing real input vectors, and a constant-reliability interference source. We study the reliability of systems affected by these interference sources, and a lower bound on the reliability is presented. The results show that it is impossible to construct arbitrarily reliable systems with elements affected by the above interference sources.
Numerical Model based Reliability Estimation of Selective Laser Melting Process
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
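The Monte Carlo uncertainty step described above can be sketched generically. The process model, parameter names, and distributions below are placeholders (the paper uses a calibrated 3D finite-volume model), shown only to illustrate propagating input uncertainty into an output range.

```python
import random

def monte_carlo_range(model, dists, n=10000, seed=1):
    """Propagate input uncertainty through `model` and return the sampled
    output mean and a central 95% range."""
    rng = random.Random(seed)
    outs = sorted(model(*(d(rng) for d in dists)) for _ in range(n))
    mean = sum(outs) / n
    return mean, (outs[int(0.025 * n)], outs[int(0.975 * n)])

# Illustrative stand-in for a process model: melt depth vs power and speed.
melt_depth = lambda p, v: 0.02 * p / v
dists = [lambda r: r.gauss(200.0, 10.0),   # laser power (W), assumed
         lambda r: r.gauss(1.0, 0.05)]     # scan speed (m/s), assumed
mean, (lo, hi) = monte_carlo_range(melt_depth, dists, n=5000)
```

The resulting output range, compared against a process requirement, is what allows a reliability statement of the kind the abstract describes.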
HDMR methods to assess reliability in slope stability analyses
Kozubal, Janusz; Pula, Wojciech; Vessia, Giovanna
2014-05-01
Stability analyses of complex rock-soil deposits must account for the complex structure of discontinuities within the rock mass and embedded soil layers. These materials are characterized by high variability in physical and mechanical properties. Thus, to calculate the slope safety factor in stability analyses, two issues must be taken into account: 1) the uncertainties related to the structural setting of the rock-slope mass and 2) the variability in mechanical properties of soils and rocks. High Dimensional Model Representation (HDMR) (Chowdhury et al. 2009; Chowdhury and Rao 2010) can be used to compute the reliability index for complex rock-soil slopes when numerous random variables with high coefficients of variation are considered. HDMR implements inverse reliability analysis, meaning that the unknown design parameters are sought such that prescribed reliability index values are attained. This approach uses implicit response functions according to the Response Surface Method (RSM). The simple RSM can be efficiently applied when fewer than four random variables are considered; as the number of variables increases, the efficiency of reliability index estimation decreases due to the large amount of calculation required. Therefore, the HDMR method is used to improve the computational accuracy. In this study, the sliding mechanism in the Polish Flysch Carpathian Mountains has been studied by means of HDMR. The southern part of Poland, where the Carpathian Mountains are located, is characterized by a rather complicated sedimentary pattern of flysch rocky-soil deposits that can be simplified into three main categories: (1) normal flysch, consisting of adjacent sandstone and shale beds of approximately equal thickness, (2) shale flysch, where shale beds are thicker than adjacent sandstone beds, and (3) sandstone flysch, where the opposite holds. Landslides occur in all flysch deposit types, thus some configurations of possible unstable settings (within fractured rocky
Yifan Wang
2014-05-01
A control method based on real-time operational reliability evaluation for a space manipulator is presented to improve the success rate of the manipulator during the execution of a task. In this paper, a method for quantitative analysis of operational reliability is given for a manipulator executing a specified task; then a control model which can regulate the quantitative operational reliability is built. First, the control process is described using a state space equation. Second, process parameters are estimated in real time using a Bayesian method. Third, the expression of the system's real-time operational reliability is deduced based on the state space equation and the estimated process parameters. Finally, a control variable regulation strategy which considers the cost of control is given, based on the theory of Statistical Process Control. It is shown via simulations that this method effectively improves the operational reliability of the space manipulator control system.
Reliability Analysis of Slope Stability by Central Point Method
Li, Chunge; WU Congliang
2015-01-01
Given the uncertainty and variability of slope stability analysis parameters, this paper proceeds from the perspective of probability theory and statistics, based on reliability theory. Through the central point method of reliability analysis, a performance function for the reliability of slope stability analysis is established. Furthermore, the central point method and conventional limit equilibrium methods are compared through a calculation example. The approach's numerical ...
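As a concrete illustration of the central point method the abstract refers to: for the performance function Z = R − S with independent resistance R and load effect S, the mean-value reliability index follows directly from the first two moments. The numbers below are hypothetical, not from the paper's example.

```python
import math

def central_point_beta(mu_r, sigma_r, mu_s, sigma_s):
    """Central point (mean-value first-order second-moment) reliability
    index for the performance function Z = R - S with independent R, S."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # failure prob. if Z is normal
    return beta, pf

# Illustrative slope example: resisting vs driving moment (assumed values).
beta, pf = central_point_beta(mu_r=1500.0, sigma_r=150.0,
                              mu_s=1000.0, sigma_s=200.0)
```

The method only uses means and standard deviations evaluated at the central point, which is what makes it simple but also less accurate than FORM for strongly nonlinear performance functions.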
Reliability of panoramic radiography in chronological age estimation
Ramanpal Singh Makkad
2013-01-01
Introduction: There is a strong relationship between the growth rates of bone and teeth, which can be utilized for the purpose of age identification of an individual. Aims and Objectives: The present study was designed to determine the relationship between dental age, the age estimated from dental panoramic radiography, skeletal age, and chronological age. Materials and Methods: The study included 270 individuals, aged between 17 and 25 years, from the out-patient department of New Horizon Dental College and Hospital, Sakri, Bilaspur, Chhattisgarh, India, presenting for third molar surgery. Panoramic and hand-wrist radiographs were taken, and the films were digitally processed for visualization of the wisdom teeth. Age estimations were repeated by a radiologist at an interval of 4 weeks. The extracted wisdom teeth were placed in 10% formalin and examined by one dental surgeon to estimate age on the basis of root formation. Student's t-test was adopted for statistical analysis, and the probability (P) value was calculated. Conclusion: Estimating the age of an individual by examining the extracted third molar was accurate. Age estimation through panoramic radiography was highly accurate in the upper right quadrant (mean = 0.72 and P = 0.077).
Zaporozhanov V.A.
2012-12-01
In the conditions of sporting-pedagogical practice, objective estimation of the potential of trainees at the initial stages of long-term preparation is regarded as a topical issue. Appropriate quantitative information allows the preparation of trainees to be individualized according to the requirements of the guided training process. Research purpose: to demonstrate, logically and metrologically, the expedience of a metrological method for calculating the reliability of control measurement results used for diagnosing psychophysical fitness and predicting the professional growth of trainees in a selected sport. Material and methods: the results of control measurements on four indices of psychophysical preparedness, together with expert estimations of fitness, were analysed for 24 trainees of a children's gymnastics school. The results of initial and final inspections of the gymnasts on the same control tests were processed by methods of mathematical statistics. Metrological estimates of the reliability of the measurements (stability, consistency, and informativeness of the control information) were computed for current diagnostics and prognosis of the sporting potential of those inspected. Results: the expedience of using metrological calculation of a complex estimate of the psychophysical state of trainees for these aims is metrologically grounded. Conclusions: the research results confirm the expedience of calculating a complex estimate of the psychophysical features of trainees for diagnosing fitness in a selected sport and for professional prognosis at subsequent stages of preparation.
Extrapolation Method for System Reliability Assessment
Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro
2012-01-01
The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations....... The scheme is extended so that it can be applied to cases where the asymptotic property may not be valid and/or the random variables are not normally distributed. The performance of the scheme is investigated by four principal series and parallel systems and some practical examples. The results indicate...... of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals...
A Penalized Likelihood Approach to Parameter Estimation with Integral Reliability Constraints
Barry Smith
2015-06-01
Stress-strength reliability problems arise frequently in applied statistics and related fields. Often they involve two independent and possibly small samples of measurements on strength and breakdown pressures (stress). The goal of the researcher is to use the measurements to obtain inference on reliability, which is the probability that stress will exceed strength. This paper addresses the case where reliability is expressed in terms of an integral which has no closed form solution and where the number of observed values of stress and strength is small. We find that the Lagrange approach to estimating constrained likelihood, necessary for inference, often performs poorly. We introduce a penalized likelihood method, which appears to always work well. We use third order likelihood methods to partially offset the issue of small samples. The proposed method is applied to draw inferences on reliability in stress-strength problems with independent exponentiated exponential distributions. Simulation studies are carried out to assess the accuracy of the proposed method and to compare it with some standard asymptotic methods.
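To make the stress-strength setting concrete, here is a hedged sketch (a plain Monte Carlo check, not the paper's penalized-likelihood inference): under the common convention R = P(X > Y), with strength X ~ EE(a_x, λ) and stress Y ~ EE(a_y, λ) sharing a scale parameter, the reliability has the closed form a_x / (a_x + a_y), which a simulation can confirm.

```python
import math
import random

def rexpexp(rng, alpha, lam):
    """Draw from the exponentiated exponential EE(alpha, lam), with CDF
    F(x) = (1 - exp(-lam * x)) ** alpha, by inverse-CDF sampling."""
    u = rng.random()
    return -math.log(1.0 - u ** (1.0 / alpha)) / lam

def stress_strength_mc(a_x, a_y, lam, n=50000, seed=2):
    """Monte Carlo estimate of R = P(X > Y) for strength X ~ EE(a_x, lam)
    and stress Y ~ EE(a_y, lam) with a shared scale parameter."""
    rng = random.Random(seed)
    wins = sum(rexpexp(rng, a_x, lam) > rexpexp(rng, a_y, lam)
               for _ in range(n))
    return wins / n

r = stress_strength_mc(a_x=2.0, a_y=1.0, lam=1.0)  # closed form: 2/3
```

The paper's harder case is when the scales differ, so the reliability integral has no closed form and inference must work through the likelihood.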
Is visual estimation of passive range of motion in the pediatric lower limb valid and reliable
Dagher Fernand
2009-10-01
Background Visual estimation (VE) is an essential tool for the evaluation of range of motion. Few papers have discussed its validity in pediatric orthopedic practice. The purpose of our study was to assess the validity and reliability of VE for passive range of motions (PROMs) of children's lower limbs. Methods Fifty typically developing children (100 lower limbs) were examined. Visual estimations for PROMs of the hip (flexion, adduction, abduction, internal and external rotations), knee (flexion and popliteal angle) and ankle (dorsiflexion and plantarflexion) were made by a pediatric orthopaedic surgeon (POS) and a 5th year resident in orthopaedics. A last-year medical student took goniometric measurements. Three weeks later, the same measurements were performed to assess the reliability of visual estimation for each examiner. Results Visual estimations of the POS were highly reliable for hip flexion, hip rotations and popliteal angle (ρc ≥ 0.8). Reliability was good for hip abduction, knee flexion, ankle dorsiflexion and plantarflexion (ρc ≥ 0.7) but poor for hip adduction (ρc = 0.5). Reproducibility for all PROMs was verified. The resident's VE showed high reliability (ρc ≥ 0.8) for hip flexion and popliteal angle. Good correlation was found for hip rotations and knee flexion (ρc ≥ 0.7). Poor results were obtained for ankle PROMs (ρc Conclusion Accuracy of VE of passive hip flexion and knee PROMs is high regardless of the examiner's experience. The same accuracy can be found for hip rotations and abduction whenever VE is performed by an experienced examiner. Goniometric evaluation is recommended for passive hip adduction and for ankle PROMs.
Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine: An overview.
Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon; Prasad, Ramjee
2013-04-01
Recently, a need to develop supportive new scientific evidence for contemporary Ayurveda has emerged. One of the research objectives is an assessment of the reliability of diagnoses and treatment. Reliability is a quantitative measure of consistency. It is a crucial issue in classification (such as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment and in the conduct of clinical studies. Several reliability studies are conducted in western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine diagnoses is in the formative stage. However, reliability studies in Ayurveda are in the preliminary stage. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implication in practice, education, and training. An introduction to reliability estimates and different study designs and statistical analysis is given for future studies in Ayurveda.
Assessment Method of Heavy NC Machine Reliability Based on Bayes Theory
张雷; 王太勇; 胡占齐
2016-01-01
It is difficult to collect the prior information for small-sample machinery products when their reliability is assessed by using the Bayes method. In this study, an improved Bayes method with gradient reliability (GR) results as prior information was proposed to solve the problem. A certain type of heavy NC boring and milling machine was considered as the research subject, and its reliability model was established on the basis of its functional and structural characteristics and working principle. According to the stress-intensity interference theory and the reliability model theory, the GR results of the host machine and its key components were obtained. Then the GR results were used as prior information to estimate the probabilistic reliability (PR) of the spindle box, the column and the host machine in the present method. The comparative studies demonstrated that the improved Bayes method was applicable in the reliability assessment of heavy NC machine tools.
A particle swarm model for estimating reliability and scheduling system maintenance
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability-centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.
Early Stage Software Reliability Estimation with Stochastic Reward Nets
ZHAO Jing; LIU Hong-wei; CUI Gang; YANG Xiao-zong
2005-01-01
This paper presents software reliability modeling issues at the early stage of software development for a fault-tolerant software management system. Based on Stochastic Reward Nets, an effective hierarchical-view model of a fault-tolerant software management system is put forward, and an approach based on system transient performance analysis is adopted. A quantitative approach for software reliability analysis is given. The results show its usefulness for the design and evaluation of early-stage software reliability modeling when failure data is not available.
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Tunnel Cost-Estimating Methods.
1981-10-01
The LINING routine calculates the lining costs and the formwork cost for a tunnel or shaft segment. (Technical Report GL-81-10, "Tunnel Cost-Estimating Methods," by R. D. Bennett, Army Engineer Waterways Experiment Station, Vicksburg, October 1981.)
A reliable sealing method for microbatteries
Wang, Yuxing; Cartmell, Samuel; Li, Qiuyan; Xiao, Jie; Li, Huidong; Deng, Zhiqun Daniel; Zhang, Ji-Guang
2017-02-01
With the continuous downsizing of electronic devices, lithium batteries of traditional shapes cannot meet the demand where small-size, high-energy-density batteries are needed. Conventional sealing methods become increasingly difficult to apply and impose high processing costs as battery size decreases. In this report, a facile sealing method is proposed and demonstrated in CFx/Li mini-batteries. The method employs a temporary barrier to liquid electrolytes while relying on the epoxy/cell-casing bond for the hermetic seal. Cells sealed by this method show no degradation for an extended period of storage time.
Reliability Analysis for Tunnel Supports System by Using Finite Element Method
E. Bukaçi
2016-09-01
Reliability analysis is a method that can be used in almost any geotechnical engineering problem. Using this method requires knowledge of the parameter uncertainties, which can be expressed by their standard deviation values. By performing reliability analysis of tunnel support design, a range of safety factors can be obtained, and from them the probability of failure can be calculated. The problem becomes more complex when this analysis is performed with numerical methods, such as the Finite Element Method. This paper shows how reliability analysis can be performed for the design of tunnel supports, using the Point Estimate Method to calculate the reliability index. As a case study, one of the energy tunnels at the Fan Hydropower plant in Rrëshen, Albania, is chosen. As results, values of the factor of safety and the probability of failure are calculated. Some suggestions for using reliability analysis with numerical methods are also given.
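The Point Estimate Method used to compute the reliability index is classically Rosenblueth's two-point scheme. The sketch below applies it to a hypothetical linear safety-factor function; the model and all parameter values are assumptions for illustration, not the Fan tunnel data.

```python
import itertools
import math

def rosenblueth_pem(g, mus, sigmas):
    """Rosenblueth's two-point estimate of the mean and standard deviation
    of g(X) for uncorrelated, symmetric inputs: evaluate g at all 2**n
    combinations of mu_i +/- sigma_i, each with weight 1 / 2**n."""
    n = len(mus)
    vals = [g([m + s * e for m, s, e in zip(mus, sigmas, signs)])
            for signs in itertools.product((-1.0, 1.0), repeat=n)]
    mean = sum(vals) / len(vals)
    var = sum(v * v for v in vals) / len(vals) - mean * mean
    return mean, math.sqrt(max(var, 0.0))

# Illustrative safety factor from cohesion and friction terms (assumed model).
fs = lambda x: (x[0] + 50.0 * x[1]) / 100.0
mean, sd = rosenblueth_pem(fs, mus=[60.0, 1.2], sigmas=[12.0, 0.2])
```

From these two moments a reliability index can then be formed, e.g. β = (mean − 1) / sd for a safety factor whose limit state is FS = 1. For a linear g the two-point estimates are exact.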
Singh, A.; Deeds, N.; Kelley, V.
2012-12-01
Estimating how much water is being used by various water users is key to effective management and optimal utilization of groundwater resources. This is especially true for aquifers like the Ogallala that are severely stressed and have shown declining trends for many years. The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Water users within the 16 counties that comprise the HPWD draw from the Ogallala extensively. The HPWD has recently proposed flow meters as well as various 'alternative methods' for water users to report water usage. Alternative methods include using a) site-specific energy conversion factors to convert the total amount of energy used (for pumping stations) to water pumped, b) reporting nozzle package (on center pivot irrigation systems) specifications and hours of usage, and c) reporting concentrated animal feeding operations (CAFOs). The focus of this project was to evaluate the reliability and effectiveness of each of these water use estimation techniques for regulatory purposes. The reliability and effectiveness of direct flow-metering devices was also addressed. Findings indicate that due to site-specific variability and hydrogeologic heterogeneity, alternative methods for estimating water use can have significant uncertainties associated with water use estimates. The impact of these uncertainties on overall water usage, conservation, and management was also evaluated. The findings were communicated to the Stakeholder Advisory Group and the Water Conservation District with guidelines and recommendations on how best to implement the various techniques.
A reliable sealing method for microbatteries
Wang, Yuxing; Cartmell, Samuel; Li, Qiuyan; Xiao, Jie; Li, Huidong; Deng, Zhiqun Daniel; Zhang, Ji-Guang
2017-02-01
As electronic devices continue to become smaller, their energy sources (i.e., batteries) also need to be smaller. Typically, energy densities of batteries decrease as the battery size decreases due to the relative increase of parasitic weight such as packaging materials. In addition, the sealing methods in conventional batteries are difficult to apply to microbatteries. In this work, we developed a facile sealing method for microbatteries. The method employs a dual-sealing concept: a first rubber barrier temporally confines the organic electrolytes and a second adhesive barrier forms a hermetic seal with the battery case. With this innovative sealing approach, excellent shelf life and operation life of the batteries have been demonstrated. A minimal amount of packing materials is employed resulting in high energy densities.
Spanning Trees and bootstrap reliability estimation in correlation based networks
Tumminello, M; Lillo, F; Micciché, S; Mantegna, R N
2006-01-01
We introduce a new technique to associate a spanning tree with average linkage cluster analysis. We term this tree the Average Linkage Minimum Spanning Tree. We also introduce a technique to associate a value of reliability with the links of correlation-based graphs by using bootstrap replicas of the data. Both techniques are applied to the portfolio of the 300 most capitalized stocks traded on the New York Stock Exchange during the period 2001-2003. We show that the Average Linkage Minimum Spanning Tree recognizes economic sectors and sub-sectors as communities in the network slightly better than the Minimum Spanning Tree does. We also show that the average reliability of links in the Minimum Spanning Tree is slightly greater than the average reliability of links in the Average Linkage Minimum Spanning Tree.
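The bootstrap link-reliability idea can be sketched on a single correlation link. This is a simplified illustration with made-up return series; the paper applies the idea to every link of spanning trees built from 300 stocks.

```python
import random

def corr(xs, ys):
    """Pearson correlation, guarded against zero-variance resamples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return sxy / den if den > 0.0 else 0.0

def bootstrap_link_reliability(xs, ys, threshold=0.0, reps=1000, seed=3):
    """Fraction of bootstrap replicas in which the correlation between two
    series exceeds `threshold` -- a reliability value for the link."""
    rng = random.Random(seed)
    n = len(xs)
    hits = 0
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        hits += corr([xs[i] for i in idx], [ys[i] for i in idx]) > threshold
    return hits / reps

# Hypothetical daily returns for two strongly co-moving stocks.
xs = [0.1, -0.2, 0.3, 0.05, -0.1, 0.2, -0.15, 0.25]
ys = [0.12, -0.18, 0.25, 0.0, -0.08, 0.22, -0.1, 0.2]
rel = bootstrap_link_reliability(xs, ys)
```

A link present in nearly all bootstrap replicas of the tree is considered reliable; a link whose statistic fluctuates across replicas is not.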
Generating human reliability estimates using expert judgment. Volume 1. Main report
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from the evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
Generating human reliability estimates using expert judgment. Volume 2. Appendices. [PWR; BWR]
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results.
Khan, M.A.; Kerkhoff, Hans G.
2013-01-01
Reliability of electronic systems has been thoroughly investigated in literature and a number of analytical approaches at the design stage are already available via examination of the circuit-level reliability effects based on device-level models. Reliability estimation during operational life of an
Operational Procedures for Optimized Reliability and Component Life Estimator (ORACLE)
1975-12-01
Figure 1. Block diagram of the reliability prediction program routines (cross-hatched boxes), the required inputs, and the resulting total failure rate and TTF. ... describe and/or identify, in some significant way, the particular piece of equipment associated with the parts or module. The maintenance of a
Reliability-based design optimization with Cross-Entropy method
Ghidey, Hiruy
2015-01-01
Implementation of the cross-entropy (CE) method to solve reliability-based design optimization (RBDO) problems was investigated. The emphasis of this implementation was on solving the reliability and optimization sub-problems independently within the RBDO problem; therefore, the main aim of this study was to evaluate the performance of the cross-entropy method in terms of efficiency and accuracy in solving RBDO problems. A numerical approach was followed in which the implementatio...
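The CE method referenced above, in its generic optimization form, is easy to sketch: sample candidates from a parametric distribution, keep the elite fraction, refit the distribution to the elite, and repeat. This is a minimal 1-D illustration under assumed settings, not the thesis' RBDO formulation.

```python
import random

def cross_entropy_minimize(f, mu, sigma, n=200, elite=20, iters=40, seed=4):
    """Cross-entropy method: sample candidates from N(mu, sigma), refit the
    sampling distribution to the best `elite` candidates, and iterate until
    it concentrates near a minimizer of f."""
    rng = random.Random(seed)
    for _ in range(iters):
        xs = sorted((rng.gauss(mu, sigma) for _ in range(n)), key=f)
        best = xs[:elite]
        mu = sum(best) / elite
        sigma = max((sum((x - mu) ** 2 for x in best) / elite) ** 0.5, 1e-12)
    return mu

x_star = cross_entropy_minimize(lambda x: (x - 3.0) ** 2, mu=0.0, sigma=5.0)
```

In an RBDO setting, the same loop would run over design variables, with f combining cost and a reliability constraint evaluated by the inner reliability sub-problem.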
Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues
Ronald Laurids Boring
2010-11-01
This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.
A reliability assessment method based on support vector machines for CNC equipment
WU Jun; DENG Chao; SHAO XinYu; XIE S Q
2009-01-01
With the applications of high technology, a catastrophic failure of CNC equipment rarely occurs at normal operation conditions. So it is difficult for traditional reliability assessment methods based on time-to-failure distributions to deduce the reliability level. This paper presents a novel reliability assessment methodology to estimate the reliability level of equipment with machining performance degradation data when only a few samples are available. The least squares support vector machines (LS-SVM) are introduced to analyze the performance degradation process on the equipment. A two-stage parameter optimization and searching method is proposed to improve the LS-SVM regression performance, and a reliability assessment model based on the LS-SVM is built. A machining performance degradation experiment has been carried out on an OTM650 machine tool to validate the effectiveness of the proposed reliability assessment methodology.
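The LS-SVM regression named in the abstract above can be sketched compactly: it replaces the SVM's inequality constraints with equality constraints, so training reduces to one linear system. The RBF kernel width, regularization value gamma, and toy data below are illustrative assumptions, not the paper's settings.

```python
import math

def rbf(x, z, sigma=1.0):
    """RBF kernel between two scalar inputs."""
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma=10.0):
    """Solve the standard LS-SVM dual system [[0, 1^T], [1, K + I/gamma]]."""
    n = len(xs)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] + list(ys)
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = 1.0
        for j in range(n):
            A[i + 1][j + 1] = rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
    sol = solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.0, 0.48, 0.84, 1.0, 0.91]   # roughly sin(x), stand-in degradation data
f = lssvm_fit(xs, ys)
print(f(1.0))  # regularized fit, so close to (but not exactly) 0.84
```

The two-stage parameter search described in the abstract would then tune `sigma` and `gamma`, e.g. by cross-validation over a grid.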
Reliability Evaluation Method for IP Multicast Communication under QoS Constraints
Dai Fusheng; Bao Xuecai; Han Weizhan
2011-01-01
In order to estimate the reliability performance of multicast communication under multiple constraint conditions, the weight of service rate and the reliability index are defined, together with their calculation methods. Firstly, according to the Quality of Service requirements, the appropriate routings between the central node and the target nodes that meet the requirements are calculated using an iterative method in the weighted network. Then, the disjoint set of network states and the coefficients of the weighted service rate are calculated by decomposition and merge methods. Lastly, the formula for calculating the service rate is obtained based on the disjoint set of network states, and the calculation of the reliability index is completed. The simulation results show that the reliability of multicast communication can be appropriately reflected by the weight of service rate and the calculation method, which can provide a theoretical basis for the reliability evaluation of multicast communication.
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
Engineer’s estimate reliability and statistical characteristics of bids
Fariborz M. Tehrani
2016-12-01
The objective of this report is to provide a methodology for examining bids and evaluating the performance of engineer's estimates in capturing the true cost of projects. This study reviews the cost development for transportation projects in addition to two sources of uncertainty in a cost estimate: modeling errors and inherent variability. Sample projects are highway maintenance projects with a similar scope of work, size, and schedule. Statistical analysis of engineering estimates and bids examines the adaptability of statistical models for the sample projects. Further, the variation of engineering cost estimates from inception to implementation is presented and discussed for selected projects. Moreover, the applicability of extreme value theory is assessed for the available data. The results indicate that the performance of the engineer's estimate is best evaluated based on the trimmed average of bids, excluding discordant bids.
Estimating the Reliability of Electronic Parts in High Radiation Fields
Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd
2008-01-01
Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission. Therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design, reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.
A Method for Analyzing System Reliability of Existing Jacket Platforms
HE Yong; GONG Shun-feng; JIN Wei-liang
2008-01-01
Owing to the ageing of existing structures worldwide and the lack of codes for the continued safety management of structures during their lifetime, it is necessary to develop a tool to evaluate their system reliability over a time interval. In this paper, a method is proposed to analyze the system reliability of existing jacket platforms. The influences of dents, cracks and corrosion are considered. The mechanical characteristics of the existing jacket platforms under extreme loads are analyzed by use of nonlinear mechanical analysis. The nonlinear pile-soil-structure interaction is taken into consideration in the analysis. By use of the FEM and Monte Carlo simulation, the system reliability of an existing jacket platform can be obtained. The method has been illustrated through application to three BZ28-1 jacket platforms which have operated for sixteen years. Advantages of the proposed method for analyzing the system reliability of existing jacket platforms are also highlighted.
Age estimation from physiological changes of teeth: A reliable age marker?
Nishant Singh
2014-01-01
Background: Age is an essential factor in establishing the identity of a person. Teeth are one of the most durable and resilient parts of the skeleton. Gustafson (1950) suggested the use of six retrogressive dental changes that are seen with increasing age. Aim: The aim of the study was to evaluate the results and to check the reliability of the modified Gustafson's method for determining the age of an individual. Materials and Methods: A total of 70 patients in the age group of 20-65 years, undergoing extraction, were included in the present work. The ground sections of extracted teeth were prepared and examined under the microscope. Modified Gustafson's criteria were used for the estimation of age. Degree of attrition, root translucency, secondary dentin deposition, cementum apposition, and root resorption were measured. A linear regression formula was obtained using different statistical equations in a sample of 70 patients. Results: The mean age difference of the 70 cases studied was ±2.64 years. The difference between actual and calculated age was significant at the 5% level of significance, that is, t-cal > t-tab (t-cal = 7.72), P < 0.05, indicating that the results were statistically significant. Conclusion: The present study concludes that Gustafson's method is a reliable method for age estimation with some proposed modifications.
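The regression step in the study above (age regressed on graded dental changes) amounts to an ordinary least-squares fit. A minimal sketch follows; the score/age pairs are invented for illustration and are not the study's data.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit: returns (a, b) so that y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx           # slope: years of age per unit of total score
    a = my - b * mx         # intercept
    return a, b

# Hypothetical total dental-change scores and known ages:
scores = [2, 4, 5, 7, 9, 11]
ages = [22, 30, 34, 43, 51, 60]
a, b = linear_fit(scores, ages)
print(a, b)  # estimated age of a new case ~ a + b * score
```

With such a fit, the residuals on the training sample give the kind of mean error (e.g. ±2.64 years above) the study reports.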
Huang, Liping; Crino, Michelle; Wu, Jason Hy
2016-01-01
BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean p...
Novel Software Reliability Estimation Model for Altering Paradigms of Software Engineering
Ritika Wason
2012-05-01
A number of different software engineering paradigms, such as Component-Based Software Engineering (CBSE), Autonomic Computing, Service-Oriented Computing (SOC), Fault-Tolerant Computing and many others, are being researched currently. These paradigms denote a shift from the currently mainstream object-oriented paradigm and are altering the way we view, design, develop and exercise software. Though these paradigms indicate a major shift in the way we design and code software, we still rely on traditional reliability models for estimating the reliability of any of the above systems. This paper analyzes the underlying characteristics of these paradigms and proposes a novel finite-automata-based reliability model as a suitable model for estimating the reliability of modern, complex, distributed and critical software applications. We further outline the basic framework for an intelligent, automata-based reliability model that can be used for accurate estimation of the system reliability of software systems at any point in the software life cycle.
Estimation on the Reliability of Farm Vehicle Based on Artificial Neural Network
WANG Jinwu
2008-01-01
As a peculiar product in China today, farm vehicles play an important role in the economic construction and development of the countryside, but their operational reliability remains low. In this paper, truncated tracking was used to address the low reliability of farm vehicles. Relevant reliability data were obtained by tracking a certain vehicle model and conducting reliability experiments. Data analysis revealed that the weakest part of the vehicle system was the engine assembly. The theory of artificial neural networks was employed to estimate a parameter of the reliability model based on a self-adaptive linear neural network, and the reliability function derived from the estimation can provide important theoretical references for reliability reassignment, manufacture and management of farm transport vehicles.
Reliable estimation of orbit errors in spaceborne SAR interferometry
Bähr, H.; Hanssen, R.F.
2012-01-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of
Anupam Pathak
2014-11-01
Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
Estimating the reliability of eyewitness identifications from police lineups.
Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William
2016-01-12
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.
Bayesian Network Enhanced with Structural Reliability Methods: Methodology
Straub, Daniel; Der Kiureghian, Armen
2012-01-01
We combine Bayesian networks (BNs) and structural reliability methods (SRMs) to create a new computational framework, termed enhanced Bayesian network (eBN), for reliability and risk analysis of engineering structures and infrastructure. BNs are efficient in representing and evaluating complex probabilistic dependence structures, as present in infrastructure and structural systems, and they facilitate Bayesian updating of the model when new information becomes available. On the other hand, SR...
J. Arora
2014-09-01
Dental ageing is important in medico-legal cases when teeth are the only material available to the investigating agencies for the identification of the deceased. Attrition, the wear of the occlusal surface of a tooth (a physiological change), can be used as a determinant parameter for this purpose. The present study was undertaken to examine the reliability of attrition as a sole parameter for age estimation among North Western adult Indians. 109 (43 males, 66 females) single-rooted freshly extracted teeth, ranging in age from 18-75 years, were studied. Teeth were fixed, cleaned and sectioned labiolingually up to a thickness of 1 mm. Sections were then mounted and attrition was graded from 0-3 according to Gustafson's method. Scores were subjected to a regression equation to estimate the age of an individual. Results of the present study revealed that this parameter is reliable in individuals of ≤ 60 years with an error of ±10 years. However, periodontal disease severely affected the accuracy of age estimation from this parameter, as is evident from the results. Statistically, no significant difference was noted in the absolute mean error of age in different age groups. No significant difference was observed in the absolute mean error of age between the sexes.
zahra Hooshyari
2013-04-01
Objective: The aim of the present study was to estimate the validity and reliability of the ASSIST instrument in Iran. Method: The research population comprised Iranian alcohol and drug users and abusers in the year 1390 who had been referred to rehabilitation camps and addiction treatment centers. A sample of 2600, average age 36.5, was selected by cluster random sampling in eight provinces. The ASSIST and a demographic form were administered to the whole sample group. In addition, for validity estimation, 300 members of the main sample were interviewed using the ASI, SDS, DAST and DSM-IV criteria. Findings: ASSIST reliability estimated by Cronbach's alpha for all domains was between 0.79 and 0.95. Data analyses showed fair criterion, construct, discriminant and multidimensional validity. Discriminative validity of the ASSIST was investigated by comparison of ASSIST scores across groups of dependence, abuse and use; there was significant agreement between these scores and DSM-IV scores. Construct validity of the ASSIST was investigated by statistical comparison with health scores. The ASSIST's cut-off points classify clients into 3 categories in terms of intensity of addiction. Conclusion: We recommend that researchers use this instrument for research and screening purposes or other situations in Iran.
Estimating Predictability Redundancy and Surrogate Data Method
Pecen, L
1995-01-01
A method for estimating theoretical predictability of time series is presented, based on information-theoretic functionals---redundancies and surrogate data technique. The redundancy, designed for a chosen model and a prediction horizon, evaluates amount of information between a model input (e.g., lagged versions of the series) and a model output (i.e., a series lagged by the prediction horizon from the model input) in number of bits. This value, however, is influenced by a method and precision of redundancy estimation and therefore it is a) normalized by maximum possible redundancy (given by the precision used), and b) compared to the redundancies obtained from two types of the surrogate data in order to obtain reliable classification of a series as either unpredictable or predictable. The type of predictability (linear or nonlinear) and its level can be further evaluated. The method is demonstrated using a numerically generated time series as well as high-frequency foreign exchange data and the theoretical ...
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
An Efficient Method for Reliability-based Multidisciplinary Design Optimization
Fan Hui; Li Weiji
2008-01-01
Design for modern engineering systems is becoming multidisciplinary and incorporates practical uncertainties; therefore, it is necessary to synthesize reliability analysis and multidisciplinary design optimization (MDO) techniques for the design of complex engineering systems. An advanced first-order second-moment-method-based concurrent subspace optimization approach is proposed based on the comparison and analysis of the existing multidisciplinary optimization techniques and reliability analysis methods. It is seen through a canard configuration optimization for a three-surface transport that the proposed method is computationally efficient and practical, with the least modification to the current deterministic optimization process.
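For the simplest case, a linear limit state g = R - S with independent normal resistance R and load S, the first-order second-moment step reduces to a closed-form reliability index. The means and standard deviations below are illustrative, not values from the paper.

```python
import math

def fosm_beta(mu_r, sigma_r, mu_s, sigma_s):
    """Reliability index beta for g = R - S with independent normal R, S."""
    mu_g = mu_r - mu_s
    sigma_g = math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    return mu_g / sigma_g

beta = fosm_beta(mu_r=100.0, sigma_r=10.0, mu_s=60.0, sigma_s=15.0)
# Failure probability from the standard normal CDF: Pf = Phi(-beta)
pf = 0.5 * math.erfc(beta / math.sqrt(2))
print(beta, pf)
```

For nonlinear limit states (as in the paper's approach), g is linearized at a design point and the same beta/Pf relation is applied iteratively.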
Recent advances in computational structural reliability analysis methods
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-01-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States
Sullivan, Michael [Nexant Inc., Burlington, MA (United States); Schellenberg, Josh [Nexant Inc., Burlington, MA (United States); Blundell, Marshall [Nexant Inc., Burlington, MA (United States)
2015-01-01
This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.
Reliably Detectable Flaw Size for NDE Methods that Use Calibration
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.
DAKOTA reliability methods applied to RAVEN/RELAP-7.
Swiler, Laura Painton; Mandelli, Diego; Rabiti, Cristian; Alfonsi, Andrea
2013-09-01
This report summarizes the result of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms which transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).
Coefficient Alpha as an Estimate of Test Reliability under Violation of Two Assumptions.
Zimmerman, Donald W.; And Others
1993-01-01
Coefficient alpha was examined through computer simulation as an estimate of test reliability under violation of two assumptions. Coefficient alpha underestimated reliability under violation of the assumption of essential tau-equivalence of subtest scores and overestimated it under violation of the assumption of uncorrelated subtest error scores.…
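The coefficient alpha examined in the simulation above can be reproduced on a toy example. Below is a minimal sketch of computing alpha from an item-score matrix; the scores for five respondents on three items are made up for illustration.

```python
def cronbach_alpha(scores):
    """Coefficient alpha for a matrix of scores: one row per respondent,
    one column per item. alpha = k/(k-1) * (1 - sum(item vars)/total var)."""
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [[3, 4, 3], [5, 5, 4], [2, 2, 3], [4, 5, 5], [1, 2, 2]]
print(round(cronbach_alpha(data), 3))  # → 0.945
```

Alpha equals the test's reliability only under essential tau-equivalence and uncorrelated errors, which is exactly what the study's simulations relax.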
Estimation of Internal Consistency Reliability When Test Parts Vary in Effective Length.
Feldt, Leonard S.; Charter, Richard A.
2003-01-01
Evaluating a test's reliability often requires dividing it into 3 or more unequal parts, which causes violation of the tau equivalence assumption of Cronbach's alpha. This article presents a criterion for abandoning alpha and an approach for computing a more appropriate estimate of reliability, the Gilmer-Feldt coefficient. (Author)
Khan, M.A.; Kerkhoff, Hans G.
2013-01-01
System dependability has become important for critical applications in recent years as technology is moving towards smaller dimensions. Achieving high dependability can be supported by reliability estimations during the operational life. In addition this requires a workflow for regularly monitoring
Selected Methods for Increasing the Reliability of Electronic Security Systems
Paś Jacek
2015-11-01
The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). Reliability of the SSP, in the descriptive sense, is the capacity to preserve the ability to implement preset functions (e.g. fire protection of an airport, a port, a logistics base, etc.) at a certain time and under certain conditions, e.g. environmental, despite possible non-compliance by a specific subset of elements of this system. An analysis of the available literature on ESS-SSP shows that studies on methods to increase reliability are not available (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion). The approach presented is based on the analysis of the set of all paths in the system that preserve the suitability of the SSP for the mentioned fire-event scenarios (elements critical for security).
A NEW TWO-POINT ADAPTIVE NONLINEAR APPROXIMATION METHOD FOR RELIABILITY ANALYSIS
Liu Shutian
2004-01-01
A two-point adaptive nonlinear approximation (referred to as TANA4) suitable for reliability analysis is proposed. Transformed and normalized random variables in probabilistic analysis could become negative and pose a challenge to the earlier developed two-point approximations; thus a suitable method that can address this issue is needed. In the method proposed, the nonlinearity indices of intervening variables are limited to integers. Then, on the basis of the present method, an improved sequential approximation of the limit state surface for reliability analysis is presented. With the gradient projection method, the data points for the limit state surface approximation are selected on the original limit state surface, which effectively represents the nature of the original response function. On the basis of this new approximation, the reliability is estimated using a first-order second-moment method. Various examples, including both structural and non-structural ones, are presented to show the effectiveness of the method proposed.
Reduced Expanding Load Method for Simulation-Based Structural System Reliability Analysis
远方; 宋丽娜; 方江生
2004-01-01
The current situation and difficulties of structural system reliability analysis are mentioned. On the basis of the Monte Carlo method and computer simulation, a new analysis method, the reduced expanding load method (RELM), is presented, which can be used to solve structural reliability problems effectively and conveniently. In this method, the uncertainties of loads, structural material properties and dimensions can be fully considered. If the statistical parameters of the stochastic variables are known, the probability of failure can be estimated rather accurately by this method. In contrast with traditional approaches, the RELM method gives a much better understanding of structural failure frequency, and its reliability index β is more meaningful. To illustrate this new idea, a specific example is given.
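The Monte Carlo simulation underlying such methods can be sketched as a crude failure-probability estimator for a limit state g = R - S, counting samples that fall in the failure region. The normal distributions and their parameters below are illustrative assumptions, not the paper's data.

```python
import random

def mc_failure_probability(n, seed=0):
    """Crude Monte Carlo estimate of Pf = P(R - S < 0) for illustrative
    normal resistance R and load S."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        r = rng.gauss(100.0, 10.0)   # resistance sample
        s = rng.gauss(60.0, 15.0)    # load sample
        if r - s < 0:                # limit state violated
            failures += 1
    return failures / n

pf = mc_failure_probability(200_000)
print(pf)  # roughly 0.013 for these illustrative distributions
```

The standard error of the estimate scales as sqrt(Pf(1-Pf)/n), which is why variance-reduction ideas such as the expanding-load construction are attractive for small failure probabilities.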
Reliability methods in nuclear power plant ageing management
Simola, K. [VTT Automation, Espoo (Finland). Industrial Automation]
1999-07-01
The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant-specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)
Variance estimation in neutron coincidence counting using the bootstrap method
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In the study, we demonstrate the implementation of the “bootstrap” method for reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is formed. The aim of the present study is to give a full description of the bootstrapping procedure and to validate, through experimental results, the reliability of the estimated variance. Results indicate both very good agreement between the measured variance and the variance obtained through the bootstrap method, and robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
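The re-sampling step described above can be sketched generically (this is the standard nonparametric bootstrap, not the NMC-specific implementation; the function name and defaults are illustrative):

```python
import random
import statistics

def bootstrap_variance(data, n_resamples=2000, stat=statistics.fmean, seed=1):
    """Nonparametric bootstrap: resample the data with replacement many
    times and return the variance of the resulting bootstrap
    distribution of the statistic `stat`."""
    rng = random.Random(seed)
    n = len(data)
    boot_stats = [
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    ]
    return statistics.pvariance(boot_stats)
```

For the sample mean, the bootstrap variance should approach the classical estimate s²/n, which offers a quick sanity check of the procedure.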
Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation
Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok;
2012-01-01
Wind power is ideally a renewable energy with no fuel cost, but has a risk to reduce reliability of the whole system because of uncertainty of the output. If the reserve of the system is increased, the reliability of the system may be improved. However, the cost would be increased. Therefore...... the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve, when wind power is connected to the power system. As a case study, when wind power is connected to power system of Korea, the effects...
Bouyssy, V.
1996-12-31
For tubular joints of offshore jacket structures, large discrepancies are observed between predicted and measured fatigue damages. The match between predictions and measurements is improved when one performs stochastic fatigue analyses. For a platform in the North Sea, however, it is found that stochastic fatigue life estimates still are inaccurate. The inaccuracy is due to uncertainties in the loading and local resistance and also in the calculation results - e.g. in the structural response and mean damage rate. By means of extensive numerical studies, it is shown how numerical uncertainties can be avoided in the calculation results. Further, it is explained that random fluctuations intrinsic in nature exist in the loading, local resistance and system properties. These random fluctuations can be accounted for in a probabilistic reliability analysis only. Then one computes the probability of a fatigue failure after a given service time instead of predicting a deterministic fatigue life. In the reliability analysis of offshore jacket structures usually only uncertainties in the loading and local resistance are taken into account. For dynamically excited jacket structures, however, stochastic analyses indicate that the influence of uncertainties in structural properties can be significant both with respect to extreme value failure and with respect to fatigue in some cases. A new numerical method is developed to estimate the reliability of offshore structures against both extreme and fatigue failures. The method allows to account for the random fluctuations in the loading, local resistance and structural properties. The suitability of the method to provide accurate estimates of failure probabilities in as few structural analyses as possible is investigated in two case studies representative for a number of offshore structures. (orig.) [Deutsch] Rohrverbindungen bei Meeresplattformen der Nordsee koennen durch Ermuedung versagen. Fuer bestehende Plattformen werden
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree-of-freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
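The core idea of shifting the sampling density toward the design point, which these Girsanov/FORM-based strategies exploit for dynamical systems, can be illustrated on a static toy problem (not the authors' controlled-dynamics scheme): estimating the small tail probability P(X > t) for standard normal X by sampling from N(t, 1) and reweighting with the likelihood ratio.

```python
import math
import random
from statistics import NormalDist

def importance_sampling_tail(t=3.0, n=50_000, seed=0):
    """Estimate p = P(X > t) for X ~ N(0, 1) by sampling from the
    shifted density N(t, 1) and reweighting each sample with the
    likelihood ratio phi(x) / phi(x - t) = exp(-t*x + t*t/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)
        if x > t:
            total += math.exp(-t * x + t * t / 2)
    return total / n

est = importance_sampling_tail()
exact = 1.0 - NormalDist().cdf(3.0)
```

Because roughly half the shifted samples land in the failure region, the variance of this estimator is far smaller than that of crude Monte Carlo at the same sample size.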
An adaptive neuro fuzzy model for estimating the reliability of component-based software systems
Kirti Tyagi
2014-01-01
Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.
Reliability-Based Shape Optimization using Stochastic Finite Element Methods
Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.
1991-01-01
Application of first-order reliability methods FORM (see Madsen, Krenk & Lind [8]) in structural design problems has attracted growing interest in recent years, see e.g. Frangopol [4], Murotsu, Kishi, Okada, Yonezawa & Taguchi [9] and Sørensen [14]. In probabilistically based optimal design...
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces results that are profound and far-reaching. Cybernetics is one often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Modelling application for cognitive reliability and error analysis method
Fabio De Felice
2013-10-01
The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the automatic factory model has led to partially automated configurations of production systems. In this scenario, therefore, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. Thus, the human operator is the core of a cognitive process that leads to decisions, influencing the safety of the whole system as a function of his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.
Evaluation of Information Requirements of Reliability Methods in Engineering Design
Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema
2010-01-01
This paper aims to characterize the information needed to perform methods for robustness and reliability, and to verify their applicability to early design stages. Several methods were evaluated regarding their support for synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected...... on their insight into design risks and widespread application. A pilot case study has been performed with a washing machine, using these methods to assess design risks following a reverse engineering approach. The study has shown the methods can be initiated at early design stages, but cannot be concluded......
Issues in benchmarking human reliability analysis methods : a literature review.
Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)
2008-04-01
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Wang, Guoyu; Houkes, Zweitze; Ji, Guangrong; Zheng, Bing; Li, Xin
2003-01-01
This paper presents a new algorithm for estimation-based range image segmentation. Aiming at surface-primitive extraction from range data, we focus on the reliability of the primitive representation in the process of region estimation. We introduce an optimal description of surface primitives, by wh
Han, Ming (韩明)
2013-01-01
Previously, the author introduced a new parameter estimation method, the E-Bayesian estimation method, to estimate the reliability derived from the binomial distribution: the definition of the E-Bayesian estimate of the reliability was given, and formulas for the E-Bayesian estimate and the hierarchical Bayesian estimate of the reliability were provided, but the properties of the E-Bayesian estimate were not given. In this paper, properties of the E-Bayesian estimation of the reliability of the binomial distribution are provided.
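The structure of an E-Bayesian estimate, a Bayes estimate averaged over a hyperprior on the prior's parameter, can be sketched numerically. A Beta(1, b) prior with a uniform hyperprior on b is one common configuration in this line of work, but the hyperprior bound c below is an illustrative choice, not a value from the paper:

```python
import math

def e_bayes_failure_prob(n, r, c=4.0, steps=10_000):
    """E-Bayesian estimate of a binomial failure probability p:
    the Bayes posterior mean (r + 1)/(n + 1 + b) under a Beta(1, b)
    prior, averaged over a uniform hyperprior b ~ U(1, c)."""
    h = (c - 1.0) / steps
    total = 0.0
    for i in range(steps):
        b = 1.0 + (i + 0.5) * h          # midpoint rule over [1, c]
        total += (r + 1.0) / (n + 1.0 + b)
    return total * h / (c - 1.0)

est = e_bayes_failure_prob(10, 0, c=4.0)
# Closed form of the same integral, for comparison:
exact = 1.0 * math.log((10 + 1 + 4.0) / (10 + 2.0)) / (4.0 - 1.0)
```

For zero-failure data (r = 0), the estimate stays strictly positive, which is the practical attraction of Bayesian-type estimates in this setting.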
Nobuyuki Okahashi
2014-05-01
13C metabolic flux analysis (MFA) is a tool of metabolic engineering for investigating in vivo flux distributions. A direct 13C enrichment analysis of intracellular free amino acids (FAAs) is expected to reduce the time needed for labeling experiments in MFA. Measurable FAAs may, however, vary among MFA experiments, since the pool sizes of intracellular free metabolites depend on cellular metabolic conditions. In this study, the minimal 13C enrichment data of FAAs required to perform FAA-based MFA was investigated. An examination of a continuous culture of Escherichia coli using 13C-labeled glucose showed that the time required to reach an isotopically steady state for FAAs is considerably shorter than that for the conventional method using proteinogenic amino acids (PAAs). Considering 95% confidence intervals, it was found that the metabolic flux distribution estimated using FAAs has a reliability similar to that of the PAA-based method. The comparative analysis identified glutamate, aspartate, alanine and phenylalanine as the common amino acids observed in E. coli under different culture conditions. The results of the MFA also demonstrated that the 13C enrichment data of these four amino acids are required for a reliable analysis of the flux distribution.
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Short, Michelle A; Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A
2017-03-01
To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test-retest reliability of sleep diary estimates of school night sleep across 12 weeks. Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test-retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights were sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test-retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. We recommend that at least five weekday nights of sleep diary entries be made when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks.
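Intraclass correlation coefficients of the kind reported above can be computed from ANOVA variance components. As a minimal sketch (a one-way random-effects ICC(1,1), simpler than the models such studies typically fit):

```python
import statistics

def icc_oneway(scores):
    """One-way random-effects intraclass correlation, ICC(1,1).
    `scores` is a list of subjects, each a list of k repeated
    measurements (e.g. k nights of diary-reported sleep duration)."""
    n = len(scores)
    k = len(scores[0])
    grand = statistics.fmean(v for row in scores for v in row)
    subj_means = [statistics.fmean(row) for row in scores]
    # between-subjects and within-subjects mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(scores, subj_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

When every subject repeats the same value, the within-subject mean square is zero and the ICC is exactly 1; noisier repeated measurements pull it toward 0.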
Reliability Analysis of Penetration Systems Using Nondeterministic Methods
FIELD JR.,RICHARD V.; PAEZ,THOMAS L.; RED-HORSE,JOHN R.
1999-10-27
Device penetration into media such as metal and soil is an application of some engineering interest. Often, these devices contain internal components and it is of paramount importance that all significant components survive the severe environment that accompanies the penetration event. In addition, the system must be robust to perturbations in its operating environment, some of which exhibit behavior which can only be quantified to within some level of uncertainty. In the analysis discussed herein, methods to address the reliability of internal components for a specific application system are discussed. The shock response spectrum (SRS) is utilized in conjunction with the Advanced Mean Value (AMV) and Response Surface methods to make probabilistic statements regarding the predicted reliability of internal components. Monte Carlo simulation methods are also explored.
Reliability and discriminatory power of methods for dental plaque quantification
Daniela Prócida Raggio
2010-04-01
OBJECTIVE: This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification and the relationship between visual indices (VI) and a fluorescence camera (FC) in detecting plaque. MATERIAL AND METHODS: Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using the VI. Images were obtained with the FC and a digital camera under both conditions. The area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare the different conditions of the samples and to assess inter-examiner reproducibility. RESULTS: Some methods presented adequate reproducibility. The Turesky index and the assessment of the area covered by disclosed plaque in the FC images presented the highest discriminatory power. CONCLUSION: The Turesky index and FC images with disclosing present good reliability and discriminatory power in quantifying dental plaque.
Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema
2015-01-01
Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method, based on eight developmental stages, was developed to determine maturity scores as a function of age, and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation using Demirjian's eight-teeth method, following the French maturity scores and an Indian-specific formula, from developmental stages of the third molar with the help of orthopantomograms. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and were evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. The statistical analyses used were the Chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence it can be applied for age estimation in the present Gujarati population. Also, females were ahead of males in achieving dental maturity; thus completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.
Nielsen, Søren R.K.; Sørensen, John Dalsgaard; Thoft-Christensen, Palle
1983-01-01
A method is presented for life-time reliability estimates of randomly excited yielding systems, assuming the structure to be safe when the plastic deformations are confined below certain limits. The accumulated plastic deformations during any single significant loading history are considered...... to be the outcome of identically distributed, independent stochastic variables, for which a model is suggested. Further assuming the interarrival times of the elementary loading histories to be specified by a Poisson process, and the duration of these to be small compared to the designed life-time, the accumulated...... plastic deformation during several loadings can be modelled as a filtered Poisson process. Using the Markov property of this quantity, the considered first-passage problem as well as the related extreme distribution problems are then solved numerically, and the results are compared to simulation studies....
Ali Abd Elhakam Aliabdo
2012-09-01
This study aims to investigate the relationships between Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus compressive strength (fc) of stones and bricks. Four types of rocks (marble, pink limestone, white limestone and basalt) and two types of bricks (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using other specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R2 value of 0.94. Estimation of the compressive strength of the studied stones and bricks using their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
Reliability-based design optimization via high order response surface method
Li, Hong Shuang [Nanjing University of Aeronautics and Astronautics, Nanjing (China)
2013-04-15
To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to reduce the computational effort further when design variables are distributional parameters of input random variables. The proposed methodology is applied on two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
Reliability analysis method for slope stability based on sample weight
Zhi-gang YANG
2009-09-01
The single safety factor criterion for slope stability evaluation, derived from the rigid limit equilibrium method or the finite element method (FEM), may not include some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of the random variables, the minimal sample size of every random variable is extracted according to a small-sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variable. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. According to the one-to-one mapping between the input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the analysis results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.
Estimating Between-Person and Within-Person Subscore Reliability with Profile Analysis.
Bulut, Okan; Davison, Mark L; Rodriguez, Michael C
2017-01-01
Subscores are of increasing interest in educational and psychological testing due to their diagnostic function for evaluating examinees' strengths and weaknesses within particular domains of knowledge. Previous studies about the utility of subscores have mostly focused on the overall reliability of individual subscores and ignored the fact that subscores should be distinct and have added value over the total score. This study introduces a profile reliability approach that partitions the overall subscore reliability into within-person and between-person subscore reliability. The estimation of between-person reliability and within-person reliability coefficients is demonstrated using subscores from number-correct scoring, unidimensional and multidimensional item response theory scoring, and augmented scoring approaches via a simulation study and a real data study. The effects of various testing conditions, such as subtest length, correlations among subscores, and the number of subtests, are examined. Results indicate that there is a substantial trade-off between within-person and between-person reliability of subscores. Profile reliability coefficients can be useful in determining the extent to which subscores provide distinct and reliable information under various testing conditions.
Evaluating maximum likelihood estimation methods to determine the hurst coefficients
Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.
1999-12-01
A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
Zhang, Feng; Lü, Zhenzhou; Cui, Lijie (张峰; 吕震宙; 崔利杰)
2011-01-01
A β hyper-plane based truncated importance sampling method can be used to estimate the reliability sensitivity for a single failure mode. The method constructs a virtual tangent plane to the failure surface at the design point (the β plane), which separates the variable space into an importance sampling region R and a non-importance sampling region S. Truncated importance sampling density functions hR(x) and hS(x) are established on R and S, respectively; the numbers of samples drawn from hR(x) and hS(x) are allocated according to the contributions of R and S to the reliability sensitivity, which are determined by iterative simulation. Formulas for the variance and the coefficient of variation of the reliability sensitivity estimate obtained with the β-plane truncated importance sampling method are derived, and the method is extended to parallel systems. Examples show that, when the relative error of the estimate is below 2% and the coefficients of variation of the reliability sensitivity estimates are kept the same, the β-plane truncated importance sampling method requires fewer samples than the traditional importance sampling method and the β hyper-sphere truncated importance sampling method.
Influence Factors on the Value of Reliability Estimators in Marketing Research
2011-01-01
This paper is a literature review, with a conclusion that leaves open many doors for future research. The first part reviews a series of characteristics of qualitative and quantitative research. The second part explains briefly the reliability and validity of instruments used in qualitative and quantitative marketing research. The third part of the paper reviews a series of articles on estimators of reliability, on their power, and on their strengths and weaknesses. The conclusions of the...
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.
ter Horst, Arjan C; Koppen, Mathieu; Selen, Luc P J; Medendorp, W Pieter
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
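The reliability-weighted combination rule described above has a simple closed form: each cue is weighted by its inverse variance. A minimal sketch, with hypothetical visual and vestibular displacement estimates as inputs:

```python
def fuse_cues(x_vis, var_vis, x_vest, var_vest):
    """Statistically optimal (minimum-variance) linear fusion of two
    cues: each estimate is weighted by its inverse variance, i.e. its
    reliability.  Returns the fused estimate and its reduced variance."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    w_vest = 1.0 - w_vis
    fused = w_vis * x_vis + w_vest * x_vest
    fused_var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return fused, fused_var
```

Lowering the visual coherence in the experiment corresponds to raising var_vis here, which shifts the fused estimate toward the vestibular cue, exactly the trial-to-trial reweighting the study reports.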
Aircraft Combat Survivability Estimation and Synthetic Tradeoff Methods
LI Shu-lin; LI Shou-an; LI Wei-ji; LI Dong-xia; FENG Feng
2005-01-01
A new concept is proposed: susceptibility, vulnerability, reliability, maintainability and supportability should be treated as essential factors of aircraft combat survivability. A weight coefficient method and a synthetic method are proposed to estimate aircraft combat survivability based on these essential factors. Considering that it takes cost to enhance aircraft combat survivability, a synthetic tradeoff model between aircraft combat survivability and life cycle cost is built. The aircraft combat survivability estimation methods and the synthetic tradeoff with a life cycle cost model will be helpful for aircraft combat survivability design and enhancement.
An Allocation Scheme for Estimating the Reliability of a Parallel-Series System
Zohra Benkamra
2012-01-01
We give a hybrid two-stage design which can be useful to estimate the reliability of a parallel-series and/or, by duality, a series-parallel system. When the components' reliabilities are unknown, one can estimate them by sample means of Bernoulli observations. Let T be the total number of observations allowed for the system. When T is fixed, we show that the variance of the system reliability estimate can be lowered by allocation of the sample size T at the components' level. This leads to a discrete optimization problem which can be solved sequentially, assuming T is large enough. First-order asymptotic optimality is proved systematically and validated through Monte Carlo simulation.
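The plug-in estimate underlying such designs is straightforward: replace each component reliability by its Bernoulli sample mean and propagate through the series/parallel structure. A sketch (assuming independent components; the function name is illustrative):

```python
def parallel_series_reliability(branches):
    """Reliability of a parallel-series system: `branches` is a list of
    parallel branches, each a list of (estimated) component
    reliabilities in series.  A branch works iff all its components
    work; the system works iff at least one branch works."""
    prob_all_branches_fail = 1.0
    for branch in branches:
        branch_rel = 1.0
        for p in branch:
            branch_rel *= p            # series: product of reliabilities
        prob_all_branches_fail *= (1.0 - branch_rel)
    return 1.0 - prob_all_branches_fail
```

Because the system estimate is a smooth function of the component sample means, its variance depends on how the total sample size T is split across components, which is exactly the allocation problem the paper optimizes.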
Toporkov, A A
2008-01-01
Classification of medical equipment according to the failure effects is given. Methods for increasing the reliability of medical equipment are considered. The problem of organization of the technical state monitoring, maintenance and metrological support of medical equipment is considered from the viewpoint of legislative control.
The Reliability of Electromyographic Normalization Methods for Cycling Analyses.
Sinclair, Jonathan; Taylor, Paul John; Hebron, Jack; Brooks, Darrell; Hurst, Howard Thomas; Atkins, Stephen
2015-06-27
Electromyography (EMG) is normalized in relation to a reference maximum voluntary contraction (MVC) value. Different normalization techniques are available but the most reliable method for cycling movements is unknown. This study investigated the reliability of different normalization techniques for cycling analyses. Twenty-five male cyclists (age 24.13 ± 2.79 years, body height 176.22 ± 4.87 cm and body mass 67.23 ± 4.19 kg, BMI = 21.70 ± 2.60 kg·m-2) performed different normalization procedures on two occasions, within the same testing session. The rectus femoris, biceps femoris, gastrocnemius and tibialis anterior muscles were examined. Participants performed isometric normalizations (IMVC) using an isokinetic dynamometer. Five minutes of submaximal cycling (180 W) were also undertaken, allowing the mean (DMA) and peak (PDA) activation from each muscle to serve as reference values. Finally, a 10 s cycling sprint (MxDA) trial was undertaken and the highest activation from each muscle was used as the reference value. Differences between reference EMG amplitude, as a function of normalization technique and time, were examined using repeated measures ANOVAs. The test-retest reliability of each technique was also examined using linear regression, intraclass correlations and Cronbach's alpha. The results showed that EMG amplitude differed significantly between normalization techniques for all muscles, with the IMVC and MxDA methods demonstrating the highest amplitudes. The highest levels of reliability were observed for the PDA technique for all muscles; therefore, our results support the utilization of this method for cycling analyses.
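The normalization step itself is a simple rescaling; a minimal sketch with hypothetical reference choices mirroring the PDA and DMA conventions above:

```python
import statistics

def normalize_emg(signal, method="pda"):
    """Normalize a rectified EMG envelope to a within-task reference:
    'pda' divides by the peak dynamic activation, 'dma' by the mean
    dynamic activation.  Returns amplitudes as % of the reference."""
    rectified = [abs(v) for v in signal]
    ref = max(rectified) if method == "pda" else statistics.fmean(rectified)
    return [100.0 * v / ref for v in rectified]
```

With a PDA reference the peak of the normalized signal is 100% by construction; with a DMA reference samples above the task mean exceed 100%.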
SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE
Ming HAN; Yuanyao DING
2004-01-01
This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and the estimates of failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters in exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
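The zero-failure setting described above can be sketched in a few lines. Under a Beta(1, b) prior, n tests with no failures give the Bayes estimate 1/(1 + b + n) for the failure probability; averaging that estimate over a hyperprior on b (here Uniform(0, c), an assumption for illustration, not the paper's exact construction) gives an expected-Bayesian-style estimate:

```python
import math

def bayes_zero_failure(n, b):
    """Bayes estimate of failure probability after n tests with no failures,
    under a Beta(1, b) prior: posterior is Beta(1, b + n), mean 1/(1 + b + n)."""
    return 1.0 / (1.0 + b + n)

def e_bayes_zero_failure(n, c, steps=100_000):
    """Average the Bayes estimate over a Uniform(0, c) hyperprior on b
    (midpoint rule); the closed form is log((n + c + 1) / (n + 1)) / c."""
    h = c / steps
    integral = sum(bayes_zero_failure(n, (i + 0.5) * h) for i in range(steps)) * h
    return integral / c
```

The averaging step smooths the estimate's dependence on the arbitrary hyperparameter b, which is the basic motivation for expected-Bayesian estimation.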
Methods of Estimating Strategic Intentions
1982-05-01
of events, coding categories. 2. Weighting Data: policy capturing, Bayesian methods, correlation and variance analysis. 3. Characterizing Data: memory aids, fuzzy sets, factor analysis. 4. Assessing Covariations: actuarial models, backcasting, bootstrapping. 5. Cause and Effect Assessment: cause search, causal analysis, search trees, stepping analysis, hypothesis testing, regression analysis. 6. Predictions: backcasting, bootstrapping, decision
A generic method for assignment of reliability scores applied to solvent accessibility predictions
Nielsen Morten
2009-07-01
Background: Estimation of the reliability of specific real value predictions is nontrivial and the efficacy of this is often questionable. It is important to know if you can trust a given prediction, and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability for each prediction, in the form of a Z-score. Results: An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures, where reliabilities are obtained by post-processing the output. Conclusion: The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0
Reliability analysis of road network for estimation of public evacuation time around NPPs
Bang, Sun-Young; Lee, Gab-Bock; Chung, Yang-Geun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)
2007-07-01
The strongest protective measure in radiation emergency preparedness is evacuation of the public when a great deal of radioactivity is released to the environment. After the Three Mile Island (TMI) nuclear power plant meltdown in the United States and the Chernobyl nuclear power plant disaster in the U.S.S.R., many advanced countries, including the United States and Japan, have continued research on estimation of public evacuation time as one of the emergency countermeasure technologies. In South Korea as well, the 'Framework Act on Civil Defense: Radioactive Disaster Preparedness Plan' was established in 1983, and nuclear power plants set up radiation emergency plans and have regularly carried out radiation emergency preparedness trainings. Nonetheless, there is still a need to improve the technology for estimating public evacuation time by executing precise analysis of traffic flow, so as to prepare practical and efficient ways to protect the public. In this research, the road network for the Wolsong and Kori NPPs was constructed with the CORSIM code and a reliability analysis of this road network was performed.
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Applicability of simplified human reliability analysis methods for severe accidents
Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)
2016-03-15
Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)
The Riso-Hudson Enneagram Type Indicator: Estimates of Reliability and Validity
Newgent, Rebecca A.; Parr, Patricia E.; Newman, Isadore; Higgins, Kristin K.
2004-01-01
This investigation was conducted to estimate the reliability and validity of scores on the Riso-Hudson Enneagram Type Indicator (D. R. Riso & R. Hudson, 1999a). Results of 287 participants were analyzed. Alpha suggests an adequate degree of internal consistency. Evidence provides mixed support for construct validity using correlational and…
Chaimowicz, F. (Flávio); A. Burdorf (Alex)
2015-01-01
textabstractBackground: The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia
Boermans, M.A.; Kattenberg, M.A.C.
2011-01-01
We show how to estimate a Cronbach's alpha reliability coefficient in Stata after running a principal component or factor analysis. Alpha evaluates to what extent items measure the same underlying content when the items are combined into a scale or used for latent variable. Stata allows for testing
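Cronbach's alpha itself is straightforward to compute outside Stata as well. A small NumPy sketch follows; the item data used in checking it are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha. `items`: 2-D array, rows = respondents, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Alpha approaches 1 as the items covary strongly relative to their individual variances, i.e. as they appear to measure the same underlying content.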
Procedures for reliable estimation of viral fitness from time-series data
Bonhoeffer, S.; Barbour, A.D.; Boer, R.J. de
2002-01-01
In order to develop a better understanding of the evolutionary dynamics of HIV drug resistance, it is necessary to quantify accurately the in vivo fitness costs of resistance mutations. However, the reliable estimation of such fitness costs is riddled with both theoretical and experimental difficulties.
Planning of operation & maintenance using risk and reliability based methods
Florian, Mihai; Sørensen, John Dalsgaard
2015-01-01
Operation and maintenance (OM) of offshore wind turbines contributes a substantial part of the total levelized cost of energy (LCOE). The objective of this paper is to present an application of risk- and reliability-based methods for planning of OM. The theoretical basis is presented and illustrated by an example, namely planning of inspections and maintenance of wind turbine blades. A life-cycle approach is used where the total expected cost in the remaining lifetime is minimized. This maintenance plan is continuously updated during the lifetime using information from previous inspections and from condition monitoring, with time intervals between inspections and maintenance/repair options as the decision parameters.
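The life-cycle idea of minimizing total expected cost over the decision parameters can be caricatured in a few lines; all numbers below (costs, failure probability, lifetime) are made-up placeholders, not values from the paper:

```python
# Choose the inspection interval (in years) that minimizes total expected
# life-cycle cost: frequent inspections cost more up front, but sparse
# inspections let damage go unnoticed longer and raise the failure risk.
LIFETIME = 20                   # years of remaining service life
C_INSP, C_FAIL = 10.0, 1000.0   # cost per inspection / cost of a failure
P_FAIL = 0.005                  # failure prob. per year of undetected damage

def expected_cost(interval):
    n_insp = LIFETIME // interval
    # Crude risk model: damage stays unnoticed for interval/2 on average.
    risk = min(LIFETIME * P_FAIL * interval / 2.0, 1.0)
    return n_insp * C_INSP + risk * C_FAIL

best_interval = min(range(1, LIFETIME + 1), key=expected_cost)
```

In a real plan the risk term would come from a degradation model updated with inspection and condition-monitoring data, but the trade-off being optimized is the same.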
A Method of Reliability Allocation of a Complicated Large System
WANG Zhi-sheng; QIN Yuan-yuan; WANG Dao-bo
2004-01-01
Aiming at the problem of reliability allocation for a complicated large system, a new approach is proposed. Reliability allocation should be a kind of decision-making behavior; therefore, the more information is used when apportioning a reliability index, the more reasonable the resulting allocation. Reliability allocation for a complicated large system consists of two processes: the first is a reliability information reporting process from bottom to top, and the other is a reliability index apportioning process from top to bottom. By a typical example, we illustrate the concrete process of the reliability allocation algorithms.
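A minimal sketch of the top-down apportioning step for a series system follows; the weighting scheme (splitting the log-reliability budget in proportion to subsystem complexity) is one common convention and is an assumption here, not the paper's specific algorithm:

```python
import math

def allocate_series(r_system, weights):
    """Apportion a series-system reliability target among subsystems.
    The total allowed log-failure budget (-ln R_system) is split in
    proportion to each subsystem's weight (e.g. relative complexity),
    so the product of the allocations equals the system target."""
    budget = -math.log(r_system)
    total = sum(weights)
    return [math.exp(-budget * w / total) for w in weights]

# e.g. a 0.90 system target split among three subsystems of growing complexity
targets = allocate_series(0.90, [1, 2, 3])
```

Heavier (more complex) subsystems receive a looser reliability requirement, while the product of all allocations still meets the system-level index.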
Hai An
2016-08-01
Aiming to resolve the problems of a variety of uncertainty variables that coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random-fuzzy-interval model, is proposed in this article, and a convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random-fuzzy-interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented and it is solved using the modified limit-step length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist, and also shows the good convergence of the iterative algorithm proposed in this article.
Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods
Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin;
2013-01-01
Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from the production trait evaluation of Nordic Red dairy cattle. Genotyped bulls with daughters are used as training animals, and genotyped bulls and producing cows as candidate animals. For simplicity, the size of the data is chosen so that the full inverses of the mixed model equation coefficient matrices can be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was...
A reliable method for detecting complexed DNA in vitro
Holladay, C.; Keeney, M.; Newland, B.; Mathew, A.; Wang, W.; Pandit, A.
2010-12-01
Quantification of eluted nucleic acids is a critical parameter in characterizing biomaterial based gene-delivery systems. The most commonly used method is to assay samples with an intercalating fluorescent dye such as PicoGreen®. However, this technique was developed for unbound DNA and the current trend in gene delivery is to condense DNA with transfection reagents, which interfere with intercalation. Here, for the first time, the DNA was permanently labeled with the fluorescent dye Cy5 prior to complexation, an alternative technique hypothesized to allow quantification of both bound and unbound DNA. A comparison of the two methods was performed by quantifying the elution of six different varieties of DNA complexes from a model biomaterial (collagen) scaffold. After seven days of elution, the PicoGreen® assay only allowed detection of three types of complexes (those formed using Lipofectin™ and two synthesised copolymers). However, the Cy5 fluorescent labeling technique enabled detection of all six varieties including those formed via common transfection agents poly(ethylene imine), poly-l-lysine and SuperFect™. This allowed reliable quantification of the elution of all these complexes from the collagen scaffold. Thus, while intercalating dyes may be effective and reliable for detecting double-stranded, unbound DNA, the technique described in this work allowed reliable quantification of DNA independent of complexation state.
Age Estimation Methods in Forensic Odontology
Phuwadon Duangto
2016-12-01
Forensically, age estimation is a crucial step for biological identification. Currently, there are many methods with variable accuracy to predict the age of dead or living persons, such as a physical examination, radiographs of the left hand, and dental assessment. Age estimation using radiographic tooth development has been found to be an accurate method because it is mainly genetically influenced and less affected by nutritional and environmental factors. The Demirjian et al. method has long been the most commonly used for dental age estimation using radiological techniques in many populations. This method, based on tooth developmental changes, is an easy-to-apply method since the different stages of tooth development are clearly defined. The aim of this article is to elaborate age estimation by using tooth development with a focus on the Demirjian et al. method.
Cuypers, Koen; Thijs, Herbert; Meesen, Raf L J
2014-01-01
The goal of this study was to optimize the transcranial magnetic stimulation (TMS) protocol for acquiring a reliable estimate of corticospinal excitability (CSE) using single-pulse TMS. Moreover, the minimal number of stimuli required to obtain a reliable estimate of CSE was investigated. In addition, the effect of two frequently used stimulation intensities [110% relative to the resting motor threshold (rMT) and 120% rMT] and gender was evaluated. Thirty-six healthy young subjects (18 males and 18 females) participated in a double-blind crossover procedure. They received 2 blocks of 40 consecutive TMS stimuli at either 110% rMT or 120% rMT in a randomized order. Based upon our data, we advise that at least 30 consecutive stimuli are required to obtain the most reliable estimate for CSE. Stimulation intensity and gender had no significant influence on CSE estimation. In addition, our results revealed that for subjects with a higher rMT, fewer consecutive stimuli were required to reach a stable estimate of CSE. The current findings can be used to optimize the design of similar TMS experiments.
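The "minimum number of stimuli for a stable estimate" question can be operationalized as a cumulative-mean stabilization check. The sketch below is one plausible reading of that idea, not the authors' exact analysis; the 5% tolerance and the amplitude values in the check are assumptions:

```python
import numpy as np

def stimuli_needed(amplitudes, tol=0.05):
    """Smallest number of consecutive stimuli after which the cumulative
    mean stays within `tol` (relative) of the full-sample mean for good."""
    x = np.asarray(amplitudes, dtype=float)
    cum = np.cumsum(x) / np.arange(1, len(x) + 1)   # running means
    final = cum[-1]
    ok = np.abs(cum - final) <= tol * abs(final)
    needed = len(x)
    for i in range(len(x) - 1, -1, -1):             # walk back while stable
        if not ok[i]:
            break
        needed = i + 1
    return needed
```

A sequence dominated by one early outlier needs many stimuli before the running mean settles, whereas a steady sequence stabilizes almost immediately.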
Ren, Yihui; Eubank, Stephen; Nath, Madhurima
2016-10-01
Network reliability is the probability that a dynamical system composed of discrete elements interacting on a network will be found in a configuration that satisfies a particular property. We introduce a reliability property, Ising feasibility, for which the network reliability is the Ising model's partition function. As shown by Moore and Shannon, the network reliability can be separated into two factors: structural, solely determined by the network topology, and dynamical, determined by the underlying dynamics. In this case, the structural factor is known as the joint density of states. Using methods developed to approximate the structural factor for other reliability properties, we simulate the joint density of states, yielding an approximation for the partition function. Based on a detailed examination of why naïve Monte Carlo sampling gives a poor approximation, we introduce a parallel scheme for estimating the joint density of states using a Markov-chain Monte Carlo method with a spin-exchange random walk. This parallel scheme makes simulating the Ising model in the presence of an external field practical on small computer clusters for networks with arbitrary topology with ~10^6 energy levels and more than 10^308 microstates.
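For a toy network, the separation into a density of states (structural factor) and Boltzmann weights (dynamical factor) can be checked by exact enumeration; the triangle graph and temperature below are illustrative choices:

```python
import itertools
import math
from collections import Counter

edges = [(0, 1), (1, 2), (0, 2)]   # toy network: a 3-node triangle
beta = 0.5                          # inverse temperature

def energy(spins):
    """Ising energy with unit couplings and no external field."""
    return -sum(spins[i] * spins[j] for i, j in edges)

states = list(itertools.product([-1, 1], repeat=3))

# Structural factor: the joint density of states g(E).
g = Counter(energy(s) for s in states)

# Partition function from g(E) ...
Z_dos = sum(cnt * math.exp(-beta * E) for E, cnt in g.items())
# ... and directly from the 2**N microstates; the two must agree.
Z_direct = sum(math.exp(-beta * energy(s)) for s in states)
```

The paper's contribution is, in effect, a scalable MCMC estimate of g(E) for networks where this brute-force enumeration is hopeless.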
Alaa F. Sheta
2016-04-01
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM's parameters with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM) and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.
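Fitting an SRGM reduces to minimizing the squared error between the model's predicted and observed cumulative failures. The sketch below fits the Exponential Model (EXPM), m(t) = a(1 − e^(−bt)), to hypothetical failure-count data; a brute-force grid search stands in for the Grey Wolf Optimizer, since both simply minimize the same objective:

```python
import math

# Hypothetical cumulative failure counts at the end of each test week
# (made-up data that roughly follows an exponential growth curve).
t_weeks = list(range(1, 11))
m_obs = [16, 30, 42, 51, 59, 66, 72, 76, 80, 83]

def sse(a, b):
    """Squared error between EXPM predictions m(t) = a(1 - exp(-b t)) and data."""
    return sum((m - a * (1.0 - math.exp(-b * ti))) ** 2
               for ti, m in zip(t_weeks, m_obs))

# Brute-force grid search over (a, b); a metaheuristic such as GWO would
# minimize exactly the same objective, just more cleverly.
best = min(((sse(a, j / 100.0), a, j / 100.0)
            for a in range(60, 201)
            for j in range(1, 101)), key=lambda v: v[0])
best_sse, a_hat, b_hat = best
```

Here a estimates the total number of faults eventually exposed and b the per-fault detection rate.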
Digital Forensics Analysis of Spectral Estimation Methods
Mataracioglu, Tolga
2011-01-01
Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. In today's world, it is widely used in order to secure information. In this paper, the traditional spectral estimation methods are introduced. The performance of each method is examined by comparing all of the spectral estimation methods. Finally, drawing on those performance analyses, a brief summary of the pros and cons of the spectral estimation methods is given. We also give a steganography demo by hiding information in a sound signal and manage to extract the information (i.e., the true frequency of the information signal) from the sound by means of the spectral estimation methods.
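The demo described (recovering a tone's frequency from a signal) is easy to reproduce with the simplest spectral estimator, the periodogram; the sampling rate and tone frequency below are arbitrary choices:

```python
import numpy as np

fs = 1000.0                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)   # 1 s of samples
f_true = 123.0                    # the "hidden" tone frequency
x = np.sin(2 * np.pi * f_true * t)

# Periodogram: squared magnitude of the real FFT.
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
f_est = freqs[np.argmax(spec)]    # frequency at the spectral peak
```

With a 1 s window the frequency bins are 1 Hz apart, so the peak lands exactly on the hidden tone; parametric estimators (AR/maximum entropy) earn their keep when the record is too short for such clean bins.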
Investigation on Thermal Contact Conductance Based on Data Analysis Method of Reliability
WANG Zongren; YANG Jun; YANG Mingyuan; ZHANG Weifang
2012-01-01
The method of reliability is proposed for the investigation of thermal contact conductance (TCC) in this study. A new definition is introduced, namely reliability thermal contact conductance (RTCC), which is defined as the TCC value that meets the reliability design requirement of the structural materials under consideration. An experimental apparatus with a compensation heater to test the TCC is introduced here. A practical engineering example is utilized to demonstrate the applicability of the proposed approach. By using a statistical regression model along with experimental data obtained from the interfaces of the structural materials GH4169 and K417 used in aero-engines, the estimated values and the confidence level of TCC and RTCC values are studied and compared. The results show that the testing values of TCC increase with interface pressure and that the proposed RTCC model matches the test results better at high interface pressure.
Estimation and enhancement of real-time software reliability through mutation analysis
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
Ways to increase the reliability of earthquake loss estimations in emergency mode
Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander
2016-04-01
The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many others show that authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can yield significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough and rapid information provided by "global systems" (i.e. systems operated without regard to where the earthquake has occurred) in emergency mode depends strongly on many factors related to the input data and simulation models used in such systems. The paper analyses the contribution of different factors to the total "error" of fatality estimation in emergency mode. Examples of four strong events in Nepal, Italy, China, and Italy allowed us to conclude that the reliability of loss estimations is influenced first of all by the uncertainties in determining event parameters (coordinates, magnitude, source depth); this group of factors is rated highest, with a degree of influence on the reliability of loss estimations of about 50%. The second place is taken by the group of factors responsible for macroseismic field simulation; the degree of influence of this group's errors is about 30%. The last place is taken by the group of factors describing the built environment distribution and regional vulnerability functions; this group contributes about 20% to the error of loss estimation. Ways to minimize the influence of different factors on the reliability of loss assessment in near real time are proposed. The first is to determine the rating of seismological surveys for different zones in an attempt to decrease uncertainties in the determination of input earthquake parameters in emergency mode. The second is to "calibrate" the "global systems" drawing advantage
Reliability analysis of gravity dams by response surface method
Humar, Nina; Kryžanowski, Andrej; Brilly, Mitja; Schnabl, Simon
2013-04-01
A dam failure is one of the most important problems in the dam industry. Since the mechanical behavior of dams is usually a complex phenomenon, existing classical mathematical models are generally insufficient to adequately predict dam failure and thus the safety of dams. Therefore, numerical reliability methods are often used to model such complex mechanical phenomena. Thus, the main purpose of the present paper is to present the response surface method as a powerful mathematical tool used to study and foresee dam safety considering a set of collected monitoring data. The derived mathematical model is applied to a case study, the Moste dam, which is the highest concrete gravity dam in Slovenia. Based on the derived model, the ambient/state variables are correlated with the dam deformation in order to gain a forecasting tool able to define the critical thresholds for dam management.
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
A Maximum-Entropy Method for Estimating the Spectrum
无
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form S̃(ω) = (a/8)H̄²(2π)^(d+1)ω^(−(d+2))exp[−b(2π/ω)^n], by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions and is superior to the periodogram method, which is not suitable for processing comparatively short or intensely unsteady signals because of its tremendous boundary effect and some inherent defects of the FFT. The newly derived method for spectral estimation works fairly well even when the sample data sets are very short and unsteady, and the reliability and efficiency of this spectral estimator have been preliminarily proved.
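The derived spectral form can be evaluated directly. Setting dS/dω = 0 gives a peak frequency ω_p = 2π(bn/(d+2))^(1/n); the sketch below checks this against a numerical scan, with all parameter values chosen arbitrarily for illustration:

```python
import math

def me_spectrum(w, a=1.0, H=2.0, b=1.0, d=2.0, n=4.0):
    """The ME-derived spectral form; all parameter values are illustrative."""
    return (a / 8.0) * H ** 2 * (2.0 * math.pi) ** (d + 1) \
        * w ** (-(d + 2)) * math.exp(-b * (2.0 * math.pi / w) ** n)

# Analytic peak from dS/dw = 0: w_p = 2*pi * (b*n / (d + 2)) ** (1/n)
w_peak = 2.0 * math.pi * (1.0 * 4.0 / (2.0 + 2.0)) ** (1.0 / 4.0)

# Numerical check: scan a grid spanning 0.5..1.5 times the analytic peak.
grid = [w_peak * (0.5 + 0.001 * i) for i in range(1001)]
w_num = max(grid, key=me_spectrum)
```

The exponential cutoff suppresses the spectrum below the peak while the power law controls the high-frequency tail, which is what lets the form cover extreme sea states.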
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
Collection of methods for reliability and safety engineering
Fussell, J.B.; Rasmuson, D.M.; Wilson, J.R.; Burdick, G.R.; Zipperer, J.C.
1976-04-01
The document presented contains five reports each describing a method of reliability and safety engineering. Report I provides a conceptual framework for the study of component malfunctions during system evaluations. Report II provides methods for locating groups of critical component failures such that all the component failures in a given group can be caused to occur by the occurrence of a single separate event. These groups of component failures are called common cause candidates. Report III provides a method for acquiring and storing system-independent component failure logic information. The information stored is influenced by the concepts presented in Report I and also includes information useful in locating common cause candidates. Report IV puts forth methods for analyzing situations that involve systems which change character in a predetermined time sequence. These phased missions techniques are applicable to the hypothetical ''accident chains'' frequently analyzed for nuclear power plants. Report V presents a unified approach to cause-consequence analysis, a method of analysis useful during risk assessments. This approach, as developed by the Danish Atomic Energy Commission, is modified to reflect the format and symbology conventionally used for other types of analysis of nuclear reactor systems.
Advanced response surface method for mechanical reliability analysis
LÜ Zhen-zhou; ZHAO Jie; YUE Zhu-feng
2007-01-01
Based on the classical response surface method (RSM), a novel RSM using improved experimental points (EPs) is presented for reliability analysis. The presented method includes two novel points. One is the use of linear interpolation, by which the total EPs for determining the RS are selected to be closer to the actual failure surface; the other is the application of sequential linear interpolation to control the distance between the surrounding EPs and the center EP, by which the presented method ensures that the RS fits the actual failure surface in the region of maximum likelihood as the center EPs converge to the actual most probable point (MPP). Since the presented method increases the fitting precision of the RS to the actual failure surface in the vicinity of the MPP, which contributes most to the failure probability, the precision of the failure probability calculated from the RS is increased as well. Numerical examples illustrate the accuracy and efficiency of the presented method.
J. Gogoi
2012-01-01
This paper deals with the stress vs. strength problem incorporating multi-component systems, viz. standby redundancy. The models developed have been illustrated assuming that all the components in the system, for both stress and strength, are independent and follow different probability distributions, viz. Exponential, Gamma and Lindley. Four different conditions for stress and strength have been considered for this investigation. Under these assumptions the reliabilities of the system have been obtained with the help of the particular forms of the density functions of the n-standby system when all stress-strengths are random variables. The expressions for the marginal reliabilities R(1), R(2), R(3), etc. have been derived based on the stress-strength models. The corresponding system reliabilities Rn have then been computed numerically and presented in tabular form for different stress-strength distributions with different values of their parameters. Here n ≤ 3 is considered for estimating the system reliability R3.
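The core computation behind such stress-strength models, R = P(strength > stress), can be sketched by simple Monte Carlo sampling; this is a generic illustration, not the paper's closed-form derivation, and the exponential rates below are arbitrary:

```python
import random

def stress_strength_reliability(stress_draw, strength_draw, n=200_000, seed=1):
    """Monte Carlo estimate of R = P(strength > stress) for a single component,
    given two sampling functions that each take a random.Random instance."""
    rng = random.Random(seed)
    wins = sum(strength_draw(rng) > stress_draw(rng) for _ in range(n))
    return wins / n

# Example: exponential stress (rate 2) vs. exponential strength (rate 1).
# For independent exponentials, the analytic value is 2 / (2 + 1) = 2/3.
R = stress_strength_reliability(lambda r: r.expovariate(2.0),
                                lambda r: r.expovariate(1.0))
```

Swapping in Gamma or Lindley samplers for the two draw functions gives the other distributional cases the paper considers.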
On the reliable estimation of heat transfer coefficients for nanofluids in a microchannel
Irwansyah, Ridho; Cierpka, Christian; Kähler, Christian J.
2016-09-01
Nanofluids (a base fluid plus nanoparticles) can enhance the heat transfer coefficient h in comparison to the base fluid alone. This opens the door to the design of efficient cooling systems, for instance for microelectronic components. Since theoretical Nusselt number correlations for microchannels are not available, the direct method using an energy balance has to be applied to determine h. However, for low nanoparticle concentrations the absolute numbers are small and hard to measure. Therefore, the study examines the laminar convective heat transfer of Al2O3-water nanofluids in a square microchannel with a cross section of 0.5 × 0.5 mm² and a length of 30 mm under constant wall temperature. The Al2O3 nanoparticles have a diameter distribution of 30-60 nm. A sensitivity analysis with error propagation was performed to reduce the error and obtain a reliable heat transfer coefficient estimate. An enhancement of the heat transfer coefficient with increasing nanoparticle volume concentration was confirmed. Maximum enhancements of 6.9% and 21% were realized for 0.6% Al2O3-water and 1% Al2O3-water nanofluids, respectively.
Fatigue Reliability Assessment of Steel Member Using Probabilistic Stress-Life Method
Dae-Hung Kang
2012-01-01
The fatigue reliability of a steel member in a bridge is estimated by using the probabilistic stress-life method. The stress history of a member is defined as the loading block produced when a truck passes over the bridge, and the stress range frequency distribution of the stress history is obtained by a stress range frequency analysis. A probabilistic method is applied to the stress range frequency distribution, and the parameters of the probability distribution fitted to it are used in a numerical simulation. To obtain the probability of failure of a member under a loading block, Monte Carlo simulation is performed in conjunction with Miner's rule, the modified Miner's rule, and Haibach's rule for fatigue damage evaluation. Through these analysis procedures, we obtain a fatigue reliability evaluation method that can predict the number of loading blocks to failure and the residual fatigue life.
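A minimal sketch of the Monte Carlo/Miner's-rule step described above, under an assumed S-N curve N = C·S^(-m) and an assumed lognormal stress-range distribution (none of these constants come from the paper):

```python
import math
import random

def blocks_to_failure(C=1e12, m=3.0, cycles_per_block=1000, seed=7):
    """Monte Carlo Miner's-rule sketch: draw one loading block of stress
    ranges, accumulate damage d_i = 1/N_i with N_i = C * S_i**(-m), and
    report how many identical blocks are needed to reach D = 1 (failure)."""
    rng = random.Random(seed)
    damage = 0.0
    for _ in range(cycles_per_block):
        # Assumed stress-range distribution: lognormal around 50 MPa.
        s = rng.lognormvariate(math.log(50.0), 0.25)
        damage += s ** m / C  # Miner increment 1/N_i for this cycle
    return 1.0 / damage

blocks = blocks_to_failure()
```

Replacing the damage increment implements the modified Miner or Haibach variants; repeating the simulation over sampled distribution parameters yields the failure probability per block.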
Liu Xiqiang; Zhou Huilan; Li Hong; Gai Dianguang
2000-01-01
Based on the propagation characteristics of shear waves in anisotropic layers, the correlation among several splitting shear-wave identification methods has been studied. This paper puts forward a method for estimating splitting shear-wave phases and their reliability by using the assumption that the variances of the noise and of the useful signal data obey a normal distribution. To check the validity of the new method, the identification results and the error estimation corresponding to the 95% confidence level obtained by analyzing simulated signals are given.
才庆祥; 彭世济; 张达贤
1996-01-01
Subject to various stochastic factors, surface mining engineering reliability is difficult to evaluate with general mathematical reliability methods. The concept of reliability measurement is introduced, and the authors have combined the system simulation method with CAD techniques to develop an interactive color character-graphic design system for evaluating and solving the mining engineering reliability of surface mines under given constraints.
Nyman, R. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hegedus, D.; Tomic, B. [ENCONET Consulting GesmbH, Vienna (Austria); Lydell, B. [RSA Technologies, Vista, CA (United States)
1997-12-01
This report summarizes results and insights from the final phase of an R and D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R and D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide, covering the period 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogeneous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failure (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R and D by SKI produced an analysis framework in support of practical applications of service data in PSA. This multi-purpose framework, termed 'PFCA' (Pipe Failure Cause and Attribute), defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of prior and posterior pipe rupture probability distributions. The framework supports LOCA frequency estimation and steam line break frequency estimation, as well as the development of optimized in-service inspection strategies. 63 refs, 30 tabs, 22 figs.
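The prior-to-posterior updating of a pipe rupture probability mentioned above can be illustrated with a simple Beta-binomial sketch; the PFCA framework itself is far richer, and the counts below are hypothetical:

```python
def posterior_rupture_prob(ruptures, failures, a=1.0, b=1.0):
    """Beta-binomial sketch (not the report's full PFCA framework): start
    from a Beta(a, b) prior on P(rupture | failure) and update it with
    observed service data, returning the posterior mean."""
    return (a + ruptures) / (a + b + failures)

# Hypothetical service data: 2 ruptures among 100 recorded pipe failures,
# combined with a uniform Beta(1, 1) prior.
p_rupture = posterior_rupture_prob(2, 100)
```

The posterior mean (a + ruptures)/(a + b + failures) shows how sparse service data shifts a prior distribution, the mechanism behind the prior/posterior rupture probabilities the report describes.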
Kim, Ar Ryum; Jang, Inseok Jang; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejon (Korea, Republic of); Park, Jinkyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Jong Hyun [KEPCO, Ulsan (Korea, Republic of)
2015-05-15
The purpose of HRA implementation is 1) to achieve the human factors engineering (HFE) design goal of providing operator interfaces that will minimize personnel errors and 2) to conduct an integrated activity to support probabilistic risk assessment (PRA). For these purposes, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. In performing HRA, the conditions that influence human performance have been represented via several context factors called performance shaping factors (PSFs). PSFs are aspects of the human's individual characteristics, environment, organization, or task that specifically decrement or improve human performance, thus respectively increasing or decreasing the likelihood of human errors. Most HRA methods evaluate the weightings of PSFs by expert judgment, and explicit guidance for evaluating the weightings is not provided. It has been widely known that the performance of the human operator is one of the critical factors determining the safe operation of NPPs. HRA methods have been developed to identify the possibility and mechanisms of human errors. In performing HRA methods, the effect of PSFs which may increase or decrease human error should be investigated. However, the effects of PSFs have so far been estimated by expert judgment. Accordingly, in order to estimate the effects of PSFs objectively, a quantitative framework to estimate PSFs by using PSF profiles is introduced in this paper.
Methods for fast, reliable growth of Sn whiskers
Bozack, M. J.; Snipes, S. K.; Flowers, G. N.
2016-10-01
We report several methods to reliably grow dense fields of high-aspect-ratio tin whiskers for research purposes in a period of days to weeks. The techniques offer marked improvements over previous means of growing whiskers, which have struggled against the highly variable incubation period of tin whiskers and their slow growth rate. Control of the film stress is the key to fast-growing whiskers, owing to the fact that whisker incubation and growth are fundamentally a stress-relief phenomenon. The ability to grow high-density fields of whiskers (10³-10⁶/cm²) in a reasonable period of time (days to weeks) has accelerated progress in whisker growth studies and aided the development of whisker mitigation strategies.
A Route Confidence Evaluation Method for Reliable Hierarchical Text Categorization
Hatami, Nima; Armano, Giuliano
2012-01-01
Hierarchical Text Categorization (HTC) is becoming increasingly important with the rapidly growing amount of text data available on the World Wide Web. Among the different strategies proposed to cope with HTC, the Local Classifier per Node (LCN) approach attains good performance by mirroring the underlying class hierarchy while enforcing a top-down strategy in the testing step. However, the problem of embedding hierarchical information (parent-child relationships) to improve the performance of HTC systems still remains open. A confidence evaluation method for a selected route in the hierarchy is proposed to evaluate the reliability of the final candidate labels in an HTC system. In order to exploit the information embedded in the hierarchy, weight factors are used to account for the importance of each level. An acceptance/rejection strategy in the top-down decision-making process is proposed, which improves the overall categorization accuracy by rejecting a small percentage of samples, i.e., thos...
王鹭; 张利; 王学芝
2015-01-01
As the central component of rotating machinery, bearings are of crucial importance in condition-based maintenance, where performance reliability assessment and remaining useful lifetime prediction reduce maintenance cost and improve reliability. A prognostic algorithm to assess the reliability and forecast the remaining useful lifetime (RUL) of bearings was proposed, consisting of three phases. Online vibration and temperature signals of bearings in the normal state were measured during the manufacturing process, and the most useful time-dependent features of the vibration signals were extracted based on correlation analysis (feature selection step). Time series analysis based on a neural network, used as an identification model, predicted the features of the bearing vibration signals at any horizon (feature prediction step). Furthermore, a degradation factor was defined according to the features, and a proportional hazards model was generated to estimate the survival function and forecast the RUL of the bearing (RUL prediction step). The positive results show the plausibility and effectiveness of the proposed approach, which can facilitate bearing reliability estimation and RUL prediction.
Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.
Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A
2015-01-01
The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13%; SWA = 18 ± 18%). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE, similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.
Exploring valid and reliable assessment methods for care management education.
Gennissen, Lokke; Stammen, Lorette; Bueno-de-Mesquita, Jolien; Wieringa, Sietse; Busari, Jamiu
2016-07-04
Purpose It is assumed that the use of valid and reliable assessment methods can facilitate the development of medical residents' management and leadership competencies. To justify this assertion, the perceptions of an expert panel of health care leaders were explored on assessment methods used for evaluating care management (CM) development in Dutch residency programs. This paper aims to investigate how assessors and trainees value these methods and examine for any inherent benefits or shortcomings when they are applied in practice. Design/methodology/approach A Delphi survey was conducted among members of the platform for medical leadership in The Netherlands. This panel of experts was made up of clinical educators, practitioners and residents interested in CM education. Findings Of the respondents, 40 (55.6 per cent) and 31 (43 per cent) participated in the first and second rounds of the Delphi survey, respectively. The respondents agreed that assessment methods currently being used to measure residents' CM competencies were weak, though feasible for use in many residency programs. Multi-source feedback (MSF, 92.1 per cent), portfolio/e-portfolio (86.8 per cent) and knowledge testing (76.3 per cent) were identified as the most commonly known assessment methods with familiarity rates exceeding 75 per cent. Practical implications The findings suggested that an "assessment framework" comprising MSF, portfolios, individual process improvement projects or self-reflections and observations in clinical practice should be used to measure CM competencies in residents. Originality/value This study reaffirms the need for objective methods to assess CM skills in post-graduate medical education, as there was not a single assessment method that stood out as the best instrument.
Evaluation of non cyanide methods for hemoglobin estimation
Vinaya B Shah
2011-01-01
Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the disposal of large volumes of waste reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575), using Triton X-100 as lyser, and the alkaline-borax method, using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting-diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared across these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ± 0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation between the methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous and did not affect the reliability of the determinations, and the cost was less than that of the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe, quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated in hemoglobinometers.
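The Pearson correlation used to compare the methods can be sketched in a few lines; the paired readings below are hypothetical, not the study's 200-sample data:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient for two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired Hb readings (g/dl): reference HiCN vs. a non-cyanide method.
hicn = [3.1, 7.8, 10.2, 12.5, 14.9, 17.3, 19.8]
ahd = [3.0, 7.9, 10.0, 12.7, 14.8, 17.5, 19.6]
r = pearson_r(hicn, ahd)
```

A value of r near 1 across the clinically relevant 3-20 g/dl range is what supports interchangeability of the two methods.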
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin
2015-04-26
Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
Bahman Tarvirdizade
2014-01-01
We consider the estimation of stress-strength reliability based on lower record values when X and Y are independent but not identically distributed inverse Rayleigh random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper with some bootstrap intervals.
Dhatt, Sharmistha
2016-01-01
The reliability of kinetic parameters is crucial to understanding enzyme kinetics within cellular systems. The present study suggests a few cautions that need introspection when estimating parameters such as K_M, V_max and K_I using Lineweaver-Burk plots. The quality of IC_50 estimates also needs a thorough reinvestigation because of their direct link with the K_I and K_M values. Inhibition kinetics under both steady-state and non-steady-state conditions are studied, and the errors in the estimated parameters are compared against the actual values to settle the question of their adequacy.
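A minimal sketch of parameter estimation from a Lineweaver-Burk (double-reciprocal) plot, using synthetic noise-free Michaelis-Menten data; with real noisy data the cautions raised above apply, since the reciprocal transform amplifies error at low substrate concentrations:

```python
def lineweaver_burk(S, v):
    """Fit 1/v = (K_M/V_max)(1/S) + 1/V_max by least squares
    and return (V_max, K_M)."""
    x = [1.0 / s for s in S]
    y = [1.0 / vi for vi in v]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km

# Synthetic noise-free Michaelis-Menten data with V_max = 10, K_M = 2:
S = [0.5, 1, 2, 4, 8]
v = [10 * s / (2 + s) for s in S]
Vmax, Km = lineweaver_burk(S, v)
```

With exact data the fit recovers V_max = 10 and K_M = 2; adding even small noise to v, especially at the smallest S, distorts both estimates, which is the core of the paper's warning.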
Fang, Chih-Chiang; Yeh, Chun-Wu
2016-09-01
The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection. It provides helpful information to software developers and testers when undertaking software development and software quality control. However, the explanation of the variance estimation of software fault detection is not transparent in previous studies, and this influences the derivation of the confidence interval for the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data sets to show its flexibility.
A Group Contribution Method for Estimating Cetane and Octane Numbers
Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group
2016-07-28
Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is the estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in the types of functional groups they cover and in their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
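The classic linear group-contribution form (which the paper generalizes with an artificial neural network) can be sketched as follows; the contribution values are purely illustrative, not the study's fitted constants:

```python
# Hypothetical first-order group contributions to cetane number
# (illustrative values only, not fitted to any data set):
CONTRIB = {"CH3": 5.0, "CH2": 6.0, "CH": -3.0, "OH": -10.0}
BASE = 10.0  # assumed intercept

def cetane_estimate(groups):
    """Classic linear group-contribution form: CN ~ base + sum(n_g * c_g),
    where n_g counts occurrences of functional group g in the molecule."""
    return BASE + sum(n * CONTRIB[g] for g, n in groups.items())

# n-heptane decomposed into groups: 2 CH3 + 5 CH2
cn_heptane = cetane_estimate({"CH3": 2, "CH2": 5})
```

An ANN-based method replaces the fixed linear sum with a learned nonlinear function of the same group counts, which is what extends accuracy to larger and more complex molecules.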
From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?
Mattia Manica
2017-03-01
Background: Aedes albopictus is an aggressive invasive mosquito species that represents a serious health concern not only in tropical areas, but also in temperate regions due to its role as a vector of arboviruses. Estimates of mosquito biting rates are essential to account for vector-human contact in models aimed at predicting the risk of arbovirus autochthonous transmission and outbreaks, as well as nuisance thresholds useful for correct planning of mosquito control interventions. Methods targeting daytime and outdoor biting Ae. albopictus females (e.g., Human Landing Collection, HLC) are expensive and difficult to implement in large-scale schemes. Instead, egg collections by ovitraps are the most widely used routine approach for large-scale monitoring of the species. The aim of this work was to assess whether ovitrap data can be exploited to estimate numbers of adult biting Ae. albopictus females and whether the resulting relationship could be used to build risk models helpful for decision-makers in charge of planning mosquito-control activities in infested areas. Methods: Ovitrap collections and HLCs were carried out in hot-spots of Ae. albopictus abundance in Rome (Italy) along a whole reproductive season. The relationship between the two sets of data was assessed by generalized least squares analysis, taking into account meteorological parameters. Results: The mean number of mosquito females/person collected by HLC in 15′ (i.e., females/HLC) and the mean number of eggs/day were 18.9 ± 0.7 and 39.0 ± 2.0, respectively. The regression models found a significant positive relationship between the two sets of data and estimated an increase of one biting female/person every five additional eggs found in ovitraps. Both observed and fitted values indicated the presence of adults in the absence of eggs in ovitraps. Notably, wide confidence intervals of estimates of biting females based on eggs were observed. The patterns of exotic arbovirus outbreak
From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?
Manica, Mattia; Rosà, Roberto; della Torre, Alessandra
2017-01-01
Background Aedes albopictus is an aggressive invasive mosquito species that represents a serious health concern not only in tropical areas, but also in temperate regions due to its role as vector of arboviruses. Estimates of mosquito biting rates are essential to account for vector-human contact in models aimed to predict the risk of arbovirus autochthonous transmission and outbreaks, as well as nuisance thresholds useful for correct planning of mosquito control interventions. Methods targeting daytime and outdoor biting Ae. albopictus females (e.g., Human Landing Collection, HLC) are expensive and difficult to implement in large scale schemes. Instead, egg-collections by ovitraps are the most widely used routine approach for large-scale monitoring of the species. The aim of this work was to assess whether ovitrap data can be exploited to estimate numbers of adult biting Ae. albopictus females and whether the resulting relationship could be used to build risk models helpful for decision-makers in charge of planning of mosquito-control activities in infested areas. Method Ovitrap collections and HLCs were carried out in hot-spots of Ae. albopictus abundance in Rome (Italy) along a whole reproductive season. The relationship between the two sets of data was assessed by generalized least square analysis, taking into account meteorological parameters. Result The mean number of mosquito females/person collected by HLC in 15′ (i.e., females/HLC) and the mean number of eggs/day were 18.9 ± 0.7 and 39.0 ± 2.0, respectively. The regression models found a significant positive relationship between the two sets of data and estimated an increase of one biting female/person every five additional eggs found in ovitraps. Both observed and fitted values indicated presence of adults in the absence of eggs in ovitraps. Notably, wide confidence intervals of estimates of biting females based on eggs were observed. The patterns of exotic arbovirus outbreak probability obtained by
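The egg-to-biting-female regression can be sketched with ordinary least squares (the paper used generalized least squares with meteorological covariates); the data points below are hypothetical but consistent with the reported slope of roughly one biting female per five additional eggs:

```python
def fit_line(x, y):
    """Ordinary least squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical (eggs/day, females/HLC) pairs lying on a 0.2 slope:
eggs = [0, 10, 20, 40, 60, 80]
females = [11, 13, 15, 19, 23, 27]
a, b = fit_line(eggs, females)
```

Note the positive intercept: as in the paper, biting females can be predicted even when ovitraps contain no eggs, and the wide confidence intervals reported there mean such point predictions should be used cautiously.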
Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.
Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina
2015-01-01
Due to the prolonged use of wind turbines, they must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed, which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of the power produced at different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds.
Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis
Adela-Eliza Dumitrascu
2015-01-01
Due to the prolonged use of wind turbines, they must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed, which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of the power produced at different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds.
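The Monte Carlo step with triangular distributions described above can be sketched as follows; the minimum/most-likely/maximum daily outputs are assumed values, not the turbine's measured figures:

```python
import bisect
import random

def simulate_daily_energy(low, mode, high, n=100_000, seed=3):
    """Monte Carlo sketch: sample daily energy output (kWh) from an assumed
    triangular distribution and return the sorted draws together with an
    empirical cumulative distribution function (the 'ogive')."""
    rng = random.Random(seed)
    draws = sorted(rng.triangular(low, high, mode) for _ in range(n))

    def ecdf(x):
        return bisect.bisect_right(draws, x) / n  # P(energy <= x)

    return draws, ecdf

# Hypothetical minimum / most-likely / maximum daily outputs:
draws, ecdf = simulate_daily_energy(low=2.0, mode=6.0, high=16.0)
p_below_10 = ecdf(10.0)  # probability the daily output is at most 10 kWh
```

Reading the ogive at a target output gives the probability of achieving at most (or, by complement, at least) that daily energy, which is how the simulation results are interpreted in the abstract.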
MOMENT-METHOD ESTIMATION BASED ON CENSORED SAMPLE
NI Zhongxin; FEI Heliang
2005-01-01
In reliability theory and survival analysis, the problem of point estimation based on a censored sample has been discussed in many papers. However, most of them focus on the MLE, BLUE, etc.; little work has been done on moment-method estimation in the censored case. To make the method-of-moments estimation systematic and unified, this paper puts forward moment-method estimators (MEs) and modified moment-method estimators (MMEs) of the parameters based on type I and type II censored samples, involving the mean residual lifetime. Their strong consistency and other properties are proved. It is worth mentioning that, for the exponential distribution, the proposed moment-method estimators are exactly the MLEs. By a simulation study, in terms of bias and mean squared error, we show that the MEs and MMEs are better than the MLEs and the "pseudo complete sample" technique introduced in Whitten et al. (1988). The superiority of the MEs is especially conspicuous when the sample is heavily censored.
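For the exponential case noted above, where the moment-method estimator coincides with the MLE, the type II censored estimate of the mean lifetime reduces to the total time on test divided by the number of observed failures; a minimal sketch with made-up lifetimes:

```python
def exp_mean_type2(observed, n):
    """Type II censored exponential sample: given the r smallest lifetimes
    out of n units on test, the moment/maximum-likelihood estimator of the
    mean is the total time on test divided by r. Censored units are counted
    as surviving until the r-th (largest observed) failure time."""
    r = len(observed)
    x = sorted(observed)
    ttt = sum(x) + (n - r) * x[-1]  # total time on test
    return ttt / r

# Hypothetical test: 10 units, observation stopped at the 5th failure.
theta_hat = exp_mean_type2([0.5, 1.0, 1.5, 2.0, 2.5], n=10)
```

Here the total time on test is 7.5 + 5 × 2.5 = 20.0, giving a mean-lifetime estimate of 4.0; the (n - r) surviving units contribute heavily, which is why heavy censoring makes estimator choice matter.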
METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE
Татьяна Александровна Коркина
2013-08-01
Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people who carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to the estimation of an employee's professionalism, both from the position of the external manifestations of the reliability and effectiveness of the work and from the position of the personal characteristics of the employee that determine the results of his work. This approach includes the assessment of the level of qualification and motivation of the employee for each key job function, as well as the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to the estimation of an employee's professionalism allows identifying «bottlenecks» in the structure of his labour functions and defining directions for the development of the professional qualities of the worker to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11
阚英男; 杨兆军; 李国发; 何佳龙; 王彦鹍; 李洪洲
2016-01-01
A new problem that classical statistical methods are incapable of solving is reliability modeling and assessment when multiple numerical control machine tools (NCMTs) reveal zero failures after a reliability test. Thus, the zero-failure data form and a corresponding Bayesian model are developed to solve the zero-failure problem of NCMTs, for which no previous suitable statistical model has been developed. An expert-judgment process that incorporates prior information is presented to solve the difficulty of obtaining reliable prior distributions of the Weibull parameters. The equations for the posterior distribution of the parameter vector and the Markov chain Monte Carlo (MCMC) algorithm are derived to solve the difficulty of calculating high-dimensional integrals and to obtain parameter estimators. The proposed method is applied to a real case; a corresponding programming code and trick are developed to implement an MCMC simulation in WinBUGS, and a mean time between failures (MTBF) of 1057.9 h is obtained. Given its ability to combine expert judgment, prior information, and data, the proposed reliability modeling and assessment method under the zero failure of NCMTs is validated.
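A minimal sketch of Bayesian zero-failure assessment, using a conjugate Gamma-exponential model rather than the paper's Weibull/MCMC formulation in WinBUGS; the prior hyperparameters and test times are invented for illustration:

```python
import numpy as np

# Zero-failure data: each machine survives its test time with no failure.
test_hours = np.array([500.0, 800.0, 650.0, 700.0])  # illustrative values
total_time = test_hours.sum()
failures = 0

# Conjugate Gamma(a0, b0) prior on the exponential failure rate lambda,
# encoding expert judgment; the posterior is Gamma(a0 + failures,
# b0 + total_time), so zero failures still yield a proper posterior.
a0, b0 = 0.5, 1000.0
a_post, b_post = a0 + failures, b0 + total_time

# Posterior mean failure rate and the corresponding MTBF point estimate.
lam_mean = a_post / b_post
mtbf = 1.0 / lam_mean

# Monte Carlo draws from the posterior give a one-sided credible bound.
rng = np.random.default_rng(1)
lam_draws = rng.gamma(a_post, 1.0 / b_post, size=20_000)
mtbf_lower = 1.0 / np.quantile(lam_draws, 0.95)
```

The Weibull version in the paper has no conjugate posterior, which is why MCMC is needed there; the conjugate case above shows the same prior-plus-zero-failure logic in closed form.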
On the efficiency and reliability of cluster mass estimates based on member galaxies
Biviano, A; Diaferio, A; Dolag, K; Girardi, M; Murante, G
2006-01-01
We study the efficiency and reliability of cluster mass estimators that are based on the projected phase-space distribution of galaxies in a cluster region. To this aim, we analyse a data-set of 62 clusters extracted from a concordance LCDM cosmological hydrodynamical simulation. Galaxies (or Dark Matter particles) are first selected in cylinders of given radius (from 0.5 to 1.5 Mpc/h) and ~200 Mpc/h length. Cluster members are then identified by applying a suitable interloper removal algorithm. Two cluster mass estimators are considered: the virial mass estimator (Mvir), and a mass estimator (Msigma) based entirely on the cluster velocity dispersion estimate. Mvir overestimates the true mass by ~10%, and Msigma underestimates the true mass by ~15%, on average, for sample sizes of > 60 cluster members. For smaller sample sizes, the bias of the virial mass estimator substantially increases, while the Msigma estimator becomes essentially unbiased. The dispersion of both mass estimates increases by a factor ~2 a...
Power Network Parameter Estimation Method Based on Data Mining Technology
ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian
2008-01-01
The parameter values, which actually change with circumstances, weather, load level, etc., greatly affect the result of state estimation. A new parameter estimation method based on data mining technology is proposed. A clustering method is used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing techniques are applied to treat isolated points, missing data and noisy data in the samples for the classified groups. The measurement data belonging to each class are introduced into a linear regression equation in order to obtain the regression coefficients and the actual parameters by the least squares method. A practical system demonstrates the high correctness, reliability and strong practicability of the proposed method.
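The two-step idea (cluster the historical data, then fit each class by least squares) can be sketched on synthetic data; the regime parameters and the crude one-dimensional clustering below are illustrative, not the paper's SCADA pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "historical measurements": two operating regimes (e.g. load
# levels) governed by different line parameters. Values are invented.
x_low = rng.uniform(0, 1, 200)
y_low = 2.0 * x_low + 0.5 + rng.normal(0, 0.02, 200)
x_high = rng.uniform(2, 3, 200)
y_high = 3.0 * x_high - 1.0 + rng.normal(0, 0.02, 200)
x = np.concatenate([x_low, x_high])
y = np.concatenate([y_low, y_high])

# Step 1: crude one-dimensional k-means (two centers) to classify the data.
centers = np.array([0.5, 2.5])
for _ in range(10):
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    centers = np.array([x[labels == k].mean() for k in range(2)])

# Step 2: least-squares regression within each cluster recovers each
# regime's parameters (slope, intercept).
params = []
for k in range(2):
    mask = labels == k
    A = np.column_stack([x[mask], np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    params.append(coef)
```

Fitting one global regression to this mixed data would average the two regimes; clustering first is what makes the per-regime parameters recoverable.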
Thie, Johnson; Sriram, Prema; Klistorner, Alexander; Graham, Stuart L
2012-01-01
This paper describes a method to reliably estimate the latency of the multifocal visual evoked potential (mfVEP) and a classifier to automatically separate reliable mfVEP traces from noisy traces. We also investigated which mfVEP peaks have reproducible latency across recording sessions. The proposed method performs cross-correlation between mfVEP traces and second-order Gaussian wavelet kernels and measures the timing of the resulting peaks. These peak times, offset by the wavelet kernel's peak time, represent the mfVEP latency. The classifier algorithm performs an exhaustive series of leave-one-out classifications to find the champion mfVEP features that are most frequently selected to distinguish reliable traces from noisy traces. Monopolar mfVEP recording was performed on 10 subjects using the Accumap1™ system. A pattern-reversal protocol was used with 24 sectors and eccentricity up to 33°. A bipolar channel was recorded at the midline with electrodes placed above and below the inion. The largest mfVEP peak and the immediately preceding peak had the smallest latency variability across recording sessions, about ±2 ms. The optimal classifier selected three champion features, namely the signal-to-noise ratio, the signal's peak magnitude response from 5 to 15 Hz, and the peak-to-peak amplitude of the trace between 70 and 250 ms. The classifier algorithm can separate reliable and noisy traces with a high success rate, typically 93%.
Bayesian Inference Methods for Sparse Channel Estimation
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...
From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?
Manica, Mattia; Rosà, Roberto; Della Torre, Alessandra; Caputo, Beniamino
2017-01-01
Aedes albopictus is an aggressive invasive mosquito species that represents a serious health concern not only in tropical areas, but also in temperate regions due to its role as vector of arboviruses. Estimates of mosquito biting rates are essential to account for vector-human contact in models aimed to predict the risk of arbovirus autochthonous transmission and outbreaks, as well as nuisance thresholds useful for correct planning of mosquito control interventions. Methods targeting daytime and outdoor biting Ae. albopictus females (e.g., Human Landing Collection, HLC) are expensive and difficult to implement in large scale schemes. Instead, egg-collections by ovitraps are the most widely used routine approach for large-scale monitoring of the species. The aim of this work was to assess whether ovitrap data can be exploited to estimate numbers of adult biting Ae. albopictus females and whether the resulting relationship could be used to build risk models helpful for decision-makers in charge of planning of mosquito-control activities in infested areas. Ovitrap collections and HLCs were carried out in hot-spots of Ae. albopictus abundance in Rome (Italy) along a whole reproductive season. The relationship between the two sets of data was assessed by generalized least square analysis, taking into account meteorological parameters. The mean number of mosquito females/person collected by HLC in 15' (i.e., females/HLC) and the mean number of eggs/day were 18.9 ± 0.7 and 39.0 ± 2.0, respectively. The regression models found a significant positive relationship between the two sets of data and estimated an increase of one biting female/person every five additional eggs found in ovitraps. Both observed and fitted values indicated presence of adults in the absence of eggs in ovitraps. Notably, wide confidence intervals of estimates of biting females based on eggs were observed. The patterns of exotic arbovirus outbreak probability obtained by introducing these estimates
Methods for estimation loads transported by rivers
T. S. Smart
1999-01-01
Ten methods for estimating the loads of constituents in a river were tested using data from the River Don in North-East Scotland. By treating loads derived from flow and concentration data collected every 2 days as a truth to be predicted, the ten methods were assessed for use when concentration data are collected fortnightly or monthly by sub-sampling from the original data. Estimates of the coefficients of variation, bias and mean squared errors of the methods were compared; no method consistently outperformed all others, and different methods were appropriate for different constituents. The widely used interpolation methods can be improved upon substantially by modelling the relationship of concentration with flow or seasonality, but only if these relationships are strong enough.
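A toy comparison of two sub-sampling load estimators of the kind evaluated in the paper; the synthetic flow-concentration relationship and the specific estimator forms are assumptions for illustration, not the paper's ten methods:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily flow Q and concentration C for one year, with
# concentration increasing with flow, as for many constituents.
days = 365
Q = rng.lognormal(mean=1.0, sigma=0.5, size=days)
C = 0.5 + 0.3 * Q + rng.normal(0, 0.05, size=days)

true_load = np.sum(C * Q)  # "truth" from the full record (arbitrary units)

# Fortnightly sub-sampling of the concentration record, as in the study.
idx = np.arange(0, days, 14)

# Naive estimator: mean sampled flux scaled up to the full year.
naive = np.mean(C[idx] * Q[idx]) * days

# Flow-weighted (ratio) estimator: exploits the continuous flow record,
# one way of "modelling the relationship of concentration with flow".
ratio = np.mean(C[idx] * Q[idx]) / np.mean(Q[idx]) * np.mean(Q) * days
```

The ratio estimator uses all 365 flow values, so when concentration tracks flow it typically tightens the estimate relative to the naive one.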
RELAP5/MOD3.3 Best Estimate Analyses for Human Reliability Analysis
Andrej Prošek
2010-01-01
To estimate the success-criteria time windows of operator actions, a conservative approach was used in conventional probabilistic safety assessment (PSA). The current PSA standard recommends the use of best-estimate codes. The purpose of the study was to estimate the operator-action success-criteria time windows in scenarios in which the human actions supplement safety-system actuations, as needed for an updated human reliability analysis (HRA). For the calculations, the RELAP5/MOD3.3 best-estimate thermal-hydraulic computer code and a qualified RELAP5 input model representing a two-loop pressurized water reactor of Westinghouse type were used. The results of the deterministic safety analysis were examined to determine the latest time at which the operator action can be performed while still satisfying the safety criteria. The results showed that an uncertainty analysis of the realistic calculation is in general not needed for human reliability analysis when additional time is available and/or the event is not a significant contributor to the risk.
Lim, Jong Min; Lee, Byung Chai; Lee, Ik Jin [KAIST, Daejeon (Korea, Republic of)
2015-04-15
This study develops an efficient and accurate methodology for reliability-based design optimization (RBDO) by combining the most probable point (MPP)-based dimension reduction method (DRM), to enhance accuracy, with sequential optimization and reliability assessment (SORA), to enhance efficiency. In many studies, the first-order reliability method (FORM) has been utilized for RBDO due to its efficiency and simplicity. However, it might not be accurate enough for highly nonlinear performance functions. Therefore, the MPP-based DRM is introduced for accurate reliability assessment in this study. Even though the MPP-based DRM significantly improves the accuracy, additional computations for the moment-based integration are required. It is desirable to reduce the number of reliability analyses in the RBDO process. Since decoupled approaches such as SORA reduce the necessary reliability analyses considerably, DRM-based SORA is proposed in this study for accurate and efficient RBDO. Furthermore, convex linearization is introduced to approximate inactive probabilistic constraints, to additionally improve the efficiency. The efficiency and accuracy of the proposed method are verified through numerical examples.
Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian
2013-01-01
Reliability analysis of fiber-reinforced composite structures is a relatively unexplored field, and it is therefore expected that engineers and researchers trying to apply such an approach will meet certain challenges until more knowledge is accumulated. While doing the analyses included...... in the present paper, the authors have experienced some of the possible pitfalls on the way to complete a precise and robust reliability analysis for layered composites. Results showed that in order to obtain accurate reliability estimates it is necessary to account for the various failure modes described...... by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...
Milk urea analytical result reliability and its methodical possibilities in the Czech Republic
Jan Říha
2013-01-01
Control of milk urea concentration (MUC) can be used in the diagnosis of the energy–nitrogen metabolism of cows. There are several analytical methods for MUC estimation, and there are also discussions about the reliability of their results. The aim of this work was to obtain information for improving the reliability of MUC results. MUC and MUN (milk urea nitrogen) were investigated in 5 milk sample sets and in 7 calibration/comparison experiments. The positions of the reference and indirect methods were changed between experiments. The following analytical methods for MUC or MUN (in mg.100 ml−1) were used: a photometric method (PH, as reference), based on the paradimethylaminobenzaldehyde reaction; the Ureakvant method (UK, as reference), based on difference measurement of the electrical-conductivity change during ureolysis; the Chemspec method (CH, as reference), based on photometric measurement of the ammonia concentration after ureolysis; and a spectroscopic method in the mid-infrared range of the spectrum (FT–MIR), an indirect routine method. In all methodical combinations the correlation coefficients (r) varied from 0.8803 to 0.9943 (P −1) and comparable values of repeatability (from 0.65 to 1.83 mg.100 ml−1) as compared to the FT–MIR MUC or MUN methods (from 1.39 to 5.6 and from 0.76 to 1.92 mg.100 ml−1) in the performed experiments.
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Wang Yanqing
2014-04-01
During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne-platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerance method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. First, the estimation indexes are obtained, including the estimated interval, the estimation uncertainty, the estimated value, the estimation error and the estimation reliability. In addition, GBM is applied to estimating a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in test analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimation reliability is shown to be 100% at the given confidence level.
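The bootstrap half of the method can be sketched as below; this omits the grey-model (GM(1,1)) component of GBM, and the small amplitude sample is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Small sample of a vibration amplitude index (illustrative values).
sample = np.array([1.8, 2.1, 1.6, 2.4, 2.0, 1.9])

# Plain bootstrap: resample with replacement many times and collect the
# statistic of interest (here the mean) from each resample.
B = 5000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(B)
])

# Estimated value and 95% estimated interval from the bootstrap
# distribution, analogous to the indexes listed in the abstract.
estimate = boot_means.mean()
lower, upper = np.quantile(boot_means, [0.025, 0.975])
```

The estimation reliability in the paper's sense is then the fraction of actual observations falling inside such intervals.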
A Reliability-Oriented Design Method for Power Electronic Converters
Wang, Huai; Zhou, Dao; Blaabjerg, Frede
2013-01-01
handbook) to the physics-of-failure approach and design for reliability process. A systematic design procedure consisting of various design tools is presented in this paper to design reliability into the power electronic converters since the early concept phase. The corresponding design procedures...
Computer System Reliability Allocation Method and Supporting Tool
无
2001-01-01
This paper presents a computer system reliability allocation method based on the theory of statistics and Markov chains, which can be used to allocate reliability to subsystems, to hybrid systems and to software modules. A relevant supporting tool built by us is introduced.
ErpICASSO: a tool for reliability estimates of independent components in EEG event-related analysis.
Artoni, Fiorenzo; Gemignani, Angelo; Sebastiani, Laura; Bedini, Remo; Landi, Alberto; Menicucci, Danilo
2012-01-01
Independent component analysis and blind source separation methods are steadily gaining popularity for separating individual brain and non-brain source signals mixed by volume conduction in electroencephalographic data. Despite the advancements in these techniques, determining the number of embedded sources and their reliability are still open issues. In particular, to date no method takes trial-to-trial variability into account in order to provide a reliability measure of independent components extracted in Event Related Potential (ERP) studies. In this work we present ErpICASSO, a new method which modifies a data-driven approach named ICASSO for the analysis of trials (epochs). In addition to ICASSO, the method enables the user to estimate the number of embedded sources, and provides a quality index of each extracted ERP component by combining trial-to-trial bootstrapping and CCA projection. We applied ErpICASSO to ERPs recorded from 14 subjects presented with unpleasant and neutral pictures. We separated potentials putatively related to different systems and identified the four primary ERP independent sources. Based on the confidence intervals estimated by ErpICASSO, we were able to compare the components between neutral and unpleasant conditions. ErpICASSO yielded encouraging results, thus providing the scientific community with a useful tool for ICA signal processing whenever dealing with trials recorded in different conditions.
Statistical Method of Estimating Nigerian Hydrocarbon Reserves
Jeffrey O. Oseh
2015-01-01
Hydrocarbon reserves are basic to planning and investment decisions in the petroleum industry; therefore their proper estimation is of considerable importance in oil and gas production. The estimation of hydrocarbon reserves in the Niger Delta region of Nigeria has been very popular, and very successful, in the Nigerian oil and gas industry for the past 50 years. In order to fully estimate the hydrocarbon potential of the Nigerian Niger Delta region, a clear understanding of the reservoir geology and production history is required. Reserves estimation of most fields is often performed through material balance and volumetric methods. Alternatively, a simple estimation model with least squares regression may be useful or appropriate. This model is based on extrapolation of additional reserves due to the exploratory drilling trend and on the additional-reserves factor due to revision of the existing fields. This estimation model, used alongside linear regression analysis in this study, gives improved estimates for the fields considered and hence can be used in other Nigerian fields with recent production history.
Parameter estimation methods for chaotic intercellular networks.
Mariño, Inés P; Ullner, Ekkehard; Zaikin, Alexey
2013-01-01
We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
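An ABC rejection sketch on a deliberately simple toy problem: the paper applies the same idea, with a synchronization error as the distance, to coupled chaotic repressilators, whereas here the model, prior range and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy model: data with one unknown location parameter mu.
true_mu = 2.5
data = rng.normal(true_mu, 1.0, size=200)
s_obs = data.mean()  # summary statistic of the observed data

def distance(mu):
    # Simulate from the candidate parameter and compare summaries;
    # this plays the role of the synchronization error in the paper.
    sim = rng.normal(mu, 1.0, size=200)
    return abs(sim.mean() - s_obs)

# ABC rejection: draw candidates from the prior, keep the closest 1%.
candidates = rng.uniform(0.0, 5.0, size=5000)
dists = np.array([distance(m) for m in candidates])
accepted = candidates[dists <= np.quantile(dists, 0.01)]

# Averaging over the accepted realizations gives the point estimate,
# mirroring the averaging step described in the abstract.
mu_hat = accepted.mean()
```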
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
Bearing Capacity Estimation of Bridge Piles Using the Impulse Transient Response Method
Meng Ma
2016-01-01
A bearing-capacity estimation method for bridge piles was developed. In this method, the pulse-echo test was used to select the intact piles, and the dynamic stiffness was obtained by the impulse transient response test. A total of 680 bridge piles were tested, and their capacities were estimated. Finally, core-drilling analysis was used to check the reliability of this method. The results show that, for intact piles, an obvious positive correlation exists between the dynamic stiffness and the bearing capacity of the piles. The core-drilling analysis proved that the estimation method was reliable.
Dalsgaard Soerensen, J. [Aalborg Univ., Aalborg (Denmark); Friis-Hansen, P. [Technical Univ. Denmark, Lyngby (Denmark); Bloch, A.; Svejgaard Nielsen, J. [Ramboell, Esbjerg (Denmark)
2004-08-01
Different simple stochastic models for failure related to pushover collapse are investigated. Next, a method is proposed to estimate the reliability of real offshore jacket structures. The method is based on the Model Correction Factor Method and can be used very efficiently to estimate the reliability for total failure/collapse of jacket-type platforms with wave-in-deck loads. A realistic example is evaluated, and it is seen that it is possible to perform a probabilistic reliability analysis for collapse of a jacket-type platform using the model correction factor method. The total number of deterministic, complicated, non-linear (RONJA) analyses is typically as low as 10. Such reliability analyses are recommended for practical applications, especially for cases with wave-in-deck loads, where the traditional RSR analyses give poor measures of the structural reliability.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed a strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
Prato, Carlo Giacomo; Rasmussen, Thomas Kjær; Nielsen, Otto Anker
2014-01-01
both congestion and reliability terms. Results illustrated that the value of time and the value of congestion were significantly higher in the peak period because of possible higher penalties for drivers being late and consequently possible higher time pressure. Moreover, results showed...... that the marginal rate of substitution between travel time reliability and total travel time did not vary across periods and traffic conditions, with the obvious caveat that the absolute values were significantly higher for the peak period. Last, results showed the immense potential of exploiting the growing...... availability of large amounts of data from cheap and enhanced technology to obtain estimates of the monetary value of different travel time components from the observation of actual behavior, with arguably potential significant impact on the realism of large-scale models....
How reliable are ligand-centric methods for Target Fishing?
Antonio ePeon
2016-04-01
Computational methods for Target Fishing (TF), also known as Target Prediction or Polypharmacology Prediction, can be used to discover new targets for small-molecule drugs. This may result in repositioning the drug in a new indication or improving our current understanding of its efficacy and side effects. While there is a substantial body of research on TF methods, there is still a need to improve their validation, which is often limited to a small part of the available targets and not easily interpretable by the user. Here we discuss how target-centric TF methods are inherently limited by the number of targets that they can possibly predict (this number is by construction much larger in ligand-centric techniques). We also propose a new benchmark to validate TF methods, which is particularly suited to analysing how predictive performance varies with the query molecule. On average over approved drugs, we estimate that only five predicted targets will have to be tested to find two true targets with submicromolar potency (a strong variability in performance is however observed). In addition, we find that an approved drug currently has an average of eight known targets, which reinforces the notion that polypharmacology is a common and strong event. Furthermore, with the assistance of a control group of randomly selected molecules, we show that the targets of approved drugs are generally harder to predict.
Contour Estimation by Array Processing Methods
Bourennane Salah
2006-01-01
This work is devoted to the estimation of rectilinear and distorted contours in images by high-resolution methods. In the case of rectilinear contours, it has been shown that it is possible to transpose this image processing problem to an array processing problem. The existing straight-line characterization method called subspace-based line detection (SLIDE) leads to models with orientations and offsets of straight lines as the desired parameters. First, a high-resolution array processing method yields the orientation of the lines. Second, their offset can be estimated by either the well-known extension of the Hough transform or another method, namely the variable speed propagation scheme, which belongs to the field of array processing applications. We associate it with the method called "modified forward-backward linear prediction" (MFBLP). The signal generation process devised for straight-line retrieval is retained for the case of distorted contour estimation. This issue is handled for the first time thanks to an inverse problem formulation and a phase model determination. The proposed method is initialized by means of the SLIDE algorithm.
Lifetime estimation methods in power transformer insulation
Mohammad Ali Taghikhani
2012-10-01
Mineral oil in a power transformer plays an important role in cooling, and it is subject to insulation aging and chemical reactions such as oxidation. Increases in oil temperature cause a loss of quality, so the oil should be checked regularly. Studies have been carried out on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and the other characteristics of power transformer oil. In this paper, the first method to estimate the life of power transformer insulation (oil) is based on the Arrhenius law. The Arrhenius law can quantify the loss of power transformer oil quality and estimate the remaining life. The second method studied to estimate the life of the power transformer is the prediction of paper-insulation life at a temperature of 160 °C.
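A sketch of Arrhenius-type insulation aging of the kind such methods rely on, using the IEC 60076-7 convention (reference hot-spot temperature of 98 °C, aging rate doubling every 6 K, for non-thermally-upgraded paper); the temperature history below is invented:

```python
# Relative aging rate per IEC 60076-7 for non-thermally-upgraded paper:
# V = 2 ** ((theta_hs - 98) / 6), i.e. the rate doubles every 6 K above
# the 98 °C reference hot-spot temperature.
def relative_aging_rate(theta_hs_c, theta_ref_c=98.0):
    return 2.0 ** ((theta_hs_c - theta_ref_c) / 6.0)

def consumed_life_hours(temps_c, hours_per_sample):
    # Aging-equivalent hours accumulated over a temperature history.
    return sum(relative_aging_rate(t) * hours_per_sample for t in temps_c)

# Example: 24 hourly hot-spot readings cycling around the reference.
temps = [92, 95, 98, 101, 104, 104, 101, 98] * 3
equiv_hours = consumed_life_hours(temps, hours_per_sample=1.0)
```

Comparing accumulated aging-equivalent hours with a rated insulation life gives the remaining-life estimate the abstract refers to.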
A comprehensive estimation method for enterprise capability
Tetiana Kuzhda
2015-11-01
In today’s highly competitive business world, the need for efficient enterprise capability management is greater than ever. As more enterprises begin to compete on a global scale, the effective use of enterprise capability will become imperative for them to improve their business activities. The definition of the socio-economic capability of the enterprise is given, and the main components of enterprise capability are pointed out. A comprehensive method to estimate enterprise capability that takes into account both social and economic components is offered, and a methodical approach concerning the integrated estimation of enterprise capability is developed. The novelty lies in the inclusion of a summary measure of the social component of enterprise capability to define the integrated index of enterprise capability. The practical significance of the methodological approach is that the method allows assessing enterprise capability comprehensively by combining two kinds of estimates, social and economic, and converting them into a single integrated indicator. It provides a comprehensive approach to the socio-economic estimation of enterprise capability, sets a formal basis for making decisions and helps allocate enterprise resources reasonably. Practical implementation of this method will reflect the current condition and trends of the enterprise and help to make forecasts and plans for its development and the efficient use of its capability.
Huang, Liping; Crino, Michelle; Wu, Jason HY; Woodward, Mark; Land, Mary-Anne; McLean, Rachael; Webster, Jacqui; Enkhtungalag, Batsaikhan; Nowson, Caryl A; Elliott, Paul; Cogswell, Mary; Toft, Ulla; Mill, Jose G.; Furlanetto, Tania W.; Ilich, Jasminka Z.
2016-01-01
Background Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. Objective The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effect...
Adaptive Methods for Permeability Estimation and Smart Well Management
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculation of derivatives are also discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
Rater reliability of fragile X mutation size estimates: A multilaboratory analysis
Fisch, G.S. [Kings County Hospital Center and SUNY/Health Science Center, Brooklyn, NY (United States); Carpenter, N. [Chapman Institute of Medical Genetics, Tulsa, OK (United States); Maddalena, A. [Medical College of Virginia, Richmond, VA (United States)]; and others
1996-08-09
Notwithstanding the use of comparable molecular protocols, description and measurement of the fra(X) (fragile X) mutation may vary according to its appearance as a discrete band, smear, multiple bands, or mosaic. Estimation of mutation size may also differ from one laboratory to another. We report on the description of a mutation size estimate for a large sample of individuals tested for the fra(X) pre- or full mutation. Of 63 DNA samples evaluated, 45 were identified previously as fra(X) pre- or full mutations. DNA from 18 unaffected individuals was used as control. Genomic DNA was extracted from peripheral blood, and DNA fragments from each of four laboratories were sent to a single center where Southern blots were prepared and hybridized with the pE5.1 probe. Photographs from autoradiographs were returned to each site, and raters blind to the identity of the specimens were asked to evaluate them. Raters' estimates of mutation size compared favorably with a reference test. Intrarater reliability was good to excellent. Variability in mutation size estimates was comparable across band types. Variability in estimates was moderate, and was significantly correlated with absolute mutation size and band type. 9 refs., 1 fig., 3 tabs.
Williamson, Laura D; Brookes, Kate L; Scott, Beth E; Graham, Isla M; Bradbury, Gareth; Hammond, Philip S; Thompson, Paul M; McPherson, Jana
2016-01-01
...‐based visual surveys. Surveys of cetaceans using acoustic loggers or digital cameras provide alternative methods to estimate relative density that have the potential to reduce cost and provide a verifiable record of all detections...
Online Reliable Peak Charge/Discharge Power Estimation of Series-Connected Lithium-Ion Battery Packs
Bo Jiang
2017-03-01
Full Text Available The accurate peak power estimation of a battery pack is essential to the power-train control of electric vehicles (EVs). It helps to evaluate the maximum charge and discharge capability of the battery system, and thus to optimally control the power-train system to meet the requirements of acceleration, gradient climbing and regenerative braking while achieving high energy efficiency. A novel online peak power estimation method for series-connected lithium-ion battery packs is proposed, which considers the influence of cell difference on the peak power of the battery packs. A new parameter identification algorithm based on adaptive ratio vectors is designed to identify online the parameters of each individual cell in a series-connected battery pack. The ratio vectors reflecting cell difference are deduced strictly from an analysis of battery characteristics. Based on the online parameter identification, a peak power estimation considering cell difference is further developed. Validation experiments in different battery aging conditions and with different current profiles have been implemented to verify the proposed method. The results indicate that the ratio-vector-based identification algorithm achieves the same accuracy as repetitive RLS (recursive least squares)-based identification while markedly reducing the computation cost, and that the proposed peak power estimation method is more effective and reliable for series-connected battery packs due to the consideration of cell difference.
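The per-cell identification builds on recursive least squares (RLS). The following is a generic RLS identifier with a forgetting factor, shown only to make the baseline concrete; it is not the paper's ratio-vector variant, and the regressor construction for a battery equivalent-circuit model is omitted.

```python
import numpy as np

def rls_identify(phis, ys, lam=0.99, delta=1000.0):
    """Recursive least squares with forgetting factor lam.
    phis: sequence of regressor vectors; ys: scalar measurements.
    Returns the final parameter estimate theta for y = phi . theta."""
    n = len(phis[0])
    theta = np.zeros(n)
    P = delta * np.eye(n)          # large initial covariance = weak prior
    for phi, y in zip(np.asarray(phis, float), ys):
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + k * (y - phi @ theta)   # innovation update
        P = (P - np.outer(k, phi @ P)) / lam    # covariance update
    return theta
```

On clean data the estimate converges to the true parameters; the ratio-vector scheme in the paper avoids repeating a full RLS of this kind for every cell in the pack.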
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
1995-01-01
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
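The classical Hill estimator that this method bootstraps can be sketched directly (a plain, non-bootstrap Python illustration, with an arbitrary choice of k). Since small interpoint distances obey P(|Xi − Xj| < r) ≈ C·r^α, the reciprocal distances are heavy-tailed with index α and the standard upper-tail Hill estimator applies.

```python
import numpy as np

def hill_tail_index(y, k):
    """Classical Hill estimator of the tail index from the k largest
    order statistics of a positive, heavy-tailed sample y."""
    y = np.sort(np.asarray(y, float))
    return 1.0 / np.mean(np.log(y[-k:]) - np.log(y[-k - 1]))

def correlation_exponent(x, k=200):
    """Correlation exponent of a scalar series via the Hill estimator
    applied to reciprocals of the pairwise distances."""
    x = np.asarray(x, float)
    d = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), 1)]
    d = d[d > 0]               # drop exact ties
    return hill_tail_index(1.0 / d, k)
```

For i.i.d. uniform data the small-distance exponent is 1, which the estimator recovers approximately; the Monte Carlo (bootstrap) version proposed in the paper builds on this estimator.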
Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)
2013-07-01
The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ≈50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than
Parameter estimation methods for chaotic intercellular networks.
Inés P Mariño
Full Text Available We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is less than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
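A minimal ABC rejection sampler illustrates the idea behind the two ABC-based schemes described above (the paper's MCMC and sequential Monte Carlo versions refine it): draw parameters from the prior, simulate, and keep the draws whose simulated output is closest to the data. Here the discrepancy is a plain distance standing in for the synchronization error; all names are illustrative.

```python
import numpy as np

def abc_rejection(simulate, distance, observed, prior_sampler,
                  n_samples=2000, accept_frac=0.05, rng=None):
    """Plain ABC rejection sampling: return the accept_frac fraction of
    prior draws whose simulated output lies closest to the observation."""
    if rng is None:
        rng = np.random.default_rng()
    draws = [prior_sampler(rng) for _ in range(n_samples)]
    dists = [distance(simulate(th, rng), observed) for th in draws]
    n_keep = max(1, int(accept_frac * n_samples))
    keep = np.argsort(dists)[:n_keep]
    return np.array([draws[i] for i in keep])
```

Averaging the accepted draws gives a point estimate, mirroring the averaging over realizations (or over the final population) described in the abstract.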
Methods for communication-network reliability analysis - Probabilistic graph reduction
Shooman, Andrew M.; Kershenbaum, Aaron
The authors have designed and implemented a graph-reduction algorithm for computing the k-terminal reliability of an arbitrary network with possibly unreliable nodes. The two contributions of the present work are a version of the delta-y transformation for k-terminal reliability and an extension of Satyanarayana and Wood's polygon-to-chain transformations to handle graphs with imperfect vertices. The exact algorithm is at least as fast as that of Satyanarayana and Wood, and as the simple algorithm without delta-y and polygon-to-chain transformations, for every problem considered. It runs in linear time on series-parallel graphs and is faster than the above algorithms on large problems, for which they run in exponential time. The approximate algorithms reduce the computation time for the network reliability problem by two to three orders of magnitude for large problems, while providing reasonably accurate answers in most cases.
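For intuition, the elementary series and parallel reduction rules, plus a brute-force reference computation, can be written down directly in Python. The enumeration is exponential in the number of edges, which is precisely the cost graph reduction avoids; the delta-y and polygon-to-chain transformations themselves are not reproduced here, and perfect nodes are assumed.

```python
from itertools import product

def series(p1, p2):
    """Series reduction: both links must work."""
    return p1 * p2

def parallel(p1, p2):
    """Parallel reduction: the pair fails only if both links fail."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def two_terminal_reliability(edges, probs, s, t):
    """Exact s-t reliability by enumerating all edge states
    (only practical for small graphs)."""
    def connected(state):
        adj = {}
        for (u, v), up in zip(edges, state):
            if up:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, stack = {s}, [s]
        while stack:
            for w in adj.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return t in seen
    rel = 0.0
    for state in product([True, False], repeat=len(edges)):
        weight = 1.0
        for up, p in zip(state, probs):
            weight *= p if up else 1.0 - p
        if connected(state):
            rel += weight
    return rel
```

On a series or parallel pair the enumeration agrees with the closed-form reductions, which is the consistency a reduction algorithm exploits across a whole graph.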
A simple method to evaluate the reliability of OWAS observations.
de Bruijn, I; Engels, J A; van der Gulden, J W
1998-08-01
Slides showing nurses in different working postures were used to determine the reliability of OWAS observations. Each slide could be looked at for 3 seconds, and a new slide was shown every 30 seconds, to resemble the normal practice of observation. Two observers each scored a series of slides twice, some slides being identical at both viewings. To reduce recall effects there was a time interval of 4 weeks or more between the two viewings, and the slides were shown in a different order the second time. Different series were used to evaluate inter- and intra-observer reliability. The OWAS scores of corresponding slides were compared. In almost all comparisons, percentages of agreement over 85% and kappas over 0.6 were found, which is considered good agreement. The procedure described seems to be a useful and simple technique for determining such reliability.
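The agreement statistics quoted above, percentage agreement plus kappa, can be computed as follows; this is the standard Cohen's kappa for two raters, which the abstract uses but does not spell out.

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal frequencies."""
    n = len(ratings1)
    p_obs = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_exp = sum(c1[c] / n * c2[c] / n for c in set(c1) | set(c2))
    if p_exp == 1.0:      # degenerate case: one category used throughout
        return 1.0
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Values above 0.6 are conventionally read as good agreement, matching the threshold used in the study.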
Performance improvement of a moment method for reliability analysis using kriging metamodels
Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
Many methods for reliability analysis have been studied; one of them, the moment method, has the advantage that it does not require sensitivities of performance functions. The moment method for reliability analysis requires the first four moments of a performance function, after which the Pearson system is used to compute the probability of failure, so the accuracy of the probability of failure depends greatly on that of the first four moments. It is generally impossible to assess them analytically for multidimensional functions, and numerical integration is mainly used to estimate the moments. However, numerical integration requires many function evaluations, and when finite element analyses are involved, the calculation of the first four moments is very time-consuming. To solve this problem, this research proposes a new method of approximating the first four moments based on a kriging metamodel. The proposed method substitutes the kriging metamodel for the performance function and can also evaluate the accuracy of the calculated moments by adjusting the approximation range. Numerical examples show that the proposed method can approximate the moments accurately with fewer function evaluations and can evaluate the accuracy of the calculated moments.
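The quantities the moment method starts from, the first four moments of a performance function, can be illustrated with plain Monte Carlo sampling. In the paper a kriging metamodel stands in for the performance function to cut the number of evaluations; in this sketch the function is called directly, purely for illustration.

```python
import numpy as np

def first_four_moments(g, sampler, n=50_000, rng=None):
    """Mean, standard deviation, skewness and kurtosis of g(X),
    with X drawn from sampler(rng, n). For a standard normal input and
    g(X) = X the result is approximately (0, 1, 0, 3)."""
    if rng is None:
        rng = np.random.default_rng()
    y = np.asarray([g(x) for x in sampler(rng, n)], float)
    mu = y.mean()
    sigma = y.std()
    z = (y - mu) / sigma
    return mu, sigma, (z**3).mean(), (z**4).mean()
```

These four numbers are what the Pearson system then maps to a fitted distribution and hence to a probability of failure.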
Point estimation of root finding methods
2008-01-01
This book sets out to state computationally verifiable initial conditions for predicting the guaranteed, fast convergence of iterative root finding methods. Attention is paid to iterative methods for the simultaneous determination of polynomial zeros in the spirit of Smale's point estimation theory, introduced in 1986. Some basic concepts and Smale's theory for Newton's method, together with its modifications and higher-order methods, are presented in the first two chapters. The remaining chapters contain the author's recent results on initial conditions guaranteeing convergence of a wide class of iterative methods for solving algebraic equations. These conditions are of practical interest since they depend only on available data: information about the function whose zeros are sought and the initial approximations. The convergence approach presented can be applied in designing a package for the simultaneous approximation of polynomial zeros.
The estimation method of GPS instrumental biases
Anonymous
2001-01-01
A model for estimating the global positioning system (GPS) instrumental biases and methods to calculate the relative instrumental biases of satellite and receiver are presented. The calculated results for the GPS instrumental biases, the relative instrumental biases of satellite and receiver, and the total electron content (TEC) are also shown. Finally, the stability of the GPS instrumental biases, as well as that of the satellite and receiver instrumental biases, is evaluated, indicating that they are very stable over a period of two and a half months.
Lifetime estimation methods in power transformer insulation
Mohammad Ali Taghikhani
2012-01-01
Mineral oil in the power transformer has an important role in cooling, insulation aging and chemical reactions such as oxidation. Increases in oil temperature cause quality loss, so the oil should be controlled regularly at the necessary intervals. Studies have been carried out on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and other characteristics of power transformer oil. In this paper the first method to estimate the life of p...
Semigroup Method for a Mathematical Model in Reliability Analysis
Geni Gupur; LI Xue-zhi
2001-01-01
The system consisting of a reliable machine, an unreliable machine and a storage buffer with infinitely many workpieces has been studied. The existence of a unique positive time-dependent solution of the model corresponding to the system has been obtained by using C0-semigroup theory of linear operators in functional analysis.
Simulation methods for reliability and availability of complex systems
Faulin Fajardo, Javier; Martorell Alsina, Sebastián Salvador; Ramirez-Marquez, Jose Emmanuel; Martorell, Sebastian
2010-01-01
This text discusses the use of computer simulation-based techniques and algorithms to determine reliability and/or availability levels in complex systems and to help improve these levels both at the design stage and during the system operating stage.
Wang, Z.; Lu, K.; Ye, Y.
2011-01-01
According to the saliency of the permanent magnet synchronous motor (PMSM), information about the rotor position is implied in the behaviour of the stator inductances due to the magnetic saturation effect. Research has focused on the initial rotor position estimation of the PMSM by injecting modulated pulse voltage vec.... The experimental results show that the proposed method estimates the initial rotor position reliably and efficiently. The method is also simple and achieves satisfactory estimation accuracy.
Advancing methods for global crop area estimation
King, M. L.; Hansen, M.; Adusei, B.; Stehman, S. V.; Becker-Reshef, I.; Ernst, C.; Noel, J.
2012-12-01
Cropland area estimation is a challenge, made difficult by the variety of cropping systems, including crop types, management practices, and field sizes. A MODIS derived indicator mapping product (1) developed from 16-day MODIS composites has been used to target crop type at national scales for the stratified sampling (2) of higher spatial resolution data for a standardized approach to estimate cultivated area. A global prototype is being developed using soybean, a global commodity crop with recent LCLUC dynamic and a relatively unambiguous spectral signature, for the United States, Argentina, Brazil, and China representing nearly ninety percent of soybean production. Supervised classification of soy cultivated area is performed for 40 km2 sample blocks using time-series, Landsat imagery. This method, given appropriate data for representative sampling with higher spatial resolution, represents an efficient and accurate approach for large area crop type estimation. Results for the United States sample blocks have exhibited strong agreement with the National Agricultural Statistics Service's (NASS's) Cropland Data Layer (CDL). A confusion matrix showed a 91.56% agreement and a kappa of .67 between the two products. Field measurements and RapidEye imagery have been collected for the USA, Brazil and Argentina in further assessing product accuracies. The results of this research will demonstrate the value of MODIS crop type indicator products and Landsat sample data in estimating soybean cultivated area at national scales, enabling an internally consistent global assessment of annual soybean production.
Numerical methods for reliability and safety assessment multiscale and multiphysics systems
Hami, Abdelkhalak
2015-01-01
This book offers unique insight on structural safety and reliability by combining computational methods that address multiphysics problems, involving multiple equations describing different physical phenomena, and multiscale problems, involving discrete sub-problems that together describe important aspects of a system at multiple scales. The book examines a range of engineering domains and problems using dynamic analysis, nonlinear methods, error estimation, finite element analysis, and other computational techniques. This book also introduces novel numerical methods, illustrates new practical applications, examines recent engineering applications, presents up-to-date theoretical results, and offers perspective relevant to a wide audience, including teaching faculty, graduate students, researchers, and practicing engineers.
Control and estimation methods over communication networks
Mahmoud, Magdi S
2014-01-01
This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: an overall assessment of recent and current fault-tolerant control algorithms; treatment of several issues arising at the junction of control and communications; key concepts followed by their proofs and efficient computational methods for their implementation; and simulation examples (including TrueTime simulations) to...
Methods for reliability based design optimization of structural components
Dersjö, Tomas
2012-01-01
Cost and quality are key properties of a product, possibly even the two most important. One definition of quality is fitness for purpose. Load-bearing products, i.e. structural components, lose their fitness for purpose if they fail. Thus, the ability to withstand failure is a fundamental measure of quality for structural components. Reliability based design optimization (RBDO) is an approach for development of structural components which aims to minimize the cost while constraining the probabili...
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri
2013-01-01
distribution functions of the CDOCE and Tmax as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase...
Kyeremateng, Samuel O; Pudlas, Marieke; Woehrle, Gerd H
2014-09-01
A novel empirical analytical approach for estimating the solubility of crystalline drugs in polymers has been developed. The approach utilizes a combination of differential scanning calorimetry measurements and a reliable mathematical algorithm to construct the complete solubility curve of a drug in a polymer. Compared with existing methods, this novel approach reduces the required experimentation time and amount of material by approximately 80%. The predictive power and relevance of such solubility curves in the development of amorphous solid dispersion (ASD) formulations are shown by application to a number of hot-melt extrudate formulations of ibuprofen and naproxen in Soluplus. On the basis of the temperature-drug load diagrams using the solubility curves and the glass transition temperatures, the physical stability of the extrudate formulations was predicted and checked by placing the formulations on real-time stability studies. An analysis of the stability samples with microscopy, thermal, and imaging techniques confirmed the predicted physical stability of the formulations. In conclusion, this study presents a fast and reliable approach for estimating the solubility of crystalline drugs in polymer matrixes. This powerful approach can be applied by formulation scientists as an early and convenient tool in designing ASD formulations for maximum drug load and physical stability.
Time-Dependent Reliability Modeling and Analysis Method for Mechanics Based on Convex Process
Lei Wang
2015-01-01
Full Text Available The objective of the present study is to evaluate the time-dependent reliability for dynamic mechanics with insufficient time-varying uncertainty information. In this paper, the nonprobabilistic convex process model, which contains autocorrelation and cross-correlation, is firstly employed for the quantitative assessment of the time-variant uncertainty in structural performance characteristics. By combination of the set-theory method and the regularization treatment, the time-varying properties of structural limit state are determined and a standard convex process with autocorrelation for describing the limit state is formulated. By virtue of the classical first-passage method in random process theory, a new nonprobabilistic measure index of time-dependent reliability is proposed and its solution strategy is mathematically conducted. Furthermore, the Monte-Carlo simulation method is also discussed to illustrate the feasibility and accuracy of the developed approach. Three engineering cases clearly demonstrate that the proposed method may provide a reasonable and more efficient way to estimate structural safety than Monte-Carlo simulations throughout a product life-cycle.
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
2017-07-15
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. Highlights: the distance-minimizing control forces minimize a bound on the sampling variance; Girsanov controls are established via the solution of a two-point boundary value problem; and Girsanov controls are obtained via the Volterra series representation of the transfer functions.
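A static analogue conveys the importance sampling idea without the Girsanov machinery: shift the sampling density to the design point (the FORM most probable point) and weight each sample by the exact likelihood ratio. The limit state and design point below are illustrative, not from the paper.

```python
import math
import numpy as np

def failure_prob_is(g, x_star, n=50_000, rng=None):
    """Estimate P(g(X) < 0) for X ~ N(0, I) by sampling N(x_star, I)
    and reweighting each sample with phi(x) / phi(x - x_star)."""
    if rng is None:
        rng = np.random.default_rng()
    xs = rng.normal(size=(n, len(x_star))) + x_star
    # log likelihood ratio: -0.5*||x||^2 + 0.5*||x - x_star||^2
    logw = -0.5 * (xs**2).sum(axis=1) + 0.5 * ((xs - x_star)**2).sum(axis=1)
    fail = np.array([g(x) < 0.0 for x in xs])
    return float(np.mean(np.exp(logw) * fail))
```

With g(x) = 3 − x₁ and design point (3, 0, …), roughly half the samples hit the failure region, versus about 0.13% under crude Monte Carlo; this variance reduction is what the distance-minimizing controls generalize to the dynamic setting.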
Issues and Methods for Assessing COTS Reliability, Maintainability, and Availability
Schneidewind, Norman F.; Nikora, Allen P.
1998-01-01
Many vendors produce products that are not domain specific (e.g., network server) and have limited functionality (e.g., mobile phone). In contrast, many customers of COTS develop systems that are domain specific (e.g., target tracking system) and have great variability in functionality (e.g., corporate information system). This discussion takes the viewpoint of how the customer can ensure the quality of COTS components. In evaluating the benefits and costs of using COTS, we must consider the environment in which COTS will operate. Thus we must distinguish between using a non-mission critical application like a spreadsheet program to produce a budget and a mission critical application like military strategic and tactical operations. Whereas customers will tolerate an occasional bug in the former, zero tolerance is the rule in the latter. We emphasize the latter because this is the arena where there are major unresolved problems in the application of COTS. Furthermore, COTS components may be embedded in the larger customer system. We refer to these as embedded systems. These components must be reliable, maintainable, and available, and must be compatible with the larger system in order for the customer to benefit from the advertised advantages of lower development and maintenance costs. Interestingly, when the claims of COTS advantages are closely examined, one finds that to a great extent these COTS components consist of hardware and office products, not mission critical software [1]. Obviously, COTS components are different from custom components with respect to one or more of the following attributes: source, development paradigm, safety, reliability, maintainability, availability, security, and other attributes. However, the important question is whether they should be treated differently when deciding to deploy them for operational use; we suggest the answer is no. We use reliability as an example to justify our answer. In order to demonstrate its reliability, a COTS
Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
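In the RIA, the reliability index is the distance from the origin to the MPP in standard normal space, typically found with the HL-RF iteration. A minimal sketch with a finite-difference gradient follows; a production implementation would add a convergence test and step-size control.

```python
import numpy as np

def hl_rf_reliability_index(g, n_dim, n_iter=50, eps=1e-6):
    """Hasofer-Lind reliability index via the HL-RF iteration:
    beta = ||u*|| where u* is the MPP on the limit state g(u) = 0
    in standard normal space."""
    u = np.zeros(n_dim)
    for _ in range(n_iter):
        g0 = g(u)
        grad = np.array([(g(u + eps * e) - g0) / eps for e in np.eye(n_dim)])
        # project onto the linearized limit state through the current point
        u = (grad @ u - g0) * grad / (grad @ grad)
    return float(np.linalg.norm(u)), u
```

For a linear limit state g(u) = 3 − u₁ the iteration converges in one step to beta = 3; the sensitivity equations in the paper are derivatives of exactly such optimal solutions.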
Reliability of the Suchey-Brooks method for a French contemporary population.
Savall, Frédéric; Rérolle, Camille; Hérin, Fabrice; Dédouit, Fabrice; Rougé, Daniel; Telmon, Norbert; Saint-Martin, Pauline
2016-09-01
The Suchey-Brooks method is commonly used for pubic symphyseal aging in forensic cases. However, inter-population variability is a problem affected by several factors such as geographical location and secular trends. The aim of our study was to test the reliability of the Suchey-Brooks method on a virtual sample of contemporary French males. We carried out a retrospective study of 680 pubic symphyses from adult males who underwent clinical multislice computed tomography in two hospitals between January 2013 and July 2014 (Toulouse and Tours, France). The reliability of the Suchey-Brooks method was tested by calculating the inaccuracy and bias between real and estimated ages, and the mean age for each stage and the mean stage for each 10-year age interval were compared. The degree of inaccuracy and bias increased with age, and inaccuracy exceeded 20 years for individuals over 65 years of age. The results are consistent with an overestimation of the real age for stages I and II and an underestimation of the real age for stages IV, V and VI. Furthermore, the mean stages of the reference sample were significantly lower for the 14-25 age group and significantly higher for individuals over 35 years old. Age estimation is potentially limited by differential inter-population error rates between geographical locations. Furthermore, the effects of secular trends are also supported by research in European countries showing a reduction in the age of attainment of indicators of biological maturity during the past few decades. The results suggest that the Suchey-Brooks method should be used with caution in France. Our study supports previous findings, and in the future the Suchey-Brooks method could benefit from a re-evaluation of the aging standards through the establishment of new virtual reference samples.
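The inaccuracy and bias statistics used in the study are simply the mean absolute and mean signed differences between estimated and real ages; a direct transcription:

```python
def inaccuracy_and_bias(real_ages, estimated_ages):
    """Inaccuracy: mean |estimated - real|.
    Bias: mean (estimated - real); positive values mean overestimation."""
    errors = [e - r for r, e in zip(real_ages, estimated_ages)]
    inaccuracy = sum(abs(d) for d in errors) / len(errors)
    bias = sum(errors) / len(errors)
    return inaccuracy, bias
```

A positive bias for young stages and a negative bias for old stages corresponds to the over/underestimation pattern reported above.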
Optimal reliability design method for remote solar systems
Suwapaet, Nuchida
A unique optimal reliability design algorithm is developed for remote communication systems. The algorithm deals with either minimizing the unavailability of the system within a fixed cost or minimizing the cost of the system under an unavailability constraint. The unavailability of the system is a function of three possible failure occurrences: individual component breakdown, solar energy deficiency (loss of load probability), and satellite/radio transmission loss. The three mathematical models of component failure, solar power failure, and transmission failure are combined and formulated as a nonlinear programming optimization problem with binary decision variables, such as the number and type (or size) of photovoltaic modules, batteries, radios, antennas, and controllers. The three possible failures are identified and integrated in a computer algorithm that generates the parameters for the optimization algorithm. The optimization algorithm is implemented with a branch-and-bound solution technique in MS Excel Solver. The algorithm is applied to a case study design for an actual system to be set up in remote mountainous areas of Peru. The automated algorithm is verified with independent calculations. The optimal results from the case of minimizing the unavailability of the system under a cost constraint and the case of minimizing the total cost of the system under an unavailability constraint are consistent with each other. The tradeoff feature in the algorithm allows designers to observe results of 'what-if' scenarios of relaxing constraint bounds, thus obtaining the most benefit from the optimization process. An example of this approach applied to an existing communication system in the Andes shows dramatic improvement in reliability for little increase in cost. The algorithm is a real design tool, unlike other existing simulation design tools. The algorithm should be useful for other stochastic systems where component reliability, random supply and demand, and communication are
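The cost-constrained unavailability minimization described in this abstract can be sketched in a few lines. The subsystem unavailabilities, unit costs, and budget below are illustrative assumptions, not the paper's data, and exhaustive enumeration stands in for the branch-and-bound search.

```python
from itertools import product

# Hedged sketch (not the paper's actual model): choose a redundancy level
# for each subsystem so that total cost stays within budget while system
# unavailability is minimized. Unit unavailabilities and costs are assumed.
units = {                     # (unavailability q, unit cost) -- illustrative
    "pv_module": (0.05, 300.0),
    "battery":   (0.02, 150.0),
    "radio":     (0.01, 500.0),
}
BUDGET = 3000.0

def system_unavailability(counts):
    # Series system of parallel groups: group i fails only if all of its
    # n_i redundant units fail (independent failures assumed).
    avail = 1.0
    for name, n in counts.items():
        q, _ = units[name]
        avail *= 1.0 - q ** n
    return 1.0 - avail

def total_cost(counts):
    return sum(units[k][1] * n for k, n in counts.items())

best = None
for ns in product(range(1, 5), repeat=len(units)):   # 1..4 units each
    counts = dict(zip(units, ns))
    if total_cost(counts) <= BUDGET:
        u = system_unavailability(counts)
        if best is None or u < best[0]:
            best = (u, counts)

print(best)
```

With real module counts the search space grows quickly, which is why the paper resorts to branch-and-bound rather than enumeration.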
Kapil Yadav
2015-01-01
Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine, developed by BioAnalyt GmbH to estimate the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, using 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and interassay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt was 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate, both in the field setting and for different types of salt.
Bruyn, George A W; Möller, Ingrid; Garrido, Jesus
2012-01-01
To assess the intra- and interobserver reliability of musculoskeletal ultrasonography (US) in detecting inflammatory and destructive tendon abnormalities in patients with RA using two different scanning methods....
Wenjuan ZHANG; Li CHEN; Ning QU; Hai'an LIANG
2006-01-01
Landslide is one kind of geologic hazard that happens all over the world. It brings huge losses to human life and property; therefore, it is very important to research it. This study focused on the combination of single and regional landslide analysis, and on combining the traditional slope stability analysis method with the reliability analysis method. Meanwhile, methods for the prediction of slopes and for reliability analysis were discussed.
Graham, James M.
2006-01-01
Coefficient alpha, the most commonly used estimate of internal consistency, is often considered a lower bound estimate of reliability, though the extent of its underestimation is not typically known. Many researchers are unaware that coefficient alpha is based on the essentially tau-equivalent measurement model. It is the violation of the…
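For reference, coefficient alpha itself is a one-line computation; the respondent-by-item score matrix below is simulated, not data from the study.

```python
import numpy as np

# Sketch of coefficient alpha (Cronbach's alpha) on simulated data
# (rows = respondents, columns = items).
def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of summed scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                 # common true score
items = latent + 0.5 * rng.normal(size=(200, 4))   # four noisy parallel items
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

Because these simulated items are essentially tau-equivalent, alpha here comes close to the true reliability; with unequal item loadings it would underestimate it, which is the point the abstract makes.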
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
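The first-generation strict-clock estimate the abstract mentions is simple arithmetic; the rate and distance below are illustrative numbers, not from the review.

```python
# First-generation strict-clock calculation (illustrative numbers): with a
# constant substitution rate r per site per year on each lineage, a pairwise
# genetic distance d between two species implies a divergence time
#   t = d / (2 * r)
# since both lineages accumulate changes independently after the split.
def strict_clock_time(distance: float, rate: float) -> float:
    return distance / (2.0 * rate)

# e.g. 2% sequence divergence at 1e-9 substitutions/site/year
t = strict_clock_time(0.02, 1e-9)
print(t)   # divergence time in years (~10 Myr)
```

Later generations replace the single constant r with rate models that vary across branches, but the underlying distance-to-time conversion is the same.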
Lauritsen, Jakob; Gundgaard, Maria G; Mortensen, Mette S
2014-01-01
Estimates of glomerular filtration rate (eGFR) are widely used when administering nephrotoxic chemotherapy. No studies performed in oncology patients have shown whether eGFR can safely substitute a measured GFR (mGFR) based on a marker method. We aimed to assess the validity of four major formula...
Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)
2015-10-15
Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.
A genetic algorithm approach for assessing soil liquefaction potential based on reliability method
M H Bagheripour; I Shooshpasha; M Afzalirad
2012-02-01
Deterministic approaches are unable to account for variations in the soil's strength properties and earthquake loads, as well as sources of error in evaluations of liquefaction potential in sandy soils, which makes them questionable compared with reliability-based concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction to the factor of safety (FS). Therefore, the use of probabilistic approaches, and especially reliability analysis, is considered, since a complementary solution is needed to reach better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, combined with a genetic algorithm (GA) and its corresponding optimization techniques, has been used to calculate the reliability index and the probability of liquefaction. The use of GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed here, by which the liquefaction potential can be directly calculated based on the estimated probability of liquefaction, the cyclic stress ratio (CSR) and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% against the observational data. The validity of the proposed concept is examined through comparison of the results obtained by the new relation and those predicted by other investigators. A further advantage of the proposed relation is that it relates the probability of liquefaction and FS, and hence it offers the possibility of decision making based on the liquefaction risk together with the use of deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for evaluation of liquefaction. As an application, the city of Babolsar, which is located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests, in which the results of SPT are analysed.
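The paper's fitted relation is not reproduced in the abstract. The sketch below only shows the generic ingredients such work combines: mapping a reliability index to a probability of liquefaction via the standard normal CDF, plus a logistic-type probability-versus-FS curve whose coefficients here are placeholders, not the study's values.

```python
import math

# Generic building blocks for probabilistic liquefaction assessment.
def prob_from_beta(beta: float) -> float:
    """Probability of liquefaction from a reliability index: Phi(-beta)."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def pl_from_fs(fs: float, a: float = 1.0, b: float = 4.0) -> float:
    """Illustrative logistic-type PL-FS mapping; a, b are NOT the paper's
    fitted coefficients, just placeholders showing the functional shape."""
    return 1.0 / (1.0 + (fs / a) ** b)

print(prob_from_beta(2.0), pl_from_fs(1.2))
```

The shape is the important part: probability falls monotonically as FS (or beta) rises, which is what lets the paper tie FS-based practice to a liquefaction risk.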
Assessment and Improving Methods of Reliability Indices in Bakhtar Regional Electricity Company
Saeed Shahrezaei
2013-04-01
Reliability of a system is its ability to perform its expected duties in the future, i.e., the probability of satisfactory operation in carrying out predetermined duties. Failure data of power system elements are the main input for reliability assessment of the network. The goal of reliability assessment is to determine characteristic parameters from system history data; these parameters help to recognize weak points of the system. In other words, the goal of reliability assessment is improving operation and decreasing failures and power outages. This paper assesses the reliability indices of Bakhtar Regional Electricity Company up to 1393, along with improving methods and their effects on the reliability indices of this network. DIgSILENT Power Factory software is employed for simulation. Simulation results show the positive effect of the improving methods on the reliability indices of Bakhtar Regional Electricity Company.
Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der
2002-01-01
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals… are considered to be Gaussian. Conventional FORM analysis yields the linearization point of the idealized limit-state surface. A model correction factor is then introduced to push the idealized limit-state surface onto the actual limit-state surface. A few iterations yield a good approximation of the reliability… reliability method; Model correction factor method; Nataf field integration; Non-Gaussian random field; Random field integration; Structural reliability; Pile foundation reliability…
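As context for MCFM, the conventional FORM step it wraps can be sketched with the Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space. The limit-state function below is a toy linear case chosen so the exact answer is known; it is not the paper's random-field problem.

```python
import math

# Hedged sketch of the basic FORM iteration (Hasofer-Lind / Rackwitz-Fiessler)
# in standard normal space; tolerances and the limit state are illustrative.
def hlrf(g, u0, tol=1e-8, max_iter=100, h=1e-6):
    u = list(u0)
    for _ in range(max_iter):
        gu = g(u)
        grad = []
        for i in range(len(u)):
            up = list(u); up[i] += h
            grad.append((g(up) - gu) / h)        # forward-difference gradient
        norm2 = sum(a * a for a in grad)
        # HL-RF update: project onto the linearized limit-state surface
        factor = (sum(a * b for a, b in zip(grad, u)) - gu) / norm2
        u_new = [factor * a for a in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(a * a for a in u))       # reliability index
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)
    return beta, pf

# Linear limit state g(u) = 3 - u1 - u2 has the exact answer beta = 3/sqrt(2).
beta, pf = hlrf(lambda u: 3.0 - u[0] - u[1], [0.0, 0.0])
print(beta, pf)
```

MCFM then multiplies an idealized limit state of this kind by a correction factor so that the design point lands on the actual (random-field) limit-state surface.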
Reliability-based design optimization using a moment method and a kriging metamodel
Ju, Byeong Hyeon; Chai Lee, Byung
2008-05-01
Reliability-based design optimization (RBDO) has been used for optimizing engineering systems with uncertainties in design variables and system parameters. RBDO involves reliability analysis, which requires a large amount of computational effort, so it is important to select an efficient method for reliability analysis. Of the many methods for reliability analysis, a moment method, which is called the fourth moment method, is known to be less expensive for moderate size problems and requires neither iteration nor the computation of derivatives. Despite these advantages, previous research on RBDO has been mainly based on the first-order reliability method and relatively little attention has been paid to moment-based RBDO. This article considers difficulties in implementing the moment method into RBDO; they are solved using a kriging metamodel with an active constraint strategy. Three numerical examples are tested and the results show that the proposed method is efficient and accurate.
Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components
Kostandyan, Erik; Sørensen, John Dalsgaard
2013-01-01
configuration. The system reliability estimated by the proposed method is a conservative estimate. Application of the suggested method could be extended to reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between…
METHODS TO IMPROVE THE RELIABILITY OF CREDIT INSTITUTIONS
Yu. V. Mishin
2014-01-01
This article discusses the competitiveness of commercial banks in terms of the future growth of profitability and liquidity of credit institutions. To solve this problem, a multistage economic-mathematical model is proposed. It provides, first, determination of the possible volumes of raised and placed funds subject to a minimum level of liquidity and interest rate risk, as well as compliance with prudential regulations on capital adequacy and liquidity; and second, based on enumeration of accounting policy variants and a comprehensive reliability assessment, selection of the best banking plan. The paper shows a practical implementation of the proposed approach on the example of one of the largest Russian banks, JSC «Bank VTB».
WANG Shu-juan; SHA You-tao; ZHANG Hui; ZHAI Guo-fu
2007-01-01
Tolerance design, including tolerance analysis and distribution, is an important part of an electronic system's reliability design. The traditional design approach needs to construct a mathematical model of the actual circuit, which involves a large amount of work and lacks practicability. This paper discusses the basic theory of electronic systems' reliability tolerance design and presents a new design method based on EDA (Electronic Design Automation) software. This method has been validated through application research on the reliability tolerance design of a DC hybrid contactor's control circuit.
Flávio Chaimowicz
The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnosis. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.
Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method
XI Jia-mi; YANG Geng-she
2008-01-01
The advantages of an improved Monte-Carlo method and the feasibility of the proposed approach for reliability analysis of tunnel surrounding rock stability are discussed. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method considers the randomness of the related parameters and satisfies the correlation among them. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.
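The Monte-Carlo reliability computation described in this abstract reduces, in its simplest form, to counting limit-state violations over random samples. The normal strength/load model below is an illustrative stand-in for the surrounding-rock stability model, chosen so the exact answer is known.

```python
import random, math

# Crude Monte Carlo estimate of a failure probability; the limit state and
# distributions are assumed stand-ins, not the tunnel model from the paper.
random.seed(42)

N = 200_000
failures = 0
for _ in range(N):
    strength = random.gauss(10.0, 1.5)   # resistance R ~ N(10, 1.5), assumed
    load = random.gauss(6.0, 2.0)        # load effect S ~ N(6, 2), assumed
    if strength - load <= 0.0:           # limit state g = R - S
        failures += 1

pf = failures / N
# For this linear Gaussian case the exact answer is Phi(-beta), beta = 1.6,
# which lets the sampling estimate be checked.
beta_exact = (10.0 - 6.0) / math.sqrt(1.5**2 + 2.0**2)
pf_exact = 0.5 * math.erfc(beta_exact / math.sqrt(2.0))
print(pf, pf_exact)
```

Improved Monte-Carlo schemes such as the paper's mainly change how the samples are drawn (e.g. to respect parameter correlation), not this basic counting structure.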
RELIABILITY ASSESSMENT OF ENTROPY METHOD FOR SYSTEM CONSISTED OF IDENTICAL EXPONENTIAL UNITS
Sun Youchao; Shi Jun
2004-01-01
The reliability assessment between the unit and system levels is the most important content in the multi-level reliability synthesis of complex systems. Introducing information theory into system reliability assessment, and using the additive property of information quantity and the principle of equivalence of information quantity, an entropy method of data information conversion is presented for a system consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived based on the principle of information quantity equivalence. The general models of entropy-method synthesis assessment for approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. Applications of the entropy method are discussed by way of practical examples. Compared with traditional methods, the entropy method is found to be valid and practicable, and the assessment results are very satisfactory.
A comparative study on the HW reliability assessment methods for digital I and C equipment
Jung, Hoan Sung; Sung, T. Y.; Eom, H. S.; Park, J. K.; Kang, H. G.; Lee, G. Y. [Korea Atomic Energy Research Institute, Taejeon (Korea); Kim, M. C. [Korea Advanced Institute of Science and Technology, Taejeon (Korea); Jun, S. T. [KHNP, Taejeon (Korea)
2002-03-01
It is necessary to predict or to evaluate the reliability of electronic equipment for the probabilistic safety analysis of digital instrumentation and control equipment. But most databases for reliability prediction have no data for up-to-date equipment, and the failure modes are not classified. The prediction results for a specific component show different values according to the methods and databases; for boards and systems, each method likewise shows different values. This study concerns reliability prediction of the PDC system for Wolsong NPP1 as digital I and C equipment. Various reliability prediction methods and failure databases are used in calculating the reliability, to compare the sensitivity and accuracy of each model and database. Many considerations for the reliability assessment of digital systems are derived from the results of this study. 14 refs., 19 figs., 15 tabs. (Author)
L. Altarejos-García
2012-07-01
This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches for obtaining estimates of the mean and variance of the uncertain flood parameters water depth and velocity. These parameters define the flood severity, a concept used for decision-making in the context of flood risk assessment. The method proposed is particularly useful when the complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, which is a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as the water depth and velocity is demonstrated for a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, thus allowing probability estimations of flood severity. Then, an application of the method to the same river reach using a 2-D Shallow Water Equations (SWE) model is performed. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and uncertainty in the extension of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it is the preferred approach.
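For uncorrelated, symmetric inputs, Rosenblueth's PEM evaluates the model at the 2^n corners mu_i ± sigma_i with equal weights and reads off the first two moments. The sketch below uses a made-up two-input model (not HEC-RAS) whose exact moments are known, so the PEM result can be checked.

```python
from itertools import product

# Hedged sketch of Rosenblueth's two-point estimate method (PEM) for
# uncorrelated inputs with zero skew: equal weights 1/2^n at the corners.
def pem_moments(g, means, stds):
    n = len(means)
    w = 1.0 / 2 ** n
    m1 = m2 = 0.0
    for signs in product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        y = g(x)
        m1 += w * y
        m2 += w * y * y
    return m1, m2 - m1 * m1   # mean, variance

# Stand-in "hydraulic model": an output proportional to the product of two
# uncertain inputs (purely illustrative).
mean, var = pem_moments(lambda x: x[0] * x[1], [2.0, 3.0], [0.2, 0.3])
print(mean, var)
```

For this product of independent inputs the exact variance is 2^2·0.3^2 + 3^2·0.2^2 + 0.2^2·0.3^2 = 0.7236, which PEM reproduces with only four model runs; the saving over Monte Carlo grows with the cost of each run.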
V. Rusan
2012-01-01
The paper considers calculation methods for the reliability of agricultural distribution power networks using Boolean algebra functions and an analytical method. The reliability of 10 kV overhead line circuits with automatic sectionalization points and automatic standby activation has been investigated in the paper.
A Method for Determining the Reliability Index of a Materiel Subsystem
无
2002-01-01
This paper presents a method for determining the reliability index of a subsystem in the materiel demonstration phase: the AHP Failure Rate Method. It fully considers the various factors which influence a subsystem reliability index and solves a difficult problem that puzzles demonstration personnel.
A Method for Surveying Control Network Optimization Based on Reliability Properties
ZHANG Zhenglu; DENG Yong; LUO Changlin
2007-01-01
Surveying control network optimization design is related to standards such as precision, reliability, sensitivity and cost, and these standards are closely related to each other. A new method for surveying control network simulation optimization design is proposed. This method is based on the inner reliability index of the observation values.
Methods for reliability evaluation of trust and reputation systems
Janiszewski, Marek B.
2016-09-01
Trust and reputation systems are a systematic approach to building security on the basis of observations of node behaviour. Exchange of nodes' opinions about other nodes is very useful for indicating nodes which act selfishly or maliciously. The idea behind trust and reputation systems gains significance because conventional security measures (based on cryptography) are often not sufficient. Trust and reputation systems can be used in various types of networks, such as WSN, MANET and P2P, and also in e-commerce applications. Trust and reputation systems bring not only benefits but can also be a threat themselves. Many attacks aimed at trust and reputation systems exist, but such attacks have still not gained enough attention from research teams. Moreover, the joint effects of many known attacks have been identified as a very interesting field of research. The lack of an acknowledged methodology for evaluating trust and reputation systems is a serious problem. This paper aims at presenting various approaches to the evaluation of such systems. This work also contains a description of a generalization of many trust and reputation systems which can be used to evaluate the reliability of such systems in the context of preventing various attacks.
A new lifetime estimation model for a quicker LED reliability prediction
Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.
2014-09-01
LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs have been aged for a reliability analysis. The LEDs were grouped into nine current-temperature stress conditions. The stress driving current was set between 350 mA and 1 A and the ambient temperature between 85°C and 120°C. Using integrating-sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics has been done. Results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation; the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation have been built using electrical and optical measurements during the aging tests. The combination of these models enables a new method toward quicker LED lifetime prediction. The two models have been used for lifetime predictions of the LEDs.
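A common closed form behind lumen-maintenance lifetime predictions of this kind is the exponential decay model used in TM-21-style extrapolation; the parameters below are illustrative, not the paper's fitted values.

```python
import math

# Exponential lumen-maintenance model (TM-21-style, parameters assumed):
#   Phi(t) = B * exp(-a * t)
# Lifetime L70 is defined as the time for the flux to fall to 70% of initial.
def l70(B: float, a: float) -> float:
    return math.log(B / 0.70) / a

# e.g. projection constant B = 1.0 and decay rate a = 5e-5 per hour
print(l70(1.0, 5e-5))   # predicted L70 in hours
```

The abstract's second, faster failure mode is exactly what such a single-exponential extrapolation misses, which motivates combining it with a leakage-resistance model.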
Calibration Methods for Reliability-Based Design Codes
Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard
2004-01-01
The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...
Probabilistic seismic hazard assessment of Italy using kernel estimation methods
Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.
2013-07-01
A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), which is based on a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity, not requiring the definition of seismogenic zoning. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell's method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10 % probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.
胡贇; 刘少军; 丁晟; 廖雅诗
2015-01-01
In order to consider the effects of elastohydrodynamic lubrication (EHL) on the contact fatigue reliability of a spur gear, an accurate and efficient method combining the response surface method (RSM) and the first-order second-moment method (FOSM) was developed for estimating the contact fatigue reliability of a spur gear under EHL. The mechanical model for contact stress analysis of a spur gear under EHL was established, in which the oil film pressure was mapped onto the Hertz contact zone. Considering the randomness of EHL, material properties and fatigue strength correction factors, the proposed method was used to analyze the contact fatigue reliability of the spur gear under EHL. Compared with the results of 1.5×10^5 runs of the traditional Monte-Carlo method (MCM), the difference between the two failure probability results is 2.2×10^-4, the relative error of the failure probability results is 26.8%, and the computing time accounts for only 0.14% of that of the MCM. Sensitivity analysis results are in very good agreement with practical cognition. The analysis results show that the proposed method is precise and efficient, and can correctly reflect the influence of EHL on the contact fatigue reliability of spur gears.
RELIABILITY OF ELASTO-PLASTIC STRUCTURE USING FINITE ELEMENT METHOD
刘宁; 邓汉忠; 卓家寿
2002-01-01
A solution of probabilistic FEM for elastic-plastic materials is presented based on the incremental theory of plasticity and a modified initial stress method. The formulations are deduced through a direct differentiation scheme. Partial differentiation of displacement, stress and the performance function can be iteratively performed with the computation of the mean values of displacement and stress. The presented method enjoys the efficiency of both the perturbation method and the finite difference method, but avoids the approximation during the partial differentiation calculation. In order to improve the efficiency, the adjoint vector method is introduced to calculate the differentiation of stress and displacement with respect to random variables. In addition, a time-saving computational method for the reliability index of elastic-plastic materials is suggested based upon the advanced First-Order Second-Moment (FOSM) method and the usage of a Taylor expansion for displacement. The suggested method is also applicable to 3-D cases.
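The mean-value FOSM idea referred to above (a first-order Taylor expansion of the limit-state function at the means) can be sketched as follows; the limit state and input statistics are illustrative, not the paper's elastic-plastic FEM model.

```python
import math

# Mean-value FOSM sketch: linearize the limit-state function at the means
# and form beta = E[g] / sigma_g; inputs are assumed independent.
def fosm_beta(g, means, stds, h=1e-6):
    g0 = g(means)
    sigma2 = 0.0
    for i in range(len(means)):
        x = list(means); x[i] += h
        dgi = (g(x) - g0) / h            # partial derivative at the mean
        sigma2 += (dgi * stds[i]) ** 2   # first-order variance contribution
    beta = g0 / math.sqrt(sigma2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)
    return beta, pf

# Toy limit state g(x) = x1*x2 - 10 with assumed means and std deviations.
beta, pf = fosm_beta(lambda x: x[0] * x[1] - 10.0, [4.0, 4.0], [0.5, 0.5])
print(beta, pf)
```

In the paper's setting the partial derivatives come from the direct differentiation / adjoint vector scheme inside the FEM solver rather than from finite differences, but the beta construction is the same.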
Analysis methods for structure reliability of piping components
Schimpfke, T.; Grebner, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)]
2004-07-01
In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)
Do tests devised to detect recent HIV-1 infection provide reliable estimates of incidence in Africa?
Sakarovitch, Charlotte; Rouet, Francois; Murphy, Gary; Minga, Albert K; Alioum, Ahmadou; Dabis, Francois; Costagliola, Dominique; Salamon, Roger; Parry, John V; Barin, Francis
2007-05-01
The objective of this study was to assess the performance of 4 biologic tests designed to detect recent HIV-1 infections in estimating incidence in West Africa (BED, Vironostika, Avidity, and IDE-V3). These tests were assessed on a panel of 135 samples from 79 HIV-1-positive regular blood donors from Abidjan, Côte d'Ivoire, whose date of seroconversion was known (Agence Nationale de Recherches sur le SIDA et les Hépatites Virales 1220 cohort). The 135 samples included 26 from recently infected patients (<180 days), 94 from patients infected for >180 days, and 15 from patients with clinical AIDS. The performance of each assay in estimating HIV incidence was assessed through simulations. The modified commercial assays gave the best results for sensitivity (100% for both), and the IDE-V3 technique gave the best result for specificity (96.3%). In a context like Abidjan, with a 10% HIV-1 prevalence associated with a 1% annual incidence, the estimated test-specific annual incidence rates would be 1.2% (IDE-V3), 5.5% (Vironostika), 6.2% (BED), and 11.2% (Avidity). Most of the specimens falsely classified as incident cases were from patients infected for >180 days but <1 year. The authors conclude that none of the 4 methods could currently be used to estimate HIV-1 incidence routinely in Côte d'Ivoire but that further adaptations might enhance their accuracy.
Reliability analysis of shoulder balance measures: comparison of the 4 available methods.
Hong, Jae-Young; Suh, Seung-Woo; Yang, Jae-Hyuk; Park, Si-Young; Han, Ji-Hoon
2013-12-15
Observational study with 3 examiners. The objective was to compare the reliability of shoulder balance measurement methods. Several measurement methods for shoulder balance exist, but no reliability analysis had been performed despite the clinical importance of this measurement. Whole spine posteroanterior radiographs (n = 270) were collected to compare the reliability of the 4 shoulder balance measures in patients with adolescent idiopathic scoliosis. Each radiograph was measured twice by each of the 3 examiners using the 4 measurement methods, and the data were analyzed statistically to determine inter- and intraobserver reliability. Overall, the 4 radiographical methods showed excellent intraclass correlation coefficients regardless of severity in intraobserver comparisons (>0.904), and the mean absolute difference values for all methods were low and comparatively similar regardless of severity. The mean absolute difference values for the clavicular angle method were lower, supporting its clinical use as a shoulder balance measurement method. Level of Evidence: 3.
Using operational data to estimate the reliable yields of water-supply wells
Misstear, Bruce D. R.; Beeson, Sarah
The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells using operational data should be applicable to a wide range of hydrogeological conditions elsewhere.
Reliable iterative methods for solving ill-conditioned algebraic systems
Padiy, Alexander
2000-01-01
The finite element method is one of the most popular techniques for numerical solution of partial differential equations. The rapid performance increase of modern computer systems makes it possible to tackle increasingly more difficult finite-element models arising in engineering practice. However,
A method to determine validity and reliability of activity sensors
Boerema, S.T.; Hermens, H.J.
2013-01-01
Method: Four sensors were securely fastened to a mechanical oscillator (Vibration Exciter, type 4809, Brüel & Kjær) and moved at various frequencies (6.67 Hz; 13.45 Hz; 19.88 Hz) within the range of human physical activity. For each of the three sensor axes, the sensors were simultaneously moved for five…
The Language Teaching Methods Scale: Reliability and Validity Studies
Okmen, Burcu; Kilic, Abdurrahman
2016-01-01
The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…
Metastable legged locomotion: methods to quantify and optimize reliability
Saglam, Cenk O.; Byl, Katie
2015-05-01
Measuring the stability of highly-dynamic bipedal locomotion is a challenging but essential task for more capable human-like walking. By discretizing the walking dynamics, we treat the system as a Markov chain, which lends itself to an easy quantification of failure rates by the expected number of steps before falling. This meaningful and intuitive metric is then used for optimizing and benchmarking given controllers. While this method is applicable to any controller scheme, we illustrate the results with two case demonstrations. One scheme is the now-familiar hybrid zero dynamics approach and the other is a method using piece-wise reference trajectories with a sliding mode control. We optimize low-level controllers, to minimize failure rates for any one gait, and we adopt a hierarchical control structure to switch among low-level gaits, providing even more dramatic improvements on the system performance.
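The expected-number-of-steps-before-falling metric described above follows from the fundamental matrix of an absorbing Markov chain. The sketch below computes it for a hypothetical two-state gait model; all transition probabilities are invented for illustration and are not from the paper.

```python
# Toy metastable walker: two transient gait states A and B plus an absorbing
# "fallen" state. Transition probabilities are invented for illustration; each
# row's deficit from 1 is that state's per-step probability of falling.
Q = [[0.90, 0.08],   # from A: stay in A, switch to B (fall prob 0.02)
     [0.05, 0.90]]   # from B: switch to A, stay in B (fall prob 0.05)

# Expected steps before falling m solves (I - Q) m = 1 (absorbing-chain
# fundamental matrix result). Solve the 2x2 system by Cramer's rule.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
m_A = (d - b) / det   # expected steps starting from state A
m_B = (a - c) / det   # expected steps starting from state B
print(f"expected steps before falling: from A = {m_A:.1f}, from B = {m_B:.1f}")
```

Comparing this expectation across controllers or gaits gives exactly the kind of intuitive benchmark the abstract advocates.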
Mission to Mars. Reliable method for liquid solutions diagnostics
Vladimir F. Krapivin
2014-06-01
A manned mission to Mars requires solving many problems associated with the operational diagnostics of liquid solutions (including drinking water, medical issues, and liquid fuels). This paper proposes a new method to solve these problems both during the flight and during the stay on the surface of the planet. The proposed method consists of developing a database of spectral images of liquid solutions supplied by a multiple-channel spectroellipsometer and diagnosing liquid solutions using this database. In addition, the learning process and the expert system for adaptive recognition of liquid solutions are described. Finally, a test of the expert system is demonstrated for a series of liquid solutions.
On methods of estimating cosmological bulk flows
Nusser, Adi
2015-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, $\bf B$, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of $\bf B$ as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer $\bf B$ for either of these definitions, which coincide only for a constant velocity field. We focus on the Wiener Filtering (WF, Hoffman et al. 2015) and the Constrained Minimum Variance (CMV, Feldman et al. 2010) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute $\bf B$ in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer $\bf B$ directly from the observed velocities for the second definition of $\bf B$. The WF ...
A reliability assessment method using system dynamics and application
Kyung, Min Kang; Moosung, Jae [Hanyang Univ., Dept. of Nuclear Engineering, Seoul (Korea, Republic of); Sangman, Kwak [Systemix, Inc, Seoul (Korea, Republic of)
2005-07-01
An advanced method for assessing the dynamic safety of nuclear power plants is introduced and applied. A commercial software package, the VENtana SIMulation environment (VENSIM), is used to develop a system dynamics model for an example system. In this study the 18-month refuel cycle is simulated for the dynamic analysis. The failure rate when the plant is at zero power, as during maintenance, test, and refueling processes, which are not properly modeled in conventional methods using event/fault trees, is higher than that at full power. This also means the human failure rate during both standby and shutdown operation is higher than that during normal operations. Various time steps are applied for the different failure cases. The simulation results show that the common cause failure is much affected by the time step process. The results also include the dynamic simulation for the standby-running and shutdown-running cases. The graphical presentation is easily modeled by a graphic design method incorporated in VENSIM. Diagrams readily understood by operators or system analysts are constructed and evaluated quantitatively using system dynamics. (authors)
Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.
Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe
2011-12-01
The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.
Estimating Earth's modal Q with epicentral stacking method
Chen, X.; Park, J. J.
2014-12-01
The attenuation rates of Earth's normal modes are the most important constraints on the anelastic state of Earth's deep interior. Yet current measurements of Earth's attenuation rates suffer from 3 sources of bias: the mode coupling effect, the beating effect, and the background noise, which together lead to significant uncertainties in the attenuation rates. In this research, we present a new technique to estimate the attenuation rates of Earth's normal modes - the epicentral stacking method. Rather than using the conventional geographical coordinate system, we instead deal with Earth's normal modes in the epicentral coordinate system, in which only 5 singlets rather than 2l+1 are excited. By stacking records from the same events at a series of time lags, we are able to recover the time-varying amplitudes of the 5 excited singlets, and thus measure their attenuation rates. The advantage of our method is that it enhances the SNR through stacking and minimizes the background noise effect, yet it avoids the beating effect problem commonly associated with the conventional multiplet stacking method by singling out the singlets. The attenuation rates measured from our epicentral stacking method seem to be reliable measurements in that: a) the measured attenuation rates are generally consistent among the 10 large events we used, except for a few events with unexplained larger attenuation rates; b) the plot of the log of singlet amplitude against time lag is very close to a straight line, suggesting an accurate estimation of the attenuation rate. The Q measurements from our method are consistently lower than previous modal Q measurements, but closer to the PREM model. For example, for mode 0S25, whose Coriolis force coupling is negligible, our measured Q is between 190 and 210 depending on the event, while the PREM modal Q of 0S25 is 205, and previous modal Q measurements are as high as 242. The difference between our results and previous measurements might be due to the lower
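The straight-line fit of log singlet amplitude versus time lag mentioned above can be sketched on synthetic data: for an exponentially decaying mode A(t) = A₀ exp(−ωt/2Q), the slope of ln A against t is −ω/2Q, so Q is recovered from the fitted slope. The frequency, Q, and lags below are illustrative (roughly the 0S25 range), not measurements from the study.

```python
import math

# Synthetic, noise-free singlet decay A(t) = A0 * exp(-omega * t / (2 Q)).
# Frequency and Q are illustrative (roughly the 0S25 range); the lags and
# amplitudes are not measurements from the study.
f_hz = 4.0e-3                      # mode frequency ~4 mHz
omega = 2 * math.pi * f_hz
Q_true = 205.0
t = [h * 3600.0 for h in (0, 6, 12, 18, 24, 30, 36)]   # time lags (s)
lnA = [-omega * ti / (2 * Q_true) for ti in t]          # A0 = 1

# Least-squares slope of ln(amplitude) vs. time lag, then Q = -omega / (2*slope)
n = len(t)
tbar, ybar = sum(t) / n, sum(lnA) / n
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, lnA)) \
        / sum((ti - tbar) ** 2 for ti in t)
Q_est = -omega / (2 * slope)
print(f"recovered Q = {Q_est:.1f}")
```

With noiseless input the fit recovers Q exactly; in practice the scatter of the points about the line indicates the measurement uncertainty.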
A generic method for assignment of reliability scores applied to solvent accessibility predictions
Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille
2009-01-01
the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures where reliabilities are obtained by post-processing the output. CONCLUSION...
Reliability of a semi-quantitative method for dermal exposure assessment (DREAM)
Wendel de Joode, B. van; Hemmen, J.J. van; Meijster, T.; Major, V.; London, L.; Kromhout, H.
2005-01-01
Valid and reliable semi-quantitative dermal exposure assessment methods for epidemiological research and for occupational hygiene practice, applicable to different chemical agents, are practically nonexistent. The aim of this study was to assess the reliability of a recently developed semi-quantitative…
System reliability with correlated components: Accuracy of the Equivalent Planes method
Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.
2015-01-01
Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing the…
Improvement method for the combining rule of Dempster-Shafer evidence theory based on reliability
Wang Ping; Yang Genqing
2005-01-01
An improved combining rule for Dempster-Shafer evidence theory is proposed. Unlike in the standard Dempster theory, the reliability of the evidence is not assumed to be identical; it varies with the event. By weighting evidence according to its reliability, the effect of unreliable evidence is reduced, yielding a fusion result that is closer to the truth. An example demonstrating the advantage of this method is given; it shows that the method helps to find the correct result.
MacDonell, Christopher William; Ivanova, Tanya Dimitrova; Garland, S Jayne
2007-05-15
The reliability of the afterhyperpolarization (AHP) time course, as estimated by the interval death rate (IDR) analysis, was evaluated both within and between investigators. The IDR analysis uses the firing history of a single motor unit train at low tonic firing rates to calculate an estimate of the AHP time course [Matthews PB. Relationship of firing intervals of human motor units to the trajectory of post-spike after-hyperpolarization and synaptic noise. J Physiol 1996;492:597-628]. Single motor unit trains were collected from the tibialis anterior (TA) to determine intra-rater reliability (within investigator). Data from the first dorsal interosseus (FDI), collected in a previous investigation [Gossen ER, Ivanova TD, Garland SJ. The time course of the motoneurone afterhyperpolarization is related to motor unit twitch speed in human skeletal muscle. J Physiol 2003;552:657-64], were used to examine the inter-rater reliability (between investigators). The lead author was blinded to the original time constants and file identities for the re-analysis. The intra-rater reliability of the AHP time constant in the TA data was high (r² = 0.88), and the inter-rater reliability in the FDI data was also strong (r² = 0.92). It is concluded that the interval death rate analysis is a reliable tool for estimating the AHP time course with experienced investigators.
Automated migration analysis based on cell texture: method & reliability
Chittenden Thomas W
2005-03-01
Background: In this paper, we present and validate a way to automatically measure the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results: The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison with manual placement of the leading edge shows complete equivalence of automated vs. manual leading edge definition for cell migration measurement. Conclusion: Our method is indistinguishable from careful manual determinations of cell front lines, with the advantages of full automation, objectivity, and speed.
A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation
U. Ayala
2014-01-01
Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies.
A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation
Ayala, U.; Irusta, U.; Ruiz, J.; Eftestøl, T.; Kramer-Johansen, J.; Alonso-Atienza, F.; Alonso, E.; González-Otero, D.
2014-01-01
Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies. PMID:24895621
Andreas Berner; Tariq Saleem Alharbi; Eric Carlström; Amir Khorram-Manesh
2015-01-01
Objective: To develop a validated and generalized high reliability organization collaborative tool in order to conduct common assessments and information sharing of potential risks during mass gatherings. Methods: The Swedish resource and risk estimation guide was used as the foundation for the development of the generalized collaborative tool by three different expert groups, and then analyzed. Analysis of inter-rater reliability was conducted through simulated cases using weighted and unweighted κ-statistics. Results: The results revealed a mean unweighted κ-value of 0.37 across the three cases and a mean accuracy of 62% for the tool. Conclusions: The collaboration tool, "STREET", showed acceptable reliability and validity to be used as a foundation for high reliability organization collaboration in a simulated environment. However, the lack of reliability in one of the cases highlights the challenges of creating measurable values from simulated cases. A study on real events could provide higher reliability but requires, on the other hand, an already developed tool.
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
A level set method for reliability-based topology optimization of compliant mechanisms
2008-01-01
Based on the level set model and the reliability theory, a numerical approach of reliability-based topology optimization for compliant mechanisms with multiple inputs and outputs is presented. A multi-objective topology optimal model of compliant mechanisms considering uncertainties of the loads, material properties, and member geometries is developed. The reliability analysis and topology optimization are integrated in the optimal iterative process. The reliabilities of the compliant mechanisms are evaluated by using the first order reliability method. Meanwhile, the problem of structural topology optimization is solved by the level set method which is flexible in handling complex topological changes and concise in describing the boundary shape of the mechanism. Numerical examples show the importance of considering the stochastic nature of the compliant mechanisms in the topology optimization process.
The method of estimating bisulfite conversion rate in DNA methylation analysis.
Yangyang, Liu; Hengmi, Cui
2015-09-01
To establish an effective method to estimate the conversion rate of bisulfite-treated genomic DNA, TaqMan qPCR assay was performed using probes and primers that are specific for bisulfite-converted or -unconverted DNA standard samples separately. Then two linear standard curves were generated by plotting Ct values against logarithm of absolute DNA amount with serial dilutions of the bisulfite-converted or unconverted DNA samples. Based on two standard curves, the unknown bisulfite-treated genomic DNA sample was analyzed using the same TaqMan probes and the bisulfite conversion rate was precisely estimated. This method was further verified to be reliable using known mixed bisulfite-converted and -unconverted DNA templates as well as DNA samples treated with different bisulfite kits. These results showed that this method can effectively estimate bisulfite conversion rate of genomic DNA and thus provides a reliable and quick method for accurate analyses of DNA methylation.
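The two-standard-curve calculation described above can be sketched numerically: each TaqMan standard curve is inverted to convert a Ct value into an absolute template amount, and the conversion rate is the converted fraction. All slopes, intercepts, and Ct values below are hypothetical, invented purely for illustration.

```python
# Hypothetical TaqMan standard curves, Ct = slope * log10(copies) + intercept.
# Slopes near -3.3 correspond to ~100% PCR efficiency; all numbers here are
# invented for illustration, not values from the study.
conv_slope, conv_int = -3.32, 38.0   # curve from bisulfite-converted standards
unc_slope, unc_int = -3.35, 39.0     # curve from unconverted standards

def copies(ct, slope, intercept):
    """Invert a linear standard curve to an absolute template amount."""
    return 10 ** ((ct - intercept) / slope)

# Ct values of one bisulfite-treated sample, measured with each probe set
ct_conv, ct_unc = 24.5, 33.0

n_conv = copies(ct_conv, conv_slope, conv_int)
n_unc = copies(ct_unc, unc_slope, unc_int)
rate = n_conv / (n_conv + n_unc)
print(f"estimated bisulfite conversion rate: {rate:.2%}")
```

Because each assay is quantified against its own dilution series, differences in probe efficiency between the converted and unconverted assays cancel out of the ratio.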
Evaluation of Soft Tissue Landmark Reliability between Manual and Computerized Plotting Methods.
Kasinathan, Geetha; Kommi, Pradeep B; Kumar, Senthil M; Yashwant, Aniruddh; Arani, Nandakumar; Sabapathy, Senkutvan
2017-04-01
The aim of the study is to evaluate the reliability of soft tissue landmark identification between manual and digital plot-tings in both X and Y axes. A total of 50 pretreatment lateral cephalograms were selected from patients who reported for orthodontic treatment. The digital images of each cephalogram were imported directly into Dolphin software for onscreen digi-talization, while for manual tracing, images were printed using a compatible X-ray printer. After the images were standardized, and 10 commonly used soft tissue landmarks were plotted on each cephalogram by six different professional observers, the values obtained were plotted in X and Y axes. Intraclass correlation coefficient was used to determine the intrarater reliability for repeated landmark plotting obtained by both the methods. The evaluation for reliability of soft tissue landmark plottings in both manual and digital methods after subjecting it to interclass correlation showed a good reliability, which was nearing complete homogeneity in both X and Y axes, except for Y axis of throat point in manual plotting, which showed moderate reliability as a cephalometric variable. Intraclass correlation of soft tissue nasion had a moderate reliability along X axis. Soft tissue pogonion shows moderate reliability in Y axis. Throat point exhibited moderate reliability in X axis. The interclass correlation in X and Y axes shows high reliability in both hard tissue and soft tissue except for throat point in Y axis, when plotted manually. The intraclass correlation is more consistent and highly reliable for soft tissue landmarks and the hard tissue landmark identification is also consistent. The results obtained for manual and digital methods were almost similar, but the digital landmark plotting has an added advantage in archiving, retrieval, transmission, and can be enhanced during plotting of lateral cephalograms. Hence, the digital method of landmark plotting could be preferred for both daily use and
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
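A minimal sketch of the kind of relation "Taylor's Rule" expresses, using one commonly quoted textbook form (mine life in years ≈ 6.5 × (ore tonnage in Mt)^0.25); these constants are not the re-estimated USGS coefficients the abstract refers to.

```python
# "Taylor's Rule" in one commonly quoted form: mine life (years) is about
# 6.5 * (ore tonnage in Mt)^0.25, from which an operating rate follows.
# These constants are the textbook ones, not the USGS re-estimated values.
def taylor_life_years(tonnage_mt):
    return 6.5 * tonnage_mt ** 0.25

def taylor_rate_tpd(tonnage_mt, operating_days=350):
    """Implied ore-processing rate in tonnes per day."""
    return tonnage_mt * 1e6 / (taylor_life_years(tonnage_mt) * operating_days)

for mt in (10, 100, 500):
    print(f"{mt:>4} Mt -> life {taylor_life_years(mt):5.1f} y, "
          f"rate {taylor_rate_tpd(mt):8.0f} t/d")
```

The sublinear exponent means operating rate grows faster than mine life as tonnage increases, which is the economy of scale the cost models build on.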
da Cruz, A C S; Couto, B C; Nascimento, I A; Pereira, S A; Leite, M B N L; Bertoletti, E; Zagatto, P
2007-05-01
In spite of the consideration that toxicity testing is a reduced approach to measure the effects of pollutants on ecosystems, the early-life-stage (ELS) tests have evident ecological relevance because they reflect the possible reproductive impairment of the natural populations. The procedure and validation of Crassostrea rhizophorae embryonic development test have shown that it meets the same precision as other U.S. EPA tests, where EC(50) is generally used as a toxicological endpoint. However, the recognition that EC(50) is not the best endpoint to assess contaminant effects led U.S. EPA to recently suggest EC(25) as an alternative to estimate xenobiotic effects for pollution prevention. To provide reliability to the toxicological test results on C. rhizophorae embryos, the present work aimed to establish the critical effect level for this test organism, based on its reaction to reference toxicants, by using the statistical method proposed by Norberg-King (Inhibition Concentration, version 2.0). Oyster embryos were exposed to graded series of reference toxicants (ZnSO(4) x 7H(2)O; AgNO(3); KCl; CdCl(2)H(2)O; phenol, 4-chlorophenol and dodecyl sodium sulphate). Based on the obtained results, the critical value for C. rhizophorae embryonic development test was estimated as EC(15). The present research enhances the emerging consensus that ELS tests data would be adequate for estimating the chronic safe concentrations of pollutants in the receiving waters. Based on recommended criteria and on the results of the present research, zinc sulphate and 4-chlorophenol have been pointed out, among the inorganic and organic compounds tested, as the best reference toxicants for C. rhizophorae ELS-test.
(none)
2010-01-01
Based on fast Markov chain simulation for generating samples distributed in the failure region and the saddlepoint approximation (SA) technique, an efficient reliability analysis method is presented to evaluate the small failure probability of a non-linear limit state function (LSF) with non-normal variables. In the presented method, the failure probability of the non-linear LSF is transformed into the product of the failure probability of an introduced linear LSF and a feature ratio factor. The introduced linear LSF, which approximately has the same maximum likelihood points as the non-linear LSF, is constructed and its failure probability can be calculated by the SA technique. The feature ratio factor, which can be evaluated on the basis of the multiplicative rule of probability, expresses the relation between the failure probability of the non-linear LSF and that of the linear LSF, and it can be computed quickly by using the Markov chain algorithm to directly simulate the samples distributed in the failure regions of the non-linear and linear LSFs. Moreover, the expectation and variance of the failure probability estimate are derived. The results of several examples demonstrate that the presented method has wide applicability, can be easily implemented, and possesses high precision and high efficiency.
Tong-chun LI; Dan-dan LI; Zhi-qiang WANG
2010-01-01
In this study, the limit state equation for tensile reliability analysis of the foundation surface of a gravity dam was established. The possible crack length was set as the action effect, and the allowable crack length was set as the resistance in the limit state. The nonlinear FEM was used to obtain the crack length of the foundation surface of the gravity dam, and the linear response surface method based on the orthogonal test design method was used to calculate the reliability, providing a reasonable and simple method for calculating the reliability of the serviceability limit state. The Longtan RCC gravity dam was chosen as an example. An orthogonal test, including eleven factors and two levels, was conducted, and the tensile reliability was calculated. The analysis shows that this method is reasonable.
Li, Yan; Chen, Jianjun; Liu, Jipeng; Zhang, Lei; Wang, Weiguo; Zhang, Shaofeng
2013-09-01
The reliability of all-ceramic crowns is of concern to both patients and doctors. This study introduces a new methodology for quantifying the reliability of all-ceramic crowns based on stress-strength interference theory and finite element models. The variables selected for the reliability analysis include the magnitude of the occlusal contact area, the occlusal load and the residual thermal stress. The calculated reliabilities of crowns under different loading conditions showed that too small an occlusal contact area or too great a difference in thermal coefficient between the veneer and core layers led to high failure probabilities. These results were consistent with many previous reports. The methodology is therefore shown to be a valuable method for analyzing the reliability of restorations in the complicated oral environment.
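The stress-strength interference idea in the abstract above has a simple closed form when both strength and load are modeled as independent normal variables: R = P(S > L) = Φ((μS − μL)/√(σS² + σL²)). A minimal sketch, with purely illustrative numbers (not values from the study):

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def interference_reliability(mu_s, sd_s, mu_l, sd_l):
    """Reliability R = P(strength S > load L) for independent
    normally distributed strength and load (stress-strength interference)."""
    beta = (mu_s - mu_l) / sqrt(sd_s ** 2 + sd_l ** 2)
    return normal_cdf(beta)

# Illustrative values only: strength 400 +/- 40 MPa vs. stress 250 +/- 50 MPa.
R = interference_reliability(400.0, 40.0, 250.0, 50.0)
```

In a finite-element setting like the study's, the load distribution would come from the computed stress field rather than being assumed directly.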
A fast nonlinear regression method for estimating permeability in CT perfusion imaging.
Bennink, Edwin; Riordan, Alan J; Horsch, Alexander D; Dankbaar, Jan Willem; Velthuis, Birgitta K; de Jong, Hugo W
2013-11-01
Blood-brain barrier damage, which can be quantified by measuring vascular permeability, is a potential predictor for hemorrhagic transformation in acute ischemic stroke. Permeability is commonly estimated by applying Patlak analysis to computed tomography (CT) perfusion data, but this method lacks precision. Applying more elaborate kinetic models by means of nonlinear regression (NLR) may improve precision, but is more time consuming and therefore less appropriate in an acute stroke setting. We propose a simplified NLR method that may be faster and still precise enough for clinical use. The aim of this study is to evaluate the reliability of in total 12 variations of Patlak analysis and NLR methods, including the simplified NLR method. Confidence intervals for the permeability estimates were evaluated using simulated CT attenuation-time curves with realistic noise, and clinical data from 20 patients. Although fixating the blood volume improved Patlak analysis, the NLR methods yielded significantly more reliable estimates, but took up to 12 × longer to calculate. The simplified NLR method was ∼4 × faster than other NLR methods, while maintaining the same confidence intervals (CIs). In conclusion, the simplified NLR method is a new, reliable way to estimate permeability in stroke, fast enough for clinical application in an acute stroke setting.
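Patlak analysis, the baseline method in the abstract above, reduces to ordinary linear regression: plotting Ct(t)/Ca(t) against ∫Ca dτ / Ca(t) gives the permeability as the slope and the fractional blood volume as the intercept. A hedged sketch with synthetic curves (all values illustrative, not clinical data):

```python
def patlak_fit(t, ca, ct):
    """Ordinary least-squares Patlak fit.

    x = integral(ca)/ca, y = ct/ca; the slope approximates the
    permeability K and the intercept the fractional blood volume.
    The arterial integral uses trapezoidal integration."""
    xs, ys = [], []
    integ = 0.0
    for i in range(1, len(t)):
        integ += 0.5 * (ca[i] + ca[i - 1]) * (t[i] - t[i - 1])
        xs.append(integ / ca[i])
        ys.append(ct[i] / ca[i])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    K = sxy / sxx
    v = my - K * mx
    return K, v

# Synthetic noise-free example: constant arterial input, K = 0.05, v = 0.10.
t = [float(i) for i in range(11)]
ca = [1.0] * 11
ct = [0.05 * ti + 0.10 for ti in t]
K, v = patlak_fit(t, ca, ct)
```

The nonlinear-regression variants the paper evaluates replace this linear model with a full kinetic model fitted iteratively, which is why they cost more computation time.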
Black, Ryan A.; Yang, Yanyun; Beitra, Danette; McCaffrey, Stacey
2015-01-01
Estimation of composite reliability within a hierarchical modeling framework has recently become of particular interest given the growing recognition that the underlying assumptions of coefficient alpha are often untenable. Unfortunately, coefficient alpha remains the prominent estimate of reliability when estimating total scores from a scale with…
System and method for traffic signal timing estimation
Dumazert, Julien
2015-12-30
A method and system for estimating traffic signal timing. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
Prevalence of family violence in adults and children : Estimates using the capture-recapture method
Oosterlee, A.; Vink, R.M.; Smit, F.
2009-01-01
Background: Reliable prevalence estimates of family violence in adults and children are difficult to obtain. Most are based on surveys or registration counts, whose research designs and methods are often questionable, making the results difficult to compare. This article presents an alternative
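The capture-recapture idea behind prevalence estimates like those in the abstract above can be illustrated with the classic two-source Chapman estimator, which corrects the raw Lincoln-Petersen ratio for small-sample bias. The registry counts below are hypothetical, for illustration only:

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-source capture-recapture estimator.

    n1 -- cases known to registry A
    n2 -- cases known to registry B
    m  -- cases appearing in both registries
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: 420 cases in one registry, 310 in another, 60 overlap.
N_hat = chapman_estimate(n1=420, n2=310, m=60)
```

The estimator assumes the two sources are independent; violations of that assumption are exactly the kind of methodological caveat the article discusses.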
Hidayati, D. S.; Suryonegoro, H.; Makes, B. N.
2017-08-01
Age estimation is important for individual identification. Root development of third molars occurs at age 15-25 years. This study was conducted to determine the accuracy of age estimation using the Thevissen method in Indonesia. The Thevissen method was applied to 100 panoramic radiographs of both male and female subjects. Reliability was tested by the Dahlberg formula and Cohen's Kappa test, and significance was tested by the paired t-test and the Wilcoxon test. The deviation of the estimated age was then calculated. The deviation of age estimation was ±3.050 years and ±2.067 years for male and female subjects, respectively; the deviation for female subjects was less than that for male subjects. Age estimation with the Thevissen method is preferred for ages 15-22 years.
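The Dahlberg formula mentioned above quantifies method error between paired repeated measurements as √(Σdᵢ²/2n). A minimal sketch, with hypothetical repeated age estimates (the values are invented for illustration, not study data):

```python
from math import sqrt

def dahlberg_error(first, second):
    """Dahlberg's method error for paired repeated measurements:
    sqrt(sum(d_i^2) / (2n)), d_i = difference between the two runs."""
    assert len(first) == len(second)
    d2 = sum((a - b) ** 2 for a, b in zip(first, second))
    return sqrt(d2 / (2 * len(first)))

# Hypothetical repeated age estimates (years) for five radiographs.
run1 = [16.2, 18.0, 19.5, 17.1, 20.3]
run2 = [16.5, 17.6, 19.9, 17.0, 20.0]
me = dahlberg_error(run1, run2)
```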
Smith, Dianna M; Pearce, Jamie R; Harland, Kirk
2011-03-01
Models created to estimate neighbourhood level health outcomes and behaviours can be difficult to validate as prevalence is often unknown at the local level. This paper tests the reliability of a spatial microsimulation model, using a deterministic reweighting method, to predict smoking prevalence in small areas across New Zealand. The difference in the prevalence of smoking between those estimated by the model and those calculated from census data is less than 20% in 1745 out of 1760 areas. The accuracy of these results provides users with greater confidence to utilize similar approaches in countries where local-level smoking prevalence is unknown.
Kingston, Greer B.; Rajabalinejad, Mohammadreza; Gouldby, Ben P.; Gelder, van Pieter H.A.J.M.
2011-01-01
With the continual rise of sea levels and deterioration of flood defence structures over time, it is no longer appropriate to define a design level of flood protection, but rather, it is necessary to estimate the reliability of flood defences under varying and uncertain conditions. For complex geote
1983-09-01
...designed to evaluate the reliability functions that result from the application of reliability analysis to the fatigue of aircraft structures, in particular...
Semi-quantitative method to estimate levels of Campylobacter
Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional quantification methods are labor- and material-intensive, requiring either serial dilutions or MPN ...
Reliable estimation of adsorption isotherm parameters using adequate pore size distribution
Husseinzadeh, Danial; Shahsavand, Akbar [Ferdowsi University of Mashhad, Mashhad (Iran, Islamic Republic of)
2015-05-15
The equilibrium adsorption isotherm has a crucial effect on various characteristics of the solid adsorbent (e.g., pore volume, bulk density, surface area, pore geometry). A historical paradox exists in the conventional estimation of adsorption isotherm parameters. Traditionally, the total amount of adsorbed material (total adsorption isotherm) has been considered equivalent to the local adsorption isotherm. This assumption is only valid when the corresponding pore size or energy distribution (PSD or ED) of the porous adsorbent can be successfully represented with the Dirac delta function. In practice, the actual PSD (or ED) is far from satisfying this assumption, and the traditional method for predicting local adsorption isotherm parameters leads to serious errors. Until now, the powerful combination of inverse theory and the linear regularization technique has failed drastically when used to extract the PSD from real adsorption data. For this reason, all previous studies used synthetic data, because they were not able to extract a proper PSD from the measured total adsorption isotherm with unrealistic local adsorption isotherm parameters. We propose a novel approach that can successfully provide the correct values of local adsorption isotherm parameters without any a priori or unrealistic assumptions. Two distinct methods are suggested, and several illustrative (synthetic and real experimental) examples are presented to clearly demonstrate the effectiveness of the newly proposed methods in computing the correct values of the local adsorption isotherm parameters. The impressive performance of the so-called Iterative and Optima methods in extracting the correct PSD is validated using several experimental data sets.
How reliable is estimation of glomerular filtration rate at diagnosis of type 2 diabetes?
Chudleigh, Richard A; Dunseath, Gareth; Evans, William; Harvey, John N; Evans, Philip; Ollerton, Richard; Owens, David R
2007-02-01
The Cockcroft-Gault (CG) and Modification of Diet in Renal Disease (MDRD) equations have previously been recommended to estimate glomerular filtration rate (GFR). We compared both estimates with true GFR, measured by the isotopic 51Cr-EDTA method, in newly diagnosed, treatment-naïve subjects with type 2 diabetes. A total of 292 mainly normoalbuminuric (241 of 292) subjects were recruited. Subjects were classified as having mild renal impairment (group 1, GFR <90 ml/min per 1.73 m²) or normal renal function (group 2, GFR ≥90 ml/min per 1.73 m²). Estimated GFR (eGFR) was calculated by the CG and MDRD equations. Blood samples drawn at 44, 120, 180, and 240 min after administration of 1 MBq of 51Cr-EDTA were used to measure isotopic GFR (iGFR). For subjects in group 1, mean (±SD) iGFR was 83.8 ± 4.3 ml/min per 1.73 m². eGFR was 78.0 ± 16.5 or 73.7 ± 12.0 ml/min per 1.73 m² using the CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -11.1 to -0.6 using CG and -14.4 to -7.0 using MDRD. Ninety-five percent limits of agreement (mean bias ± 2 SD) were -37.2 to 25.6 and -33.1 to 11.7, respectively. In group 2, iGFR was 119.4 ± 20.3 ml/min per 1.73 m². eGFR was 104.4 ± 26.3 or 92.3 ± 18.7 ml/min per 1.73 m² using the CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -17.4 to -12.5 using CG and -29.1 to -25.1 using MDRD. Ninety-five percent limits of agreement were -54.4 to 24.4 and -59.5 to 5.3, respectively. In newly diagnosed type 2 diabetic patients, particularly those with a GFR ≥90 ml/min per 1.73 m², both the CG and MDRD equations significantly underestimate iGFR. This highlights a limitation in the use of eGFR in the majority of diabetic subjects outside the setting of chronic kidney disease.
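The two published equations compared in the abstract above are simple enough to compute directly. The sketch below implements the standard Cockcroft-Gault formula (mL/min, not body-surface-area normalized) and the four-variable MDRD formula (mL/min per 1.73 m²); the patient values are hypothetical:

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance, mL/min:
    (140 - age) * weight / (72 * serum creatinine), x0.85 for women."""
    cl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return cl * 0.85 if female else cl

def mdrd(age, scr_mg_dl, female, black=False):
    """Four-variable MDRD estimate, mL/min per 1.73 m^2."""
    g = 186.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        g *= 0.742
    if black:
        g *= 1.212
    return g

# Hypothetical patient: 60-year-old man, 80 kg, serum creatinine 1.0 mg/dl.
egfr_cg = cockcroft_gault(age=60, weight_kg=80, scr_mg_dl=1.0, female=False)
egfr_mdrd = mdrd(age=60, scr_mg_dl=1.0, female=False)
```

Note that MDRD returns a lower estimate than CG for this hypothetical patient, consistent with the systematic differences between the equations reported in the abstract.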
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Darren Kidney
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
Zhongwei Deng
2016-06-01
In the field of state of charge (SOC) estimation, the Kalman filter has been widely used for many years, although its performance strongly depends on the accuracy of the battery model as well as the noise covariance. The Kalman gain determines the confidence coefficient of the battery model by adjusting the weight of the open circuit voltage (OCV) correction, and has a strong correlation with the measurement noise covariance (R). In this paper, the online identification method is applied to acquire the real model parameters under different operation conditions. A criterion based on the OCV error is proposed to evaluate the reliability of the online parameters. Besides, the equivalent circuit model produces an intrinsic model error which is dependent on the load current, and the property that a high battery current or a large current change induces a large model error can be observed. Based on the above prior knowledge, a fuzzy model is established to compensate for the model error by updating R. Combining the positive strategy (i.e., online identification) and the negative strategy (i.e., fuzzy model), a more reliable and robust SOC estimation algorithm is proposed. The experiment results verify the proposed reliability criterion and SOC estimation method under various conditions for LiFePO4 batteries.
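The core predict/correct loop behind Kalman-filter SOC estimation can be sketched in scalar form: coulomb counting predicts the state, and the Kalman gain weights an OCV-based voltage correction. Everything below is an illustrative toy (the linear OCV curve, capacity, and noise covariances are assumptions, not the paper's battery model):

```python
def soc_kalman_step(soc, P, current_a, dt, v_meas, cap_as, Q, R):
    """One predict/update step of a scalar Kalman filter for battery SOC.

    Coulomb counting predicts the state; a linearised OCV curve corrects it.
    The OCV model ocv(s) = 3.0 + 0.8*s is an illustrative assumption."""
    ocv = lambda s: 3.0 + 0.8 * s      # assumed linear OCV(SOC) model
    H = 0.8                            # d(OCV)/d(SOC) for that model
    # predict step: coulomb counting plus process noise
    soc_pred = soc - current_a * dt / cap_as
    P_pred = P + Q
    # update step: the Kalman gain weights the OCV correction against R
    K = P_pred * H / (H * P_pred * H + R)
    soc_new = soc_pred + K * (v_meas - ocv(soc_pred))
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new

# With a terminal voltage of 3.40 V (consistent with SOC = 0.5 under the
# assumed OCV model), the filter pulls a wrong initial guess of 0.6 back.
soc, P = 0.6, 0.01
for _ in range(5):
    soc, P = soc_kalman_step(soc, P, current_a=0.0, dt=1.0,
                             v_meas=3.40, cap_as=7200.0, Q=1e-6, R=1e-4)
```

Raising R weakens the OCV correction, which is exactly the lever the paper's fuzzy model adjusts when the equivalent-circuit model error is expected to be large.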
A Modified Extended Bayesian Method for Parameter Estimation
无
2007-01-01
This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the a priori estimation is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be automatically updated during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, this new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
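The key idea above, taking the prior mean from the previous iterate so the Tikhonov-style penalty recenters itself each step, can be illustrated in the simplest scalar case, where each iteration has a closed form. The data below are invented for illustration:

```python
def bayes_tikhonov_iterate(a, b, lam, x0=0.0, iters=20):
    """Scalar illustration of the modified extended Bayesian update:
    each step solves min ||a*x - b||^2 + lam*(x - x_prev)^2, with the
    prior mean taken from the previous iterate (closed form for a scalar)."""
    ata = sum(ai * ai for ai in a)
    atb = sum(ai * bi for ai, bi in zip(a, b))
    x = x0
    for _ in range(iters):
        x = (atb + lam * x) / (ata + lam)
    return x

# Noisy observations b ~= 2.0 * a (illustrative data).
a = [1.0, 2.0, 3.0, 4.0]
b = [2.1, 3.9, 6.2, 7.8]
x_hat = bayes_tikhonov_iterate(a, b, lam=5.0)
```

Each step is regularized (stable even for ill-conditioned data), yet the fixed point of the iteration is the ordinary least-squares solution, which mirrors the paper's point that the covariance recentring avoids picking an empirical prior parameter by hand.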
A new method for computing the reliability of consecutive k-out-of-n:F systems
Gökdere Gökhan
2016-01-01
Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code based on the proposed method to compute the reliability of linear and circular systems with a great number of components.
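The paper's own method and R code are not reproduced here, but the defining property of a consecutive k-out-of-n:F system (failure iff at least k consecutive components fail) can be checked by direct enumeration for small n, which is useful as a reference implementation:

```python
from itertools import product

def consecutive_kofn_F_reliability(n, k, p, circular=False):
    """Reliability of a consecutive k-out-of-n:F system by direct
    enumeration over all 2^n component states (small n only).

    p -- reliability of each i.i.d. component
    The system fails iff at least k consecutive components fail."""
    def system_fails(state):          # state[i] == 0 means component i failed
        m = 2 * n if circular else n  # scan twice around for circular systems
        run = 0
        for j in range(m):
            run = run + 1 if state[j % n] == 0 else 0
            if run >= k:
                return True
        return False

    rel = 0.0
    for state in product((0, 1), repeat=n):
        prob = 1.0
        for s in state:
            prob *= p if s else (1 - p)
        if not system_fails(state):
            rel += prob
    return rel

R_lin = consecutive_kofn_F_reliability(n=5, k=2, p=0.9)
R_circ = consecutive_kofn_F_reliability(n=5, k=2, p=0.9, circular=True)
```

The circular system is never more reliable than the linear one, since wrapping around adds failure configurations; efficient methods like the paper's matter because this enumeration is exponential in n.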
A single-loop deterministic method for reliability-based design optimization
Li, Fan; Wu, Teresa; Badiru, Adedeji; Hu, Mengqi; Soni, Som
2013-04-01
Reliability-based design optimization (RBDO) is a technique used for engineering design when uncertainty is being considered. A typical RBDO problem can be formulated as a stochastic optimization model where the performance of a system is optimized and the reliability requirements are treated as constraints. One major challenge of RBDO research has been the prohibitive computational expenses. In this research, a new approximation approach, termed the single-loop deterministic method for RBDO (SLDM_RBDO), is proposed to reduce the computational effort of RBDO without sacrificing much accuracy. Based on the first order reliability method, the SLDM_RBDO method converts the probabilistic constraints to approximate deterministic constraints so that the RBDO problems can be transformed to deterministic optimization problems in one step. Three comparison experiments are conducted to show the performance of the SLDM_RBDO. In addition, a reliable forearm crutch design is studied to demonstrate the applicability of SLDM_RBDO to a real industry case.
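For the special case of a linear limit state with independent normal variables, the first-order reliability method underlying SLDM_RBDO is exact: β = E[g]/Std[g] and Pf = Φ(−β). A minimal sketch with an illustrative resistance-minus-load margin (numbers are not from the paper):

```python
from math import erf, sqrt

def form_linear_pf(a0, a, mu, sigma):
    """FORM for a linear limit state g(x) = a0 + a . x with independent
    normal x (exact in this case): beta = E[g]/Std[g], Pf = Phi(-beta)."""
    mean_g = a0 + sum(ai * mi for ai, mi in zip(a, mu))
    std_g = sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))
    beta = mean_g / std_g
    pf = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))   # Phi(-beta)
    return beta, pf

# Illustrative margin g = R - S with R ~ N(5, 0.8) and S ~ N(3, 0.6).
beta, pf = form_linear_pf(0.0, [1.0, -1.0], [5.0, 3.0], [0.8, 0.6])
```

For nonlinear limit states, FORM linearizes at the most probable point; converting the resulting probabilistic constraints into approximate deterministic ones in a single step is the approximation SLDM_RBDO makes to avoid nested reliability loops.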
Parent, Maxim; Niezgoda, Helen; Keller, Heather H; Chambers, Larry W; Daly, Shauna
2012-10-01
A variety of methods are available for assessing diet; however, many are impractical for large research studies in an institutional environment. Technology, specifically digital imaging, can make diet estimations more feasible for research. Our goal was to compare a digital imaging method of estimating regular and modified-texture main plate food waste with traditional on-site visual estimations, in a continuing and long-term care setting using a meal-tray delivery service. Food waste was estimated for participants on regular (n=36) and modified-texture (n=42) diets. A tracking system to ensure collection and digital imaging of all main meal plates was developed. Four observers used a modified Comstock method to assess food waste for vegetables, starches, and main courses on 551 main meal plates. Intermodal, inter-rater, and intra-rater reliability were calculated using intraclass correlation for absolute agreement. Intermodal reliability was based on one rater's assessments. The digital imaging method results were in high agreement with the real-time visual method for both regular and modified-texture food (intraclass correlation=0.90 and 0.88, respectively). Agreements between observers for regular diets were higher than those for modified-texture food (range=0.91 to 0.94; 0.82 to 0.91, respectively). Intra-rater agreements were very high for both regular and modified-texture food (range=0.93 to 0.99; 0.91 to 0.98). The digital imaging method is a reliable alternative to estimating regular and modified-texture food waste for main meal plates when compared with real-time visual estimation. Color, shape, reheating, mixing, and use of sauces made modified-texture food waste slightly more difficult to estimate, regardless of estimation method. Copyright © 2012 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
A Framework for Estimating Piping Reliability Subject to Corrosion Under Insulation
Mokhtar Ainul Akmar
2014-07-01
Corrosion under insulation (CUI) is one of the serious damage mechanisms experienced by insulated piping systems. Optimizing the inspection schedule for insulated piping systems is a major challenge faced by inspection and corrosion engineers, since CUI takes place beneath the insulation, which makes detection and prediction of the damage mechanism harder. In recent years, the risk-based inspection (RBI) approach has been adopted to optimize CUI inspection plans. The RBI approach is based on risk, the product of the likelihood of a failure and the consequence of that failure. The likelihood analysis usually follows either qualitative or semi-qualitative methods, thus precluding its use for quantitative risk assessment. This paper presents a framework for quantitatively estimating the likelihood of failure due to CUI based on the type of data available.
Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia
2017-10-01
A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
ICA Model Order Estimation Using Clustering Method
P. Sovka
2007-12-01
In this paper a novel approach for independent component analysis (ICA) model order estimation of movement electroencephalogram (EEG) signals is described. The application is targeted at brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). Selecting only the movement-related ICs might increase the BCI EEG classification score. The real number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) to estimate the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring the higher-order signal statistics. In this work, we use another approach: selection of highly correlated ICs from several ICA runs. The ICA model order estimation is done at significance level α = 0.05, and the model order is more or less dependent on the ICA algorithm and its parameters.
Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C
2014-09-01
In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly if teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, non-experts and the Demonstrator of a developed software on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered slightly less accurate MAE (1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of pedo-pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted on images.
Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.
2016-07-01
This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.
Senanayake, Chathuri; Senanayake, S M N Arosha
2011-10-01
In this paper, a gait event detection algorithm is presented that uses computer intelligence (fuzzy logic) to identify seven gait phases in walking gait. Two inertial measurement units and four force-sensitive resistors were used to obtain knee angle and foot pressure patterns, respectively. Fuzzy logic is used to address the complexity in distinguishing gait phases based on discrete events. A novel application of the seven-dimensional vector analysis method to estimate the amount of abnormality detected was also investigated based on the two gait parameters. Experiments were carried out to validate the application of the two proposed algorithms to provide accurate feedback in rehabilitation. The algorithm responses were tested for two cases, normal and abnormal gait. The large amount of data required for reliable gait-phase detection necessitates the utilisation of computer methods to store and manage the data. Therefore, a database management system and an interactive graphical user interface were developed for the utilisation of the overall system in a clinical environment.
Risk-assessed structural positions in a pressurized fuselage of a transport-type aircraft designed for damage tolerance are taken up as the subject of discussion. A small number of data obtained from inspections of these positions was used to discuss a Bayesian reliability analysis that can also estimate a proper non-periodic inspection schedule while estimating proper values for uncertain factors. As a result, the time period for the generation of fatigue cracks was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that are thought reasonable, and a proper inspection schedule using these values, despite placing the fatigue crack growth expression in a very simple form and treating both factors as uncertain. Thus, the effectiveness of the present analysis method was verified. This study also discusses the structural positions, the modeling of fatigue cracks generated and developing at these positions, conditions for failure, damage factors, and the capability of the inspection from different viewpoints. This reliability analysis method is thought to be effective for other structures as well, such as offshore structures. 18 refs., 8 figs., 1 tab.
1996-12-31
Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.
Michael O. Harris-Love
2016-02-01
Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimation of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity, expressed as grayscale values, using the RMT and FHT methods to select the post-image-acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97–.99, p < .001). The mean difference between the echogenicity estimates obtained with the RMT and FHT methods was .87 grayscale levels (95% CI [.54–1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection
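The echogenicity measure at the heart of the study above is simply the mean grayscale level of the pixels inside the selected ROI. A minimal sketch of the rectangular (RMT-style) case, using a toy 3x3 "image" rather than ultrasound data:

```python
def roi_mean_grayscale(image, top, left, height, width):
    """Mean grayscale level inside a rectangular ROI (RMT-style selection).

    image -- list of pixel rows, values 0-255 (8-bit grayscale)."""
    vals = [image[r][c]
            for r in range(top, top + height)
            for c in range(left, left + width)]
    return sum(vals) / len(vals)

# Toy 3x3 grayscale "image"; the ROI covers rows 0-1, columns 1-2.
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
m = roi_mean_grayscale(img, top=0, left=1, height=2, width=2)
```

A free-hand (FHT-style) ROI differs only in that the pixel set comes from an arbitrary traced polygon rather than a rectangle; the histogram statistic is computed the same way.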
Development of a Method for Quantifying the Reliability of Nuclear Safety-Related Software
Yi Zhang; Michael W. Golay
2003-10-01
The work of our project is intended to help introduce digital technologies into nuclear power plant safety-related software applications. In our project we utilize a combination of modern software engineering methods: design process discipline and feedback, formal methods, automated computer-aided software engineering tools, automatic code generation, and extensive feasible structure flow path testing to improve software quality. The tactics include ensuring that the software structure is kept simple, permitting routine testing during design development, permitting extensive finished-product testing in the input data space of most likely service, and using test-based Bayesian updating to estimate the probability that a random software input will encounter an error upon execution. From the results obtained, the software reliability can be both improved and its value estimated. Hopefully our success in the project's work can aid the transition of the nuclear enterprise into the modern information world. In our work, we have been using the proprietary sample software, the digital Signal Validation Algorithm (SVA), provided by Westinghouse, and our work is being done with their collaboration. The SVA software is used for selecting the plant instrumentation signal set which is to be used as the input to the digital Plant Protection System (PPS). This is the system that automatically decides whether to trip the reactor. In our work, we are using the 001 computer-assisted software engineering (CASE) tool of Hamilton Technologies Inc. This tool is capable of stating the syntactic structure of a program, reflecting its state requirements, logical functions and data structure.
Quantum Estimation Methods for Quantum Illumination.
Sanz, M; Las Heras, U; García-Ripoll, J J; Solano, E; Di Candia, R
2017-02-17
Quantum illumination consists in shining quantum light on a target region immersed in a bright thermal bath with the aim of detecting the presence of a possible low-reflective object. If the signal is entangled with the receiver, then a suitable choice of the measurement offers a gain with respect to the optimal classical protocol employing coherent states. Here, we tackle this detection problem by using quantum estimation techniques to measure the reflectivity parameter of the object, showing an enhancement in the signal-to-noise ratio up to 3 dB with respect to the classical case when implementing only local measurements. Our approach employs the quantum Fisher information to provide an upper bound for the error probability, supplies the concrete estimator saturating the bound, and extends the quantum illumination protocol to non-Gaussian states. As an example, we show how Schrödinger's cat states may be used for quantum illumination.
Improved Estimation of Subsurface Magnetic Properties using Minimum Mean-Square Error Methods
Saether, Bjoern
1997-12-31
This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior information, i.e., the geometries of the rock bodies and their susceptibilities. Uncertainties may be included into the estimation process. The computation exploits the subtle information inherent in magnetic data sets in an optimal way in order to tune the initial susceptibility model. The MMSE method includes a statistical framework that allows the computation not only of the estimated susceptibilities, given by the magnetic measurements, but also of the associated reliabilities of these estimations. This allows the evaluation of the reliabilities in the estimates before any measurements are made, an option, which can be useful for survey planning. The MMSE method has been tested on a synthetic data set in order to compare the effects of various prior information. When more information is given as input to the estimation, the estimated models come closer to the true model, and the reliabilities in their estimates are increased. In addition, the method was evaluated using a real geological model from a North Sea oil field, based on seismic data and well information, including susceptibilities. Given that the geometrical model is correct, the observed mismatch between the forward calculated magnetic anomalies and the measured anomalies causes changes in the susceptibility model, which may show features of interesting geological significance to the explorationists. Such magnetic anomalies may be due to small fractures and faults not detectable on seismic, or local geochemical changes due to the upward migration of water or hydrocarbons. 76 refs., 42 figs., 18 tabs.
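The core of MMSE estimation — combining a prior model with a measurement and reporting the reliability of the result — can be illustrated in the scalar linear-Gaussian case. This is a minimal sketch with invented numbers, not the thesis' full constrained multivariate formulation:

```python
def mmse_update(prior_mean, prior_var, obs, obs_var):
    """Scalar linear-Gaussian MMSE update: blend a prior model value
    with one measurement; posterior variance quantifies reliability."""
    gain = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Invented susceptibility prior and magnetic-data-derived observation
mean, var = mmse_update(prior_mean=0.02, prior_var=1e-4,
                        obs=0.026, obs_var=4e-4)
print(mean, var)
```

Note that `post_var` depends only on the prior and measurement variances, not on the observed value — which is why, as the abstract notes, the reliabilities of the estimates can be evaluated before any measurements are made, an option useful for survey planning.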
Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard
2014-01-01
Wave energy power plants are expected to become one of the major future contributors to sustainable electricity production. Optimal design of wave energy power plants is associated with modeling of physical, statistical, measurement and model uncertainties. This paper presents stochastic models… The stochastic model for extreme value estimation covers annual extreme value distributions and the statistical uncertainty due to the limited amount of available data. Furthermore, updating based on newly available data is explained based on a Bayesian approach. The statistical uncertainties are estimated based… on the Maximum-Likelihood method, and the extreme value estimation uses the peaks-over-threshold (POT) method. Two generic examples of reliability assessments for failure due to fatigue and extreme…
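The peaks-over-threshold step mentioned above can be sketched as follows. For simplicity this toy example fits an exponential excess model by maximum likelihood rather than the full generalized Pareto distribution usually used in POT analyses; the load values are invented:

```python
def pot_exceedances(series, threshold):
    """Peaks-over-threshold: keep the excesses above the threshold."""
    return [x - threshold for x in series if x > threshold]

def exp_excess_mle(excesses):
    """MLE scale of an exponential excess model, a simple special case
    of the generalized Pareto tail usually fitted in POT analyses."""
    return sum(excesses) / len(excesses)

loads = [1.2, 3.4, 0.8, 2.9, 5.1, 1.7, 4.2, 0.5, 3.8, 2.2]  # invented
exc = pot_exceedances(loads, threshold=2.5)
print(len(exc), exp_excess_mle(exc))
```

The threshold choice is the usual bias-variance trade-off: too low and the excess model is wrong, too high and too few peaks remain — the "limited amount of available data" uncertainty the abstract refers to.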
Methods for estimating production and utilization of paper birch saplings
US Fish and Wildlife Service, Department of the Interior — Development of technique to estimate browse production and utilization. Developed a set of methods for estimating annual production and utilization of paper birch...
Enhancing Use Case Points Estimation Method Using Soft Computing Techniques
Nassif, Ali Bou; Capretz, Luiz Fernando; Ho, Danny
2016-01-01
Software estimation is a crucial task in software engineering. Software estimation encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle when the details of software have not been revealed yet. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of the important metric inputs and consequently,...
Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao… estimation. Moreover, simulations on real-life data indicate that the NLS and aNLS methods are applicable even when reverberation is present and the noise is not white Gaussian…
Reliability/Risk Methods and Design Tools for Application in Space Programs
Townsend, John S.; Smart, Christian
1999-01-01
Since 1984 NASA has funded several major programs to develop reliability/risk methods and tools for engineers to apply in the design and assessment of aerospace hardware. Two probabilistic software tools that show great promise for practical application are the finite element code NESSUS and the system risk analysis code QRAS. This paper examines NASA's past, present, and future directions in reliability and risk engineering applications. Both the NESSUS and QRAS software tools are detailed.
Goldie, John; Schwartz, Lisa; McConnachie, Alex; Jolly, Brian; Morrison, Jillian
2004-12-01
Although ethics is an important part of modern curricula, measures of students' ethical disposition have not been easy to develop. A potential method is to assess students' written justifications for selecting one option from a preset range of answers to vignettes and compare these justifications with predetermined 'expert' consensus. We describe the development of and reliability estimation for such a method -- the Ethics in Health Care Instrument (EHCI). Seven raters classified the responses of ten subjects to nine vignettes, on two occasions. The first stage of analysis involved raters' judging how consistent with consensus were subjects' justifications using generalizability theory, and then rating consensus responses on the action justification and values recognition hierarchies. The inter-rater reliability was 0.39 for the initial rating. Differential performance on questions was identified as the largest source of variance. Hence reliability was investigated also for students' total scores over the nine consensus vignettes. Rater effects were the largest source of variance identified. Examination of rater performance showed lack of rater consistency. D-studies were performed which showed acceptable reliability could nevertheless be obtained using four raters per EHCI. This study suggests that the EHCI has potential as an assessment instrument although further testing is required of all components of the methodology.
A fusion method for estimate of trajectory
吴翊; 朱炬波
1999-01-01
The multiple-station method is important in missile and space tracking systems. A fusion method is presented. Based on the theory of multiple tracking, and starting with an investigation of the precision of location by a single station, a recognition model for occasional system error is constructed, and a principle for preventing pollution by occasional system error is presented. Theoretical analysis and simulation results prove the proposed method correct.
Ashot Davtian
2011-05-01
Two methods for the estimation of the number per unit volume (NV) of spherical particles are discussed: the (physical) disector (Sterio, 1984) and Saltykov's estimator (Saltykov, 1950; Fullman, 1953). A modification of Saltykov's estimator is proposed which reduces the variance. Formulae for bias and variance are given for both the disector and the improved Saltykov estimator for the case of randomly positioned particles. They enable the comparison of the two estimators with respect to their precision in terms of mean squared error.
Su, G; Guldbrandtsen, B; Gregersen, V R
2010-01-01
were available. In the analysis, all SNP were fitted simultaneously as random effects in a Bayesian variable selection model, which allows heterogeneous variances for different SNP markers. The response variables were the official EBV. Direct GEBV were calculated as the sum of individual SNP effects… for all 18 index traits. Reliability of GEBV was assessed by squared correlation between GEBV and conventional EBV (r2GEBV,EBV), and expected reliability was obtained from prediction error variance using a 5-fold cross validation. Squared correlations between GEBV and published EBV (without any… that genomic selection can greatly improve the accuracy of preselection for young bulls compared with traditional selection based on parent average information.
Traveling-wave tube reliability estimates, life tests, and space flight experience
Lalli, V. R.; Speck, C. E.
1977-01-01
Infant mortality, useful life, and wearout phase of twt life are considered. The performance of existing developmental tubes, flight experience, and sequential hardware testing are evaluated. The reliability history of twt's in space applications is documented by considering: (1) the generic parts of the tube in light of the manner in which their design and operation affect the ultimate reliability of the device, (2) the flight experience of medium power tubes, and (3) the available life test data for existing space-qualified twt's in addition to those of high power devices.
Testing a statistical method of global mean paleotemperature estimation in a long climate simulation
Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik
2001-07-01
Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods the global mean temperature of the last 600 years has been recently estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing allows estimation of the errors of the reconstructions as a function of the number of proxy records and of the time scales at which the estimations are probably reliable. (orig.)
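The calibrate-then-reconstruct procedure described here can be sketched with ordinary least squares on toy numbers. The proxy and temperature values below are invented; real reconstructions use many proxy series and considerably more careful statistics:

```python
def fit_line(x, y):
    """Ordinary least squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Calibrate on the "instrumental period" (invented numbers), then
# apply the fit to the longer proxy record to reconstruct temperature
proxy_cal = [1.0, 1.5, 2.0, 2.5]
temp_cal = [13.9, 14.1, 14.4, 14.6]
slope, intercept = fit_line(proxy_cal, temp_cal)
reconstruction = [slope * p + intercept for p in [0.8, 1.2, 2.4]]
print(reconstruction)
```

Testing such a model inside a long climate simulation, as the paper does, amounts to comparing `reconstruction` against the simulation's known "true" temperatures outside the calibration window.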
ROLE OF STATISTICAL VIS-A-VIS PHYSICS-OF-FAILURE METHODS IN RELIABILITY ENGINEERING
P.V.Varde
2009-01-01
Traditionally the statistical, or more specifically probabilistic, methods form the basic framework for assessing the reliability characteristics of components. However, the recent trend for predicting the reliability or life of a component involves application of physics-of-failure methods. This rather new approach is finding wider application as it is based on basic fundamentals of science and thereby provides an improved framework to understand the failure mechanism. Since accelerated testing of components forms part of this approach, the prediction of time-to-failure of the components is more accurate compared to the existing methods, which depend only on historical data and its evaluation using probabilistic methods. The new approach is all the more relevant when it comes to assessment of the reliability of new components, as the traditional probabilistic approach is not adequate to predict the reliability of new components since it depends on historical data. In view of the above, this paper investigates the role of the statistical or probabilistic approach and the physics-of-failure approach for reliability assessment of engineering components in general and electronic components in particular.
Kiedrowicz Maciej
2016-01-01
The deliberations presented in this study refer to the method for assessing software reliability of the document management system using RFID technology. A method for determining the reliability structure of the discussed software, understood as the index vector for assessing reliability of its components, was proposed. The model of the analyzed software is the control transfer graph, in which the probability of activating individual components during the system's operation results from the so-called operational profile, which characterizes the actual working environment. The reliability structure is established as a result of the solution of a specific mathematical software task. The knowledge of the reliability structure of the software makes it possible to properly plan the time and financial expenses necessary to build software which would meet the reliability requirements. The application of the presented method is illustrated by a numerical example, corresponding to the software reality of the RFID document management system.
Gabre, P; Martinsson, T; Gahnberg, L
1999-08-01
The aim of the present study was to evaluate whether estimation of lactobacilli was possible with simplified saliva sampling methods. Dentocult LB (Orion Diagnostica AB, Trosa, Sweden) was used to estimate the number of lactobacilli in saliva sampled by 3 different methods from 96 individuals: (i) collecting and pouring stimulated saliva over a Dentocult dip-slide; (ii) direct licking of the Dentocult LB dip-slide; (iii) contaminating a wooden spatula with saliva and pressing it against the Dentocult dip-slide. The first method was in accordance with the manufacturer's instructions and selected as the 'gold standard'; the other 2 methods were compared with this result. The 2 simplified methods for estimating levels of lactobacilli in saliva showed good reliability and specificity. Sensitivity, defined as the ability to detect individuals with a high number of lactobacilli in saliva, was sufficient for the licking method (85%), but significantly reduced for the wooden spatula method (52%).
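The sensitivity and specificity figures quoted above come from the standard confusion-matrix ratios against the gold-standard method. A minimal sketch with hypothetical counts (not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts versus the "gold standard" method (not the
# study's raw data): 20 high-lactobacilli subjects, 76 others
sens, spec = sensitivity_specificity(tp=17, fn=3, tn=70, fp=6)
print(sens, spec)
```

With these invented counts the licking-style method detects 17 of 20 high-count individuals (85% sensitivity), matching the order of magnitude reported in the abstract.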
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components has been proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
GAO Hai-feng; BAI Guang-chen
2015-01-01
To improve reliability analysis efficiency for aeroengine components, such as compressor blades, a support vector machine response surface method (SRSM) is proposed. SRSM integrates the advantages of the support vector machine (SVM) and the traditional response surface method (RSM), and utilizes experimental samples to construct a suitable response surface function (RSF) to replace the complicated and abstract finite element model. Moreover, the randomness of material parameters, structural dimensions and operating conditions is considered during data extraction so that the response surface function agrees better with the practical model. The results indicate that, based on the same experimental data, SRSM comes closer than RSM to the reliability approximated by the Monte Carlo method (MCM), while SRSM (17.296 s) needs far less running time than MCM (10958 s) and RSM (9840 s). Therefore, under the same simulation conditions, SRSM has the highest analysis efficiency and can be considered a feasible and valid method to analyze structural reliability.
R&D program benefits estimation: DOE Office of Electricity Delivery and Energy Reliability
None, None
2006-12-04
The overall mission of the U.S. Department of Energy’s Office of Electricity Delivery and Energy Reliability (OE) is to lead national efforts to modernize the electric grid, enhance the security and reliability of the energy infrastructure, and facilitate recovery from disruptions to the energy supply. In support of this mission, OE conducts a portfolio of research and development (R&D) activities to advance technologies to enhance electric power delivery. Multiple benefits are anticipated to result from the deployment of these technologies, including higher quality and more reliable power, energy savings, and lower cost electricity. In addition, OE engages State and local government decision-makers and the private sector to address issues related to the reliability and security of the grid, including responding to national emergencies that affect energy delivery. The OE R&D activities are comprised of four R&D lines: High Temperature Superconductivity (HTS), Visualization and Controls (V&C), Energy Storage and Power Electronics (ES&PE), and Distributed Systems Integration (DSI).
Wilson, Celia M.
2010-01-01
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…
Ong, M M; Kihara, R; Zentler, J M; Kreitzer, B R; DeHope, W J
2007-06-27
At Lawrence Livermore National Laboratory (LLNL), our flash X-ray accelerator (FXR) is used on multi-million dollar hydrodynamic experiments. Because of the importance of the radiographs, FXR must be ultra-reliable. Flash linear accelerators that can generate a 3 kA beam at 18 MeV are very complex. They have thousands, if not millions, of critical components that could prevent the machine from performing correctly. For the last five years, we have quantified and are tracking component failures. From this data, we have determined that the reliability of the high-voltage gas-switches that initiate the pulses, which drive the accelerator cells, dominates the statistics. The failure mode is a single-switch pre-fire that reduces the energy of the beam and degrades the X-ray spot-size. The unfortunate result is a lower resolution radiograph. FXR is a production machine that allows only a modest number of pulses for testing. Therefore, reliability switch testing that requires thousands of shots is performed on our test stand. Study of representative switches has produced pre-fire statistical information and probability distribution curves. This information is applied to FXR to develop test procedures and determine individual switch reliability using a minimal number of accelerator pulses.
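The link between per-switch pre-fire statistics from a test stand and machine-level reliability can be sketched as follows, assuming independent, identically behaving switches. The counts and the number of switches below are invented for illustration, not FXR's actual figures:

```python
def prefire_estimate(prefires, pulses):
    """Point estimate of the per-switch, per-pulse pre-fire probability."""
    return prefires / pulses

def machine_prefire_prob(p_single, n_switches):
    """Probability that at least one of n independent, identical
    switches pre-fires on a given accelerator pulse: 1 - (1 - p)^n."""
    return 1 - (1 - p_single) ** n_switches

# Invented test-stand counts and switch count, for illustration only
p = prefire_estimate(prefires=3, pulses=6000)
risk = machine_prefire_prob(p, n_switches=48)
print(risk)
```

Even a small per-switch pre-fire probability compounds across many switches, which is why dedicated test-stand campaigns with thousands of shots are needed to pin down the single-switch rate before extrapolating to the accelerator.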
Clarifying the Blurred Image: Estimating the Inter-Rater Reliability of Performance Assessments.
Moore, Alan D.; Young, Suzanne
As schools move toward performance assessment, there is increasing discussion of using these assessments for accountability purposes. When used for making decisions, performance assessments must meet high standards of validity and reliability. One major source of unreliability in performance assessments is interrater disagreement. In this paper,…
Tong-chun LI; Li, Dan-Dan; Wang, Zhi-Qiang
2010-01-01
In this paper, the limit state equation of tensile reliability of the foundation base of a gravity dam is established. The possible crack length is set as the action effect and the allowable crack length is set as the resistance in this limit state. Nonlinear FEM is applied to obtain the crack length of the foundation base of the gravity dam, and a linear response surface method based on the orthogonal test design method is used to calculate the reliability, which offers a reasonable and simple analysis method t...
Giovanni Francesco Spatola
2015-04-01
The use of image analysis methods has allowed us to obtain more reliable and reproducible immunohistochemistry (IHC) results. Wider use of such approaches and simplification of software allowing a colorimetric study has meant that these methods are available to everyone, and made it possible to standardize the technique by a reliable scoring system. Moreover, the recent introduction of multispectral image acquisition systems has further refined these techniques, minimizing artefacts and easing the evaluation of the data by the observer.
PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...
2014-12-31
The analysis revealed that the MLM was the most accurate model… obtained using the empirical method as the same formula is used…
Almeida, Mariana R; Fidelis, Carlos H V; Barata, Lauro E S; Poppi, Ronei J
2013-12-15
The Amazon tree Aniba rosaeodora Ducke (rosewood) provides an essential oil valuable for the perfume industry, but after decades of predatory extraction it is at risk of extinction. The extraction of the essential oil from wood implies the cutting of the tree, so the study of oil extracted from the leaves is important as a sustainable alternative. The goal of this study was to test the applicability of Raman spectroscopy and Partial Least Squares Discriminant Analysis (PLS-DA) as means to classify the essential oil extracted from different parts (wood, leaves and branches) of the Brazilian tree A. rosaeodora. For the development of classification models, the Raman spectra were split into two sets: training and test. The value of the limit that separates the classes was calculated based on the distribution of the training samples. This value was calculated such that the classes are divided with the lowest probability of incorrect classification for future estimates. The best model presented sensitivity and specificity of 100%, and predictive accuracy and efficiency of 100%. These results give an overall vision of the behavior of the model, but do not give information about individual samples; in this case, the confidence interval for each sample's classification was also calculated using the resampling bootstrap technique. The methodology developed has the potential to be an alternative to standard procedures used for oil analysis, and it can be employed as a screening method, since it is fast, non-destructive and robust.
Portfolio Optimization Based on Nonparametric Estimation Methods
Mahsa Ghandehari
2017-03-01
One of the major issues investors face in capital markets is decision making about selecting an appropriate stock exchange for investing and selecting an optimal portfolio. This process is done through risk and expected return assessment. On the other hand, in the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as a risk measure. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, with the introduction of conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange during the winter of 1392, considered from April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
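The nonparametric (historical) CVaR used as the risk measure can be sketched directly from an empirical return sample: sort the losses and average the worst α fraction. The returns below are invented, not the Tehran Stock Exchange data, and a real portfolio optimizer would minimize this quantity over portfolio weights:

```python
def historical_cvar(returns, alpha=0.1):
    """Nonparametric CVaR: mean loss over the worst alpha fraction
    of the observed (historical) returns."""
    losses = sorted((-r for r in returns), reverse=True)
    k = max(1, int(len(losses) * alpha))
    return sum(losses[:k]) / k

# Invented monthly returns for illustration only
returns = [0.02, -0.05, 0.01, 0.03, -0.08, 0.04, 0.00, -0.01, 0.05, -0.03,
           0.02, 0.01, -0.06, 0.03, 0.02, -0.02, 0.04, 0.01, -0.04, 0.02]
cvar = historical_cvar(returns, alpha=0.1)
print(cvar)
```

Unlike variance, this tail average makes no normality assumption, which is the paper's motivation for the nonparametric framework.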
Advancing Methods for Estimating Cropland Area
King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.
2014-12-01
Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1-to-1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype and has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.
The Reliability of the Symax Method of Measuring the Radiographic Femoral Varus Angle
Allpass, Maja; Miles, James Edward; Schmökel, Hugo
2014-01-01
Objective: To determine the practicability of curved osteotomy to correct femoral varus in small breed dogs, and to assess the reliability of the Symax method of measuring the radiographic femoral varus angle (FVA). Methods: Eleven cadaveric femora plus one clinical case were included in this stu...
The comparability and reliability of five health-state valuation methods
Krabbe, P F; Essink-Bot, M L; Bonsel, G J
1997-01-01
The objective of the study was to consider five methods for valuing health states with respect to their comparability (convergent validity, value functions) and reliability. Valuation tasks were performed by 104 student volunteers using five frequently used valuation methods: standard gamble (SG),
Thermodynamic properties of organic compounds estimation methods, principles and practice
Janz, George J
1967-01-01
Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the
Peng Gao
2014-01-01
It is necessary to develop dynamic reliability models when considering strength degradation of mechanical components. The instant probability density function (IPDF) of stress and the process probability density function (PPDF) of stress, which are obtained via different statistical methods, are defined, respectively. In practical engineering, the probability density function (PDF) for the usage of mechanical components is mostly the PPDF, such as the PDF acquired via the rain flow counting method. For convenience of application, the IPDF is often approximated by the PPDF when using the existing dynamic reliability models. However, this may cause errors in the reliability calculation due to the approximation of the IPDF by the PPDF. Therefore, dynamic reliability models directly based on the PPDF of stress are developed in this paper. Furthermore, the proposed models can be used for reliability assessment in the case of a small amount of stress process samples by employing fuzzy set theory. In addition, the mechanical components in the solar arrays of satellites are chosen as representative examples to illustrate the proposed models. The results show that errors are caused by the approximation of the IPDF by the PPDF and that the proposed models are accurate in the reliability computation.
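The reliability quantities discussed here ultimately reduce to stress-strength probabilities. As a hedged illustration — a static normal stress-strength model with invented parameters, not the paper's PPDF-based dynamic models — a Monte Carlo estimate of R = P(strength > stress) looks like:

```python
import random

def stress_strength_reliability(strength_mean, strength_sd,
                                stress_mean, stress_sd,
                                n=100_000, seed=1):
    """Monte Carlo estimate of R = P(strength > stress) for a static
    normal stress-strength model (no degradation; toy parameters)."""
    rng = random.Random(seed)
    hits = sum(
        rng.gauss(strength_mean, strength_sd) > rng.gauss(stress_mean, stress_sd)
        for _ in range(n)
    )
    return hits / n

r = stress_strength_reliability(600, 40, 450, 50)
print(r)
```

In the dynamic setting the paper addresses, the strength distribution would degrade over time and the stress PDF would be a PPDF estimated from the stress process (e.g. via rain flow counting), rather than the fixed normals assumed here.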
DG Placement with Considering Reliability Improvement and Power Loss Reduction with GA Method
Mohammad Mohammadi
2011-08-01
Distributed generators (DGs) are effective in reducing losses compared to other loss-reduction methods, and can also be deployed to improve system reliability. In this study, optimal DG unit placement using a genetic algorithm (GA) is discussed. The optimal size of the DG unit is calculated analytically, and suitable nodes for DG placement are determined using approximate reasoning. Reliability and power-loss-reduction indices of distribution system nodes are modeled, and a GA containing a set of rules is used to determine the DG unit placement: DG units are placed at the nodes with the highest suitability index. Simulation results show the advantage of optimal DG unit placement. Compared to other techniques, it yields not only a substantial reduction in power loss but also a marked improvement in reliability.
Service Priority based Reliable Routing Path Select Method in Smart Grid Communication Network
Kaixuan Wang
2012-11-01
The new challenges and schemes of the Smart Grid require highly reliable transmission technologies to support various types of electrical services and applications. This paper concentrates on the degree of importance of services and allocates the more important services to the more reliable network routing paths, so that key instructions are delivered dependably in Smart Grid communication networks. A Pareto probability distribution is used to weight the reliability of each IP-based router path. To define the relationship between a service and the reliability of a router path, we devise a mapping and present an optimization function; an optimization method is then used to find the value that best matches the objective function. Finally, we validate the proposed algorithms by experiments. The simulation results show that the proposed algorithm outperforms random routing algorithms.
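The core idea of assigning more important services to more reliable paths can be sketched as a simple greedy mapping. The service names, priority values, and path reliabilities below are hypothetical placeholders, not from the paper:

```python
# Hypothetical Smart Grid services (higher number = more important)
services = {"protection_trip": 3, "metering": 1, "billing": 2}
# Hypothetical end-to-end reliabilities of candidate routing paths
path_reliability = {"pathA": 0.999, "pathB": 0.97, "pathC": 0.91}

def assign(services, paths):
    """Greedy mapping: most important service -> most reliable path."""
    ranked_services = sorted(services, key=services.get, reverse=True)
    ranked_paths = sorted(paths, key=paths.get, reverse=True)
    return dict(zip(ranked_services, ranked_paths))

print(assign(services, path_reliability))
# {'protection_trip': 'pathA', 'billing': 'pathB', 'metering': 'pathC'}
```

The paper's optimization over a Pareto-weighted reliability model generalizes this ranking; the greedy version only shows the service-to-path matching step.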
Analysis and estimation of risk management methods
Kankhva Vadim Sergeevich
2016-05-01
At the present time risk management is an integral part of state policy in all countries with a developed market economy, and companies dealing with consulting services and the implementation of risk management systems have carved out a niche. Unfortunately, conscious preventive risk management in Russia is still far from being a standardized part of construction company activity, which often leads to scandals and disapproval when projects are implemented unprofessionally. The authors present the results of an investigation into the modern understanding of the existing methodology classification and offer their own concept of a classification matrix of risk management methods. The developed matrix is based on an analysis of each method in the context of incoming and outgoing transformed information, which may include different elements of the risk control stages. The offered approach thus allows analyzing the possibilities of each method.
Sakurai, Kiyoshi; Arakawa, Takuya; Yamamoto, Toshihiro; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-08-01
The estimation accuracy for subcriticality in the 'Indirect Estimation Method for Calculation Error' is expressed in the form ρ_m - ρ_c = K(γ_zc² - γ_zm²). This expression means that the estimation accuracy for subcriticality is proportional to (γ_zc² - γ_zm²), the estimation accuracy of the buckling in the axial direction. The proportionality constant K is calculated, but the influence of the uncertainty of K on the estimation accuracy for subcriticality is smaller than in the case of comparing ρ_m = -K(γ_zm² + B_z²) with the calculated ρ_c. When the values of K were calculated, the estimation accuracy remained adequate. If γ_zc² equals γ_zm², then ρ_c equals ρ_m. The reliability of this method is demonstrated on the basis of results calculated using MCNP 4A for four subcritical cores of TCA. (author)
Optical method of atomic ordering estimation
Prutskij, T. [Instituto de Ciencias, BUAP, Privada 17 Norte, No 3417, col. San Miguel Huyeotlipan, Puebla, Pue. (Mexico); Attolini, G. [IMEM/CNR, Parco Area delle Scienze 37/A - 43010, Parma (Italy); Lantratov, V.; Kalyuzhnyy, N. [Ioffe Physico-Technical Institute, 26 Polytekhnicheskaya, St Petersburg 194021, Russian Federation (Russian Federation)
2013-12-04
It is well known that within metal-organic vapor-phase epitaxy (MOVPE) grown semiconductor III-V ternary alloys atomically ordered regions are spontaneously formed during the epitaxial growth. This ordering leads to bandgap reduction and to valence bands splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs within quaternary semiconductor alloys. While the ordering in ternary alloys is widely studied, for quaternaries there have been only a few detailed experimental studies of it, probably because of the absence of appropriate methods of its detection. Here we propose an optical method to reveal atomic ordering within quaternary alloys by measuring the PL emission polarization.
Residual-based a posteriori error estimation for multipoint flux mixed finite element methods
Du, Shaohong
2015-10-26
A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.
The Monte Carlo Simulation Method for System Reliability and Risk Analysis
Zio, Enrico
2013-01-01
Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems, as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application to realistic system modeling. While many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples are provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies are introduced to convey the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
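As a minimal illustration of the sampling approach the book covers, the sketch below estimates the reliability of a small series-parallel system by Monte Carlo and checks it against the closed-form value. The topology and the per-component failure probability are illustrative assumptions, not from the book:

```python
import random

def system_reliability_mc(p_fail, n=200_000, seed=1):
    """Monte Carlo reliability of a two-branch parallel system,
    each branch two components in series (illustrative topology).

    p_fail: per-component failure probability over the mission time.
    """
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        up = [rng.random() >= p_fail for _ in range(4)]
        branch1 = up[0] and up[1]  # series pair 1
        branch2 = up[2] and up[3]  # series pair 2
        if branch1 or branch2:     # system survives if either branch is up
            ok += 1
    return ok / n

# Analytic check: R = 1 - (1 - (1 - p)^2)^2 = 0.9639 for p = 0.1
print(round(system_reliability_mc(0.1), 3))
```

With 200,000 samples the standard error is below 0.001, so the estimate should agree with the analytic 0.9639 to about two decimal places.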
Method and Application for Reliability Analysis of Measurement Data in Nuclear Power Plant
Yun, Hun; Hwang, Kyeongmo; Lee, Hyoseoung [KEPCO E and C, Seoungnam (Korea, Republic of); Moon, Seungjae [Hanyang University, Seoul (Korea, Republic of)
2015-02-15
Pipe wall-thinning by flow-accelerated corrosion and various types of erosion is significant damage in the secondary system piping of nuclear power plants (NPPs). All NPPs in Korea have management programs to ensure pipe integrity against degradation mechanisms. Ultrasonic testing (UT) is widely used for pipe wall thickness measurement, and numerous UT measurements have been performed during scheduled outages. Wall-thinning rates are determined conservatively according to several evaluation methods developed by the Electric Power Research Institute (EPRI). The issue of reliability caused by measurement error should be considered in the evaluation process. A reliability analysis method for single and multiple measurement data was developed in previous research. This paper describes the application of that reliability analysis method to real measurement data from a scheduled outage and demonstrates its benefits.
Autoregressive Methods for Spectral Estimation from Interferograms.
1986-09-19
Forman/Steele/Vanasse [12] phase filter approach, which approximately removes the linear phase distortion introduced into the interferogram by retardation...band interferogram for the spectrum to be analyzed. The symmetrizing algorithm, based on the Forman/Steele/Vanasse method [12], computes a phase filter from
Novel method for quantitative estimation of biofilms
Syal, Kirtimaan
2017-01-01
Biofilm protects bacteria from stress and hostile environment. Crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, biofilm layer formed at the liquid-air interphase known as pellicle is extremely sensitive to its washin...
Evaluation of Six Methods for Estimating Synonymous and Nonsynonymous Substitution Rates
Zhang Zhang; Jun Yu
2006-01-01
Methods for estimating synonymous and nonsynonymous substitution rates among protein-coding sequences adopt different mutation (substitution) models with subtle yet significant differences, which lead to different estimates of evolutionary information. Little attention has been devoted to comparing methods for obtaining reliable estimates, since the amount of sequence variation within targeted datasets is always unpredictable; to our knowledge, little information is available in the literature about the evaluation of these different methods. In this study, we compared six widely used methods and provide evaluation results using simulated sequences. The results indicate that incorporating sequence features (such as transition/transversion bias and nucleotide/codon frequency bias) into methods can yield better performance. We recommend that conclusions related to or derived from Ka and Ks analyses should not be drawn according to results from only one method.
Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods are used for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for human error estimation and can be applied to any kind of operator action, including severe accident management strategies.
ABCLS method for high-reliability aerospace mechanism with truncated random uncertainties
Peng Wensheng
2015-08-01
Random variables are often truncated in aerospace engineering, and a truncated distribution is more feasible and effective for such variables given the limited samples available. For high-reliability aerospace mechanisms with truncated random variables, a method based on the artificial bee colony (ABC) algorithm and line sampling (LS) is proposed. The artificial bee colony-based line sampling (ABCLS) method presents a multi-constrained optimization model to solve the potential non-convergence problem when calculating the design point (also known as the most probable point, MPP) of a performance function with truncated variables; by implementing the ABC algorithm to search for the MPP in the standard normal space, the optimization efficiency and global searching ability of this method are dramatically increased. When calculating the reliability of an aerospace mechanism with a very small failure probability, the Monte Carlo simulation method requires a very large sample size; the ABCLS method overcomes this drawback. For reliability problems with implicit performance functions, this paper combines ABCLS with the Kriging response surface method, which alleviates the computational burden of calculating the reliability of complex aerospace mechanisms. A numerical example and an engineering example are presented to verify this method and demonstrate its applicability.
System and method for correcting attitude estimation
Josselson, Robert H. (Inventor)
2010-01-01
A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
Harris-Love, Michael O; Seamon, Bryant A; Teixeira, Carla; Ismail, Catheeja
2016-01-01
Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimation of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image-acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound of the 95% CI of the ICC = .97-.99) in both Photoshop and ImageJ. Uniform coefficients of determination (R² = .96-.99) were observed across ROI selection methods and image analysis platforms. Both Photoshop and ImageJ are suitable for the post-acquisition image analysis of tissue echogenicity in older adults.
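The RMT-style measurement reduces to averaging pixel intensities inside a rectangular ROI. A minimal sketch, using a hypothetical 3×3 intensity grid in place of a real ultrasound frame:

```python
def roi_mean_grayscale(image, top, left, height, width):
    """Mean grayscale value within a rectangular ROI (RMT-style selection).

    image: 2D list of 0-255 intensity values (hypothetical stand-in for
    the pixel data of a B-mode ultrasound frame).
    """
    rows = image[top:top + height]
    vals = [v for row in rows for v in row[left:left + width]]
    return sum(vals) / len(vals)

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(roi_mean_grayscale(img, 0, 0, 2, 2))  # (10+20+40+50)/4 = 30.0
```

The freehand (FHT) variant differs only in that the averaged pixel set is bounded by an arbitrary traced contour rather than a rectangle.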
An Intelligent Method for Structural Reliability Analysis Based on Response Surface
桂劲松; 刘红; 康海贵
2004-01-01
As water depth increases, the structural safety and reliability of a system become more important and more challenging, so structural reliability methods must be applied in ocean engineering design, such as offshore platform design. If the performance function is known in a structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is typically used, because it has a very clear train of thought and simple programming. However, the traditional response surface method fits a response surface of quadratic polynomials, whose accuracy is limited because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed, which can be used when the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a response surface of the fuzzy neural network for the whole area is constructed first, and then the structural reliability is calculated by a genetic algorithm. Since all the sample points for training the network come from the whole area, the true limit state surface in the whole area can be fitted. Calculation examples and comparative analysis show that the proposed method is much better than the traditional response surface method of quadratic polynomials: the amount of finite element analysis is greatly reduced, the calculation accuracy is improved, and the true limit state surface is fitted very well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.
Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F
2012-07-07
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aim at robust estimation of the parameters and states of the hemodynamic model using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were reliable only in the hemodynamic state estimates; they were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which in turn was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both state and parameter estimation; hence, PSEM appears to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Equation reliability of soil ingestion estimates in mass-balance soil ingestion studies.
Stanek III, Edward J; Xu, Bo; Calabrese, Edward J
2012-03-01
Exposure to chemicals from ingestion of contaminated soil may be an important pathway with potential health consequences for children. A key parameter used in assessing this exposure is the quantity of soil ingested, with estimates based on four short longitudinal mass-balance soil ingestion studies among children. The estimates use trace elements in the soil with low bioavailability that are minimally present in food. Soil ingestion corresponds to the excess trace element amounts excreted, after subtracting trace element amounts ingested from food and medications, expressed as an equivalent quantity of soil. The short duration of mass-balance studies, different concentrations of trace elements in food and soil, and potential for trace elements to be ingested from other nonsoil, nonfood sources contribute to variability and bias in the estimates. We develop a stochastic model for a soil ingestion estimator based on a trace element that accounts for critical features of the mass-balance equation. Using results from four mass-balance soil ingestion studies, we estimate the accuracy of soil ingestion estimators for different trace elements, and identify subjects where the difference between Al and Si estimates is larger (>3 RMSE) than expected. Such large differences occur in fewer than 12% of subjects in each of the four studies. We recommend the use of such criteria to flag and exclude subjects from soil ingestion analyses. © 2011 Society for Risk Analysis.
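The mass-balance equation at the heart of these studies is simple: soil ingestion equals the excess tracer excreted beyond food intake, divided by the tracer's concentration in soil. A sketch with hypothetical aluminum-tracer numbers (illustrative values only, not from the studies):

```python
def soil_ingestion_mg(excreted_ug, food_ug, soil_conc_ug_per_g):
    """Mass-balance soil ingestion estimate for one trace element.

    excreted_ug: tracer excreted (micrograms/day)
    food_ug: tracer ingested from food and medications (micrograms/day)
    soil_conc_ug_per_g: tracer concentration in soil (micrograms per gram)
    Returns the equivalent soil quantity in mg/day.
    """
    excess = max(excreted_ug - food_ug, 0.0)  # excess attributed to soil
    return excess / soil_conc_ug_per_g * 1000.0  # grams -> milligrams

# Hypothetical Al tracer: 4000 ug excreted, 2500 ug from food,
# soil Al concentration 60,000 ug/g (about 6% by mass)
print(round(soil_ingestion_mg(4000, 2500, 60000), 1))  # 25.0 mg/day
```

Comparing the estimate from one tracer (e.g. Al) against another (e.g. Si) for the same subject is the basis of the >3 RMSE exclusion criterion the authors recommend.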
Bayesian methods to estimate urban growth potential
Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.
2017-01-01
Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.
Prescott Gordon J
2007-05-01
Abstract Background: A nutritional assessment method that is quick and easy to conduct would be extremely useful in a complex emergency, where currently there is no agreed practical and acceptable method. Hair pluckability has been suggested as a useful method of assessing protein nutritional status. The aim was to investigate the reliability of the trichotillometer and to explore the effects of patient characteristics on hair epilation force. Methods: Three observers plucked hair from twelve participants to investigate the within- and between-observer reliability. To investigate the effect of patient characteristics on hair pluckability, 12 black African and 12 white volunteers were recruited. Participants completed a short questionnaire to provide basic information on their characteristics and hair. Results: The mean hair pluckability measurements for the 12 participants obtained by the three observers (39.5 g, 41.2 g and 32.7 g) were significantly different. Conclusion: Due to the significant variation in measurements, hair pluckability does not appear to be a reliable method for assessing adult nutritional status. Hair pluckability could be a useful method of nutritional assessment in complex humanitarian emergencies, but only if its reliability were improved.
METHOD ON ESTIMATION OF DRUG'S PENETRATED PARAMETERS
刘宇红; 曾衍钧; 许景锋; 张梅
2004-01-01
Transdermal drug delivery systems (TDDS) are a new method for drug delivery. Analysis of a large number of in vitro experiments can lead to a suitable mathematical model describing the process of a drug's penetration through the skin, together with the important parameters related to the characteristics of the drug. After examining the experimental data, a suitable nonlinear regression model was selected. Using this model, the most important parameter, the penetration coefficient, was computed for 20 drugs. The results support the theory that the skin can be regarded as a single membrane.
Understanding Rasch measurement: estimation methods for Rasch measures.
Linacre, J M
1999-01-01
Rasch parameter estimation methods can be classified as non-iterative and iterative. Non-iterative methods include the normal approximation algorithm (PROX) for complete dichotomous data. Iterative methods fall into 3 types. Datum-by-datum methods include Gaussian least-squares, minimum chi-square, and the pairwise (PAIR) method. Marginal methods without distributional assumptions include conditional maximum-likelihood estimation (CMLE), joint maximum-likelihood estimation (JMLE) and log-linear approaches. Marginal methods with distributional assumptions include marginal maximum-likelihood estimation (MMLE) and the normal approximation algorithm (PROX) for missing data. Estimates from all methods are characterized by standard errors and quality-control fit statistics. Standard errors can be local (defined relative to the measure of a particular item) or general (defined relative to the abstract origin of the scale). They can also be ideal (as though the data fit the model) or inflated by the misfit to the model present in the data. Five computer programs, implementing different estimation methods, produce statistically equivalent estimates. Nevertheless, comparing estimates from different programs requires care.
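As an example of the non-iterative class, the PROX approximation for complete dichotomous data can be sketched as follows. The expansion factor assumes normally distributed person abilities; the item scores and person standard deviation below are hypothetical inputs for illustration:

```python
import math

def prox_item_difficulties(item_scores, n_persons, person_sd_logits=1.0):
    """Non-iterative PROX sketch for complete dichotomous data.

    item_scores: number of correct responses per item (0 < s < n_persons).
    Difficulty of item i is proportional to ln((N - s_i) / s_i), scaled
    by an expansion factor sqrt(1 + sd^2 / 2.89) that assumes a normal
    person-ability distribution (2.89 = 1.7^2). person_sd_logits is a
    hypothetical prior spread of person measures in logits.
    """
    expansion = math.sqrt(1.0 + person_sd_logits ** 2 / 2.89)
    raw = [expansion * math.log((n_persons - s) / s) for s in item_scores]
    mean = sum(raw) / len(raw)
    return [d - mean for d in raw]  # centre difficulties at zero logits

# An item answered correctly by 80/100 persons is easy (negative logit),
# one answered by only 20/100 is hard (positive logit).
diffs = prox_item_difficulties([80, 50, 20], n_persons=100)
print([round(d, 2) for d in diffs])  # [-1.61, 0.0, 1.61]
```

Unlike the iterative methods listed above, PROX needs no convergence loop, which is why it serves as the usual starting point for JMLE-style programs.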
A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application
Zhao Wu
2013-01-01
At present, cloud computing is one of the newest trends in distributed computation and is propelling another important revolution in the software industry. Cloud services composition is one of the key techniques in software development. The optimization of the reliability and performance of a cloud services composition application, a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transactions, as well as characteristics of cloud computing resources such as openness and dynamism. Traditional reliability and performance optimization techniques, for example Markov models and state space analysis, have defects: they are time consuming, easily cause state space explosion, and fail to satisfy the assumption of component execution independence. To overcome these defects, we propose a fast optimization method for the reliability and performance of cloud services composition applications, based on the universal generating function and a genetic algorithm. First, a reliability and performance model for cloud services composition applications based on multi-state system theory is presented. Then a reliability and performance definition based on the universal generating function is proposed. Based on this, a fast reliability and performance optimization algorithm is presented. Finally, illustrative examples are given.
Qinghai Zhao
2015-01-01
A mathematical framework is developed which integrates the reliability concept into topology optimization to solve reliability-based topology optimization (RBTO) problems under uncertainty. Two typical methodologies have been presented and implemented: the performance measure approach (PMA) and sequential optimization and reliability assessment (SORA). To enhance the computational efficiency of the reliability analysis, the stochastic response surface method (SRSM) is applied to approximate the true limit state function with respect to the normalized random variables, combined with a reasonable design of experiments generated by sparse grid design (SGD), which has proven to be an effective and special discretization technique. Uncertainties such as material properties and external loads are considered in three numerical examples: a cantilever beam, a loaded knee structure, and a heat conduction problem. Monte Carlo simulations are also performed to verify the accuracy of the failure probabilities computed by the proposed approach. Based on the results, it is demonstrated that applying SRSM with SGD produces an efficient reliability analysis in RBTO and enables a more reliable design than that obtained by deterministic topology optimization (DTO). It is also found that, at identical accuracy, SORA is superior to PMA in terms of computational efficiency.
Robust time and frequency domain estimation methods in adaptive control
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Monica C Junkes
The aim of the present study was to translate and cross-culturally adapt the Rapid Estimate of Adult Literacy in Dentistry to Brazilian Portuguese and to test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983), with Kappa coefficients ranging from moderate to nearly perfect. In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.
Cavuoti, Stefano; Brescia, Massimo; Vellucci, Civita; Tortora, Crescenzo; Longo, Giuseppe
2016-01-01
A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z's). A plethora of methods have been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), because the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use...
Reliability Estimation for Rolling Bearings Based on Virtual Information
楼洪梁; 陈磊; 李兴林; 但召江; 陈炳顺
2015-01-01
To improve the credibility and stability of reliability estimates at censored time points of bearings when zero-failure data appear in rolling bearing censored tests, a reliability calculation method is proposed that introduces, at each censored time point, the virtual failure information of the zero-failure sample from the previous censored time point. Example analysis shows that, under different hyperparameter values, the characteristic life and shape parameter estimates obtained with this method exhibit the smallest fluctuation, giving better stability than other methods.
Methods of gas hydrate concentration estimation with field examples
Kumar, D.; Dash, R.; Dewangan, P.
Different methods of gas hydrate concentration estimation that make use of data from measurements of seismic properties, electrical resistivity, chlorinity, porosity, density, and temperature are summarized in this paper. We demonstrate the methods...
A least squares estimation method for the linear learning model
B. Wierenga (Berend)
1978-01-01
The author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a
Devosa, Iván; Kozinszky, Zoltán; Vanya, Melinda; Szili, Károly; Fáyné Dombi, Alice; Barabás, Katalin
2016-04-03
Promiscuity and lack of use of reliable contraceptive methods increase the probability of sexually transmitted diseases and the risk of unwanted pregnancies, which are quite common among university students. The aim of the study was to assess the knowledge of university students about reliable contraceptive methods and sexually transmitted diseases, and to assess the effectiveness of sexual health education in secondary schools, with a specific focus on education delivered by peers. An anonymous, self-administered questionnaire survey was carried out in a randomized sample of students at the University of Szeged (n = 472; 298 women and 174 men; average age 21 years) between 2009 and 2011. Of the respondents, 62.1% declared that reproductive health education lessons held by peers in high school were a reliable and authentic source of information, 12.3% considered them a less reliable source, and 25.6% regarded school health education as an irrelevant source. Among those who considered peer-led health education a reliable source, there were significantly more females (69.3% vs. 46.6%, p = 0.001), significantly fewer lived in cities (83.6% vs. 94.8%, p = 0.025), and significantly more knew that Candida infection can be transmitted through sexual intercourse (79.5% vs. 63.9%, p = 0.02), compared with those who did not consider peer-led health education reliable. The majority of respondents obtained knowledge about sexual issues from the mass media. Young people who considered health-education programs reliable were significantly better informed about Candida disease.
Andreas Berner; Tariq Saleem Alharbi; Eric Carlström; Amir Khorram-Manesh
2015-01-01
Objective: To develop a validated and generalized collaborative tool to be utilized by high-reliability organizations to conduct common resource assessment before major events and mass gatherings. Methods: The Swedish resource and risk estimation guide was used as the foundation for the development of the generalized collaborative tool by three different expert groups, and then analyzed. Inter-rater reliability was analyzed through simulated cases using weighted and unweighted κ-statistics. Results: The results revealed a mean unweighted κ-value of 0.44 across the three cases and a mean accuracy of 61% for the tool. Conclusions: This study demonstrated better collaboration ability and more accurate resource assessment, with acceptable reliability and validity, as a foundation for resource assessment before major events/mass gatherings in a simulated environment. However, the results also indicate the challenges of deriving measurable values from simulated cases. A study on real events could provide higher reliability but, on the other hand, requires an already developed tool.
Carbon footprint: current methods of estimation.
Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker
2011-07-01
Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing grievous global warming and its associated consequences. Following the rule that only the measurable is manageable, measurement of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculation are still evolving, and carbon footprinting is emerging as an important tool for greenhouse gas management. The concept has permeated and is being commercialized in all areas of life and the economy, but there is little coherence among studies in the definitions and calculations of carbon footprints. There are disagreements in the selection of gases and in the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.
A Comparative Study of Distribution System Parameter Estimation Methods
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
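State-vector augmentation treats an unknown network parameter as an extra state with (near-)constant dynamics and lets a Kalman filter refine it across measurement snapshots. A minimal scalar sketch of the idea, with a toy model and made-up noise levels (the feeder model, parameter, and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration: estimate an unknown line resistance r from noisy
# voltage-drop measurements v = r * i + noise, treating r as an augmented
# state with random-walk dynamics. Each (i, v) pair plays the role of a
# measurement snapshot.
r_true = 0.42
currents = rng.uniform(5.0, 15.0, size=200)   # known injected currents
volts = r_true * currents + rng.normal(0.0, 0.05, size=200)

r_hat, p = 1.0, 1.0              # initial parameter estimate and variance
q, meas_var = 1e-6, 0.05 ** 2    # process and measurement noise variances
for i, v in zip(currents, volts):
    p += q                               # predict: random-walk parameter
    k = p * i / (i * i * p + meas_var)   # Kalman gain (observation h = i)
    r_hat += k * (v - i * r_hat)         # update with the innovation
    p *= (1.0 - k * i)

print(round(r_hat, 3))  # converges toward 0.42
```

Combining many snapshots is what raises the effective redundancy, which is the point the abstract makes for low-redundancy distribution systems.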
Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods
Yang, Bang-Hung; Tsai, Sung-Yi; Wang, Shyh-Jen; Su, Tung-Ping; Chou, Yuan-Hwa; Chen, Chia-Chieh; Chen, Jyh-Cheng
2011-08-01
The level of serotonin is regulated by the serotonin transporter (SERT), a decisive protein in the regulation of the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find differences in the cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were obtained through the filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated in a voxel-by-voxel analysis using a paired t-test (population main effect: 2 conditions, 1 scan/condition), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum − 1) of 123I-ADAM binding to SERT was 1.78 ± 0.27 in the midbrain, 1.21 ± 0.53 in the pons, and 0.79 ± 0.13 in the striatum. The Cronbach's α of the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no significant statistical finding in the cerebral area using SPM2 analysis. This finding might help us
An efficient hybrid reliability analysis method with random and interval variables
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2016-09-01
Random and interval variables often coexist. Interval variables make reliability analysis much more computationally intensive. This work develops a new hybrid reliability analysis method so that the probability analysis (PA) loop and interval analysis (IA) loop are decomposed into two separate loops. An efficient PA algorithm is employed, and a new efficient IA method is developed. The new IA method consists of two stages. The first stage is for monotonic limit-state functions. If the limit-state function is not monotonic, the second stage is triggered. In the second stage, the limit-state function is sequentially approximated with a second order form, and the gradient projection method is applied to solve the extreme responses of the limit-state function with respect to the interval variables. The efficiency and accuracy of the proposed method are demonstrated by three examples.
Joint 2-D DOA and Noncircularity Phase Estimation Method
Wang Ling
2012-03-01
Classical joint estimation methods require a large amount of computation and a multidimensional search. To avoid these shortcomings, a novel joint two-dimensional (2-D) direction of arrival (DOA) and noncircularity phase estimation method based on three orthogonal linear arrays is proposed. The 3-D parameter estimation problem can be transformed into three parallel 2-D parameter estimations according to the characteristics of the three orthogonal linear arrays. Furthermore, each 2-D parameter estimation can be reduced to 1-D parameter estimation by simultaneously using the rotational invariance property of the signal subspace and the orthogonality of the noise subspace in every subarray. Ultimately, the algorithm realizes joint estimation and pairing of parameters through a single eigen-decomposition of the extended covariance matrix. The proposed algorithm is applicable to low-SNR and small-snapshot scenarios and can estimate 2(M − 1) signals. Simulation results verify that the proposed algorithm is effective.
A Fast LMMSE Channel Estimation Method for OFDM Systems
Zhou Wen
2009-01-01
A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for orthogonal frequency division multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require prior statistical knowledge of the channel and avoids the inversion of a large-dimension matrix by using the fast Fourier transform (FFT), so the computational complexity is reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and of conventional LMMSE estimation are derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
Comparison of reliability techniques of parametric and non-parametric method
C. Kalaiselvan
2016-06-01
Reliability of a product or system is the probability that the product performs its intended function adequately for a stated period of time under stated operating conditions; it is a function of time. The widely used nano-ceramic capacitors C0G and X7R are used in this reliability study to generate time-to-failure (TTF) data. The time-to-failure data are obtained by Accelerated Life Testing (ALT) and Highly Accelerated Life Testing (HALT). The tests are conducted at high stress levels to generate more failures within a short interval of time. Both parametric and non-parametric reliability methods are used to convert accelerated-condition results to actual conditions. In this paper, a comparative study of the parametric and non-parametric methods for analyzing the failure data has been carried out. The Weibull distribution is used for the parametric method; the Kaplan–Meier and Simple Actuarial methods are used for the non-parametric method. The mean time to failure (MTTF) identified under accelerated conditions is the same for the parametric and non-parametric methods, within a small relative deviation.
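The Kaplan–Meier estimator mentioned above builds the survival curve as a product over failure times of (at-risk − failures)/at-risk, with censored units simply leaving the risk set. A minimal sketch on made-up (time, observed) samples, not the capacitor TTF data from the study:

```python
from itertools import groupby

# Each sample is (time, observed): observed=1 is a failure,
# observed=0 a right-censored unit (illustrative data only).
data = [(10, 1), (14, 1), (14, 0), (16, 1), (20, 0), (25, 1)]

def kaplan_meier(samples):
    """Return [(time, survival)] at each distinct failure time."""
    samples = sorted(samples)
    at_risk = len(samples)
    s = 1.0
    curve = []
    for t, group in groupby(samples, key=lambda x: x[0]):
        group = list(group)
        deaths = sum(obs for _, obs in group)
        if deaths:
            s *= (at_risk - deaths) / at_risk  # survival drops at failures
            curve.append((t, s))
        at_risk -= len(group)                  # censored units leave risk set
    return curve

print(kaplan_meier(data))
```

Units censored at the same time as a failure are kept in the risk set for that failure, following the usual convention that censoring occurs after deaths at tied times.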
AlBarakati, SF; Kula, KS; Ghoneima, AA
2012-01-01
Objective: The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods: A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner with a 6-week interval between measurements. Reliability within each method was determined using Pearson's correlation coefficient (r²). Reproducibility between methods was assessed by paired t-test. The level of statistical significance was set at p < 0.05. Results: All measurements for each method had r² above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements, except for the ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3) and lower anterior facial height (p = 0.6). Conclusion: In general, both conventional and digital cephalometric analysis methods are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624
The comparability and reliability of five health-state valuation methods.
Krabbe, P F; Essink-Bot, M L; Bonsel, G J
1997-12-01
The objective of the study was to consider five methods for valuing health states with respect to their comparability (convergent validity, value functions) and reliability. Valuation tasks were performed by 104 student volunteers using five frequently used valuation methods: standard gamble (SG), time trade-off (TTO), rating scale (RS), willingness-to-pay (WTP) and the paired comparisons method (PC). Throughout the study, the EuroQol classification system was used to construct 13 health-state descriptions. Validity was investigated using the multitrait-multimethod (MTMM) methodology. The extent to which results of one method could be predicted by another was examined by transformations. Reliability of the methods was studied parametrically with Generalisability Theory (an ANOVA extension), as well as non-parametrically. Mean values for SG were slightly higher than TTO values. The RS could be distinguished from the other methods. After a simple power transformation, the RS values were found to be close to SG and TTO. Mean values of WTP were linearly related to SG and TTO, except at the extremes of the scale. However, the reliability of WTP was low and the number of inconsistencies substantial. Valuations made by the RS proved to be the most reliable. Paired comparisons did not provide stable results. In conclusion, the results of the parametric transformation function between RS and SG/TTO provide evidence to justify the current use of RS (with transformations) not only for reasons of feasibility and reliability but also for reasons of comparability. A definite judgement on PC requires data of a complete design. Due to the specific structure of the correlation matrix which is inherent in valuing health states, we believe that full MTMM is not applicable for the standard analysis of health-state valuations.
Akhmetova, I. G.; Chichirova, N. D.
2016-12-01
Heat supply is the most energy-consuming sector of the economy: approximately 30% of all primary fuel-and-energy resources used are spent on municipal heat-supply needs. One of the key indicators of the activity of heat-supply organizations is the reliability of the energy facility; the reliability index of a heat-supply organization is of interest to potential investors for assessing risks when investing in projects. The reliability indices established by federal legislation in effect reduce to a single numerical factor, which depends on the number of heat-supply outages caused by disturbances in the operation of heat networks and on the volume of their resource recovery in the calculation year. This factor is rather subjective and may vary over a wide range within a few years. A technique is proposed for evaluating the reliability of heat-supply organizations using the simple additive weighting (SAW) method. The technique for determining the integrated index satisfies the following conditions: the reliability level of the evaluated heat-supply system is represented as fully and objectively as possible, and the information used for the evaluation is easily available (it is published on the Internet in accordance with data-disclosure standards). For the reliability estimation of heat-supply organizations, the following indicators were selected: the wear of the equipment of thermal energy sources; the wear of heat networks; the number of outages of thermal energy (heat carrier) supply due to technological disturbances in heat networks, per 1 km of heat network; the number of outages of thermal energy (heat carrier) supply due to technological disturbances at thermal energy sources, per 1 Gcal/h of installed power; the share of expenditures in the cost of thermal energy aimed at resource recovery (renewal of fixed assets); the coefficient of renewal of fixed assets; and the coefficient of fixed-asset retirement. A versatile program is developed
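The SAW step itself is a weighted sum of normalized indicator values, with cost-type indicators (where lower is better) inverted during normalization. A sketch with hypothetical organizations, indicators, and weights (the paper's actual indicator set and expert weights are not reproduced here):

```python
import numpy as np

# Rows: heat-supply organizations; columns: indicators
# [network wear %, equipment wear %, outages per km, renewal share]
# (values and weights are invented for illustration)
x = np.array([
    [55.0, 60.0, 0.8, 0.12],
    [70.0, 40.0, 1.5, 0.08],
    [40.0, 50.0, 0.5, 0.20],
])
weights = np.array([0.3, 0.2, 0.3, 0.2])    # assumed expert weights
cost = np.array([True, True, True, False])  # lower is better for first three

# Linear normalization to [0, 1], inverting cost-type criteria:
# min/x for cost indicators, x/max for benefit indicators.
norm = np.where(cost, x.min(0) / x, x / x.max(0))
scores = norm @ weights
print(scores.argmax())  # index of the highest-ranked organization
```

With these invented numbers, the third organization (lowest wear and outage rate, highest renewal share) ranks first; real rankings depend entirely on the chosen indicators and weights.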
Computing interval-valued reliability measures: application of optimal control methods
Kozin, Igor; Krymsky, Victor
2017-01-01
The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular Pontryagin's maximum principle, to solve the non-linear optimisation problem and derive...
Moment Method Based on Fuzzy Reliability Sensitivity Analysis for a Degradable Structural System
Song Jun; Lu Zhenzhou
2008-01-01
For a degradable structural system with a fuzzy failure region, a moment method based on a fuzzy reliability sensitivity algorithm is presented. According to the values of the performance function, the integration region for calculating the fuzzy failure probability is first split into a series of subregions in which the membership function values of the performance function within the fuzzy failure region can be approximated by a set of constants. The fuzzy failure probability is then transformed into a sum of products of the random failure probabilities and the approximate constants of the membership function in the subregions. Furthermore, the fuzzy reliability sensitivity analysis is transformed into a series of random reliability sensitivity analyses, and the random reliability sensitivity can be obtained by the constructed moment method. The primary advantages of the presented method are higher efficiency for implicit performance functions of low and medium dimensionality and wide applicability to multiple failure modes and non-normal basic random variables. The limitation is that the required computational effort grows exponentially with the dimensionality of the basic random variables; hence, it is not suitable for high-dimensionality problems. Compared with the available methods, the presented one is quite competitive when the dimensionality is lower than 10. Examples are presented to verify the advantages and indicate the limitations.
Quan, Xueling; Yi, Sun; Heiskanen, Arto
Biosensing systems based on detecting changes in cantilever surface stress have attracted great interest. To achieve high reliability of measurements, high quality and high reproducibility in functionalization of the sensor surface are key points. In this paper, we investigate different methods t...
A Fault-tolerance Estimating Method for Ionosphere Corrections in Satellite Navigation System
GAO Shuliang; LI Rui; HUANG Zhigang
2011-01-01
Aiming at reliable estimates of ionosphere differential corrections for satellite navigation systems in the presence of ionosphere anomalies, a fault-tolerant estimation method based on distributed Kalman filtering is proposed. The method utilizes parallel sub-filters to estimate the ionosphere differential corrections. Meanwhile, an infinite-norm (IN) method is proposed for detecting ionosphere irregularities during filter processing. Once an anomaly is detected, the sub-filter contaminated by the anomalous measurements is excluded to ensure the reliability of the estimates. A simulation conducted to validate the method indicates that anomalies can be detected in a timely manner by the infinite-norm-based fault detection. Because of the parallel sub-filter architecture, the measurements are classified by spatial distribution, so an ionosphere anomaly can be located and excluded more easily. Thus, the method provides robust and accurate ionosphere differential corrections.
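An infinite-norm test of the kind described above amounts to flagging a set of normalized filter innovations whenever the largest one exceeds a threshold. A minimal sketch with an illustrative threshold and made-up residual values (none taken from the paper):

```python
import numpy as np

def anomaly_detected(innovations, sigma, k=5.0):
    """Flag if any normalized innovation exceeds k standard deviations,
    i.e. if the infinity norm of the scaled innovation vector is > k."""
    return np.linalg.norm(innovations / sigma, ord=np.inf) > k

sigma = 0.25                                 # assumed innovation std. dev.
quiet = np.array([0.1, -0.2, 0.05, 0.3])     # nominal residuals
storm = np.array([0.1, -0.2, 2.1, 0.3])      # one corrupted measurement
print(anomaly_detected(quiet, sigma), anomaly_detected(storm, sigma))
```

Unlike a chi-square test on the summed residual, the infinity norm reacts to a single corrupted measurement, which fits the per-sub-filter exclusion logic the abstract describes.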
A novel TOA estimation method with effective NLOS error reduction
ZHANG Yi-heng; CUI Qi-mei; LI Yu-xiang; ZHANG Ping
2008-01-01
It is well known that non-line-of-sight (NLOS) error has been the major factor impeding the enhancement of accuracy for time of arrival (TOA) estimation and wireless positioning. This article proposes a novel method of TOA estimation that effectively reduces the NLOS error by 60% compared with the traditional timing and synchronization method. By constructing orthogonal training sequences, this method converts the traditional TOA estimation into the detection of the first arrival path (FAP) in the NLOS multipath environment, and then estimates the TOA by round-trip transmission (RTT) technology. Both theoretical analysis and numerical simulations prove that the proposed method achieves better performance than the traditional methods.
Limits to the reliability of size-based fishing status estimation for data-poor stocks
Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders
2015-01-01
in more than 60% of the cases, and almost always correctly assess whether a stock is subject to overfishing. Adding information about age, i.e., assuming that growth rate and asymptotic size are known, does not improve the estimation. Only knowledge of the ratio between mortality and growth led...
Reliability-based weighting of visual and vestibular cues in displacement estimation
Horst, A.C. ter; Koppen, M.G.M.; Selen, L.P.J.; Medendorp, W.P.
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimat
Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods
Humberto Muñoz
2009-06-01
The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real-time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee that the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results compare the least squares (l2) and the least absolute value (l1) estimates using interval arithmetic in a chemical engineering application.
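The contrast between the l2 and l1 criteria is easy to demonstrate: one gross outlier shifts a least squares fit noticeably but barely moves a least-absolute-value fit. The sketch below obtains the l1 fit by iteratively reweighted least squares (a common local scheme) rather than the paper's interval-arithmetic approach, which is what provides guaranteed global optima:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fit y = a*x + b to synthetic data containing one gross outlier.
x = np.linspace(0, 10, 21)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
y[10] += 30.0                      # inject one large measurement error

A = np.column_stack([x, np.ones_like(x)])

# l2: ordinary least squares.
a2, b2 = np.linalg.lstsq(A, y, rcond=None)[0]

# l1: iteratively reweighted least squares, weights 1/|residual|.
a1, b1 = a2, b2
for _ in range(50):
    r = np.abs(y - A @ np.array([a1, b1]))
    w = 1.0 / np.maximum(r, 1e-8)      # floor avoids division by zero
    aw = A * w[:, None]                # A^T W A and A^T W y below
    a1, b1 = np.linalg.solve(A.T @ aw, A.T @ (w * y))

print(round(a1, 2), round(b1, 2))  # l1 fit stays near a=2, b=1
```

The l2 intercept is pulled upward by the outlier, while the l1 fit essentially ignores it; this robustness is one motivation for the l1 criterion in the abstract.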
Ezure, Hideo
1988-09-01
Effective combination of measured data with theoretical analysis has permitted deriving a method for more accurately estimating the power distribution in BWRs. Use is made of the least squares method to combine the relationship of the power distribution with measured values and the model used in FLARE or in the three-dimensional two-group diffusion code. Trial application of the new method to estimating the power distribution in JPDR-1 has proved that the method provides reliable results.